2025-05-26 03:00:46.963758 | Job console starting
2025-05-26 03:00:46.972921 | Updating git repos
2025-05-26 03:00:47.281427 | Cloning repos into workspace
2025-05-26 03:00:47.633476 | Restoring repo states
2025-05-26 03:00:47.680591 | Merging changes
2025-05-26 03:00:47.680604 | Checking out repos
2025-05-26 03:00:48.237275 | Preparing playbooks
2025-05-26 03:00:49.442984 | Running Ansible setup
2025-05-26 03:00:54.500149 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-26 03:00:55.225990 |
2025-05-26 03:00:55.226125 | PLAY [Base pre]
2025-05-26 03:00:55.243154 |
2025-05-26 03:00:55.243279 | TASK [Setup log path fact]
2025-05-26 03:00:55.264011 | orchestrator | ok
2025-05-26 03:00:55.285499 |
2025-05-26 03:00:55.285638 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-26 03:00:55.325498 | orchestrator | ok
2025-05-26 03:00:55.337705 |
2025-05-26 03:00:55.337827 | TASK [emit-job-header : Print job information]
2025-05-26 03:00:55.388203 | # Job Information
2025-05-26 03:00:55.388366 | Ansible Version: 2.16.14
2025-05-26 03:00:55.388402 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-05-26 03:00:55.388436 | Pipeline: periodic-daily
2025-05-26 03:00:55.388461 | Executor: 521e9411259a
2025-05-26 03:00:55.388484 | Triggered by: https://github.com/osism/testbed
2025-05-26 03:00:55.388508 | Event ID: c48be65eb93e49be8bdebddad8f68175
2025-05-26 03:00:55.396766 |
2025-05-26 03:00:55.396892 | LOOP [emit-job-header : Print node information]
2025-05-26 03:00:55.532409 | orchestrator | ok:
2025-05-26 03:00:55.532645 | orchestrator | # Node Information
2025-05-26 03:00:55.532681 | orchestrator | Inventory Hostname: orchestrator
2025-05-26 03:00:55.532706 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-26 03:00:55.532801 | orchestrator | Username: zuul-testbed03
2025-05-26 03:00:55.532833 | orchestrator | Distro: Debian 12.11
2025-05-26 03:00:55.532869 | orchestrator | Provider: static-testbed
2025-05-26 03:00:55.532892 | orchestrator | Region:
2025-05-26 03:00:55.532913 | orchestrator | Label: testbed-orchestrator
2025-05-26 03:00:55.532933 | orchestrator | Product Name: OpenStack Nova
2025-05-26 03:00:55.532953 | orchestrator | Interface IP: 81.163.193.140
2025-05-26 03:00:55.553273 |
2025-05-26 03:00:55.553382 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-26 03:00:56.031763 | orchestrator -> localhost | changed
2025-05-26 03:00:56.038717 |
2025-05-26 03:00:56.038806 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-26 03:00:57.117011 | orchestrator -> localhost | changed
2025-05-26 03:00:57.133656 |
2025-05-26 03:00:57.133771 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-26 03:00:57.415267 | orchestrator -> localhost | ok
2025-05-26 03:00:57.422345 |
2025-05-26 03:00:57.422463 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-26 03:00:57.443235 | orchestrator | ok
2025-05-26 03:00:57.463071 | orchestrator | included: /var/lib/zuul/builds/83c4ae87e4be4185b05ca966758d4263/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-26 03:00:57.472826 |
2025-05-26 03:00:57.472956 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-26 03:00:58.287073 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-26 03:00:58.287352 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/83c4ae87e4be4185b05ca966758d4263/work/83c4ae87e4be4185b05ca966758d4263_id_rsa
2025-05-26 03:00:58.287394 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/83c4ae87e4be4185b05ca966758d4263/work/83c4ae87e4be4185b05ca966758d4263_id_rsa.pub
2025-05-26 03:00:58.287591 | orchestrator -> localhost | The key fingerprint is:
2025-05-26 03:00:58.287625 | orchestrator -> localhost | SHA256:13c28kKmuR/nZIopmGN4t5Gu3DWvgUwEn2WCvfGHNyg zuul-build-sshkey
2025-05-26 03:00:58.287650 | orchestrator -> localhost | The key's randomart image is:
2025-05-26 03:00:58.287686 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-26 03:00:58.287709 | orchestrator -> localhost | | .o. o |
2025-05-26 03:00:58.287732 | orchestrator -> localhost | | .oo= |
2025-05-26 03:00:58.287752 | orchestrator -> localhost | | ++ o |
2025-05-26 03:00:58.287772 | orchestrator -> localhost | | .E = + |
2025-05-26 03:00:58.287792 | orchestrator -> localhost | | S.o ++o.o|
2025-05-26 03:00:58.287822 | orchestrator -> localhost | | o.o =.oo.|
2025-05-26 03:00:58.287876 | orchestrator -> localhost | | . o= * o = |
2025-05-26 03:00:58.287900 | orchestrator -> localhost | | ..*ooo B O |
2025-05-26 03:00:58.287922 | orchestrator -> localhost | | oo++o=o+ . |
2025-05-26 03:00:58.287943 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-26 03:00:58.288010 | orchestrator -> localhost | ok: Runtime: 0:00:00.311178
2025-05-26 03:00:58.298793 |
2025-05-26 03:00:58.298965 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-26 03:00:58.332263 | orchestrator | ok
2025-05-26 03:00:58.347990 | orchestrator | included: /var/lib/zuul/builds/83c4ae87e4be4185b05ca966758d4263/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-26 03:00:58.360522 |
2025-05-26 03:00:58.360696 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-26 03:00:58.386879 | orchestrator | skipping: Conditional result was False
2025-05-26 03:00:58.398916 |
2025-05-26 03:00:58.399042 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-26 03:00:59.030293 | orchestrator | changed
2025-05-26 03:00:59.037475 |
2025-05-26 03:00:59.037601 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-26 03:00:59.322404 | orchestrator | ok
2025-05-26 03:00:59.332282 |
2025-05-26 03:00:59.332421 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-26 03:00:59.773770 | orchestrator | ok
2025-05-26 03:00:59.782731 |
2025-05-26 03:00:59.782907 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-26 03:01:00.356008 | orchestrator | ok
2025-05-26 03:01:00.372013 |
2025-05-26 03:01:00.372153 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-26 03:01:00.400116 | orchestrator | skipping: Conditional result was False
2025-05-26 03:01:00.426302 |
2025-05-26 03:01:00.426624 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-26 03:01:01.046153 | orchestrator -> localhost | changed
2025-05-26 03:01:01.082143 |
2025-05-26 03:01:01.082304 | TASK [add-build-sshkey : Add back temp key]
2025-05-26 03:01:01.556793 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/83c4ae87e4be4185b05ca966758d4263/work/83c4ae87e4be4185b05ca966758d4263_id_rsa (zuul-build-sshkey)
2025-05-26 03:01:01.557105 | orchestrator -> localhost | ok: Runtime: 0:00:00.014778
2025-05-26 03:01:01.589614 |
2025-05-26 03:01:01.589901 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-26 03:01:02.045567 | orchestrator | ok
2025-05-26 03:01:02.052221 |
2025-05-26 03:01:02.052362 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-26 03:01:02.087155 | orchestrator | skipping: Conditional result was False
2025-05-26 03:01:02.151955 |
2025-05-26 03:01:02.152115 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-26 03:01:02.600429 | orchestrator | ok
2025-05-26 03:01:02.625875 |
2025-05-26 03:01:02.626028 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-26 03:01:02.693036 | orchestrator | ok
2025-05-26 03:01:02.700927 |
2025-05-26 03:01:02.701063 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-26 03:01:03.117473 | orchestrator -> localhost | ok
2025-05-26 03:01:03.131199 |
2025-05-26 03:01:03.131384 | TASK [validate-host : Collect information about the host]
2025-05-26 03:01:04.648562 | orchestrator | ok
2025-05-26 03:01:04.666095 |
2025-05-26 03:01:04.666255 | TASK [validate-host : Sanitize hostname]
2025-05-26 03:01:04.726097 | orchestrator | ok
2025-05-26 03:01:04.731959 |
2025-05-26 03:01:04.732080 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-26 03:01:05.395602 | orchestrator -> localhost | changed
2025-05-26 03:01:05.402796 |
2025-05-26 03:01:05.402987 | TASK [validate-host : Collect information about zuul worker]
2025-05-26 03:01:05.919014 | orchestrator | ok
2025-05-26 03:01:05.927794 |
2025-05-26 03:01:05.927958 | TASK [validate-host : Write out all zuul information for each host]
2025-05-26 03:01:06.613889 | orchestrator -> localhost | changed
2025-05-26 03:01:06.633646 |
2025-05-26 03:01:06.636228 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-26 03:01:06.940880 | orchestrator | ok
2025-05-26 03:01:06.952963 |
2025-05-26 03:01:06.953124 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-26 03:01:28.609753 | orchestrator | changed:
2025-05-26 03:01:28.610029 | orchestrator | .d..t...... src/
2025-05-26 03:01:28.610082 | orchestrator | .d..t...... src/github.com/
2025-05-26 03:01:28.610120 | orchestrator | .d..t...... src/github.com/osism/
2025-05-26 03:01:28.610151 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-26 03:01:28.610177 | orchestrator | RedHat.yml
2025-05-26 03:01:28.625443 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-26 03:01:28.625464 | orchestrator | RedHat.yml
2025-05-26 03:01:28.625560 | orchestrator | = 1.53.0"...
2025-05-26 03:01:43.228427 | orchestrator | 03:01:43.228 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-05-26 03:01:43.309343 | orchestrator | 03:01:43.309 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-05-26 03:01:44.948963 | orchestrator | 03:01:44.948 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-05-26 03:01:46.236618 | orchestrator | 03:01:46.236 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-05-26 03:01:47.644820 | orchestrator | 03:01:47.644 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-05-26 03:01:48.782352 | orchestrator | 03:01:48.782 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-05-26 03:01:49.706007 | orchestrator | 03:01:49.705 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-26 03:01:50.682570 | orchestrator | 03:01:50.682 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-26 03:01:50.682709 | orchestrator | 03:01:50.682 STDOUT terraform: Providers are signed by their developers.
2025-05-26 03:01:50.682730 | orchestrator | 03:01:50.682 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-26 03:01:50.682743 | orchestrator | 03:01:50.682 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-26 03:01:50.682755 | orchestrator | 03:01:50.682 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-26 03:01:50.682785 | orchestrator | 03:01:50.682 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-26 03:01:50.682809 | orchestrator | 03:01:50.682 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-26 03:01:50.682828 | orchestrator | 03:01:50.682 STDOUT terraform: you run "tofu init" in the future.
2025-05-26 03:01:50.683380 | orchestrator | 03:01:50.683 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-26 03:01:50.683438 | orchestrator | 03:01:50.683 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-26 03:01:50.683486 | orchestrator | 03:01:50.683 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-26 03:01:50.683500 | orchestrator | 03:01:50.683 STDOUT terraform: should now work.
2025-05-26 03:01:50.683554 | orchestrator | 03:01:50.683 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-26 03:01:50.683607 | orchestrator | 03:01:50.683 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-26 03:01:50.683645 | orchestrator | 03:01:50.683 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-26 03:01:50.877997 | orchestrator | 03:01:50.877 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-05-26 03:01:51.077635 | orchestrator | 03:01:51.077 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-26 03:01:51.077745 | orchestrator | 03:01:51.077 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-26 03:01:51.077910 | orchestrator | 03:01:51.077 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-26 03:01:51.077933 | orchestrator | 03:01:51.077 STDOUT terraform: for this configuration.
2025-05-26 03:01:51.303363 | orchestrator | 03:01:51.303 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-05-26 03:01:51.410897 | orchestrator | 03:01:51.410 STDOUT terraform: ci.auto.tfvars
2025-05-26 03:01:51.415780 | orchestrator | 03:01:51.415 STDOUT terraform: default_custom.tf
2025-05-26 03:01:51.614408 | orchestrator | 03:01:51.614 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-05-26 03:01:52.628223 | orchestrator | 03:01:52.628 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
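The "Finding"/"Installing" lines above come from provider constraints declared in the testbed's OpenTofu configuration. As a hedged sketch only (the actual file in osism/testbed is not shown in this log, and attributing the truncated `= 1.53.0"...` fragment to the openstack provider is an inference from the install list), a `required_providers` block that would produce this resolution looks like:

```hcl
terraform {
  required_providers {
    # Assumption: the truncated '= 1.53.0"...' constraint fragment in the log
    # belongs to this provider; v3.1.0 satisfies ">= 1.53.0".
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    # "Finding latest version of hashicorp/null" implies no version constraint.
    null = {
      source = "hashicorp/null"
    }
  }
}
```

After `tofu init`, the resolved versions (openstack v3.1.0, local v2.5.3, null v3.2.4) are pinned in the `.terraform.lock.hcl` file mentioned in the output above.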
2025-05-26 03:01:53.269301 | orchestrator | 03:01:53.268 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-05-26 03:01:53.475725 | orchestrator | 03:01:53.475 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-26 03:01:53.475824 | orchestrator | 03:01:53.475 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-26 03:01:53.475830 | orchestrator | 03:01:53.475 STDOUT terraform:   + create
2025-05-26 03:01:53.475855 | orchestrator | 03:01:53.475 STDOUT terraform:  <= read (data resources)
2025-05-26 03:01:53.475934 | orchestrator | 03:01:53.475 STDOUT terraform: OpenTofu will perform the following actions:
2025-05-26 03:01:53.476089 | orchestrator | 03:01:53.475 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-05-26 03:01:53.476152 | orchestrator | 03:01:53.476 STDOUT terraform:   # (config refers to values not yet known)
2025-05-26 03:01:53.476229 | orchestrator | 03:01:53.476 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-26 03:01:53.476304 | orchestrator | 03:01:53.476 STDOUT terraform:       + checksum    = (known after apply)
2025-05-26 03:01:53.476375 | orchestrator | 03:01:53.476 STDOUT terraform:       + created_at  = (known after apply)
2025-05-26 03:01:53.476448 | orchestrator | 03:01:53.476 STDOUT terraform:       + file        = (known after apply)
2025-05-26 03:01:53.476536 | orchestrator | 03:01:53.476 STDOUT terraform:       + id          = (known after apply)
2025-05-26 03:01:53.476611 | orchestrator | 03:01:53.476 STDOUT terraform:       + metadata    = (known after apply)
2025-05-26 03:01:53.476682 | orchestrator | 03:01:53.476 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-05-26 03:01:53.476757 | orchestrator | 03:01:53.476 STDOUT terraform:       + min_ram_mb  = (known after apply)
2025-05-26 03:01:53.476806 | orchestrator | 03:01:53.476 STDOUT terraform:       + most_recent = true
2025-05-26 03:01:53.476879 | orchestrator | 03:01:53.476 STDOUT terraform:       + name        = (known after apply)
2025-05-26 03:01:53.476949 | orchestrator | 03:01:53.476 STDOUT terraform:       + protected   = (known after apply)
2025-05-26 03:01:53.477020 | orchestrator | 03:01:53.476 STDOUT terraform:       + region      = (known after apply)
2025-05-26 03:01:53.477088 | orchestrator | 03:01:53.477 STDOUT terraform:       + schema      = (known after apply)
2025-05-26 03:01:53.477170 | orchestrator | 03:01:53.477 STDOUT terraform:       + size_bytes  = (known after apply)
2025-05-26 03:01:53.477242 | orchestrator | 03:01:53.477 STDOUT terraform:       + tags        = (known after apply)
2025-05-26 03:01:53.477313 | orchestrator | 03:01:53.477 STDOUT terraform:       + updated_at  = (known after apply)
2025-05-26 03:01:53.477346 | orchestrator | 03:01:53.477 STDOUT terraform:     }
2025-05-26 03:01:53.477469 | orchestrator | 03:01:53.477 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-05-26 03:01:53.477578 | orchestrator | 03:01:53.477 STDOUT terraform:   # (config refers to values not yet known)
2025-05-26 03:01:53.477664 | orchestrator | 03:01:53.477 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-05-26 03:01:53.477743 | orchestrator | 03:01:53.477 STDOUT terraform:       + checksum    = (known after apply)
2025-05-26 03:01:53.477799 | orchestrator | 03:01:53.477 STDOUT terraform:       + created_at  = (known after apply)
2025-05-26 03:01:53.477873 | orchestrator | 03:01:53.477 STDOUT terraform:       + file        = (known after apply)
2025-05-26 03:01:53.477944 | orchestrator | 03:01:53.477 STDOUT terraform:       + id          = (known after apply)
2025-05-26 03:01:53.478032 | orchestrator | 03:01:53.477 STDOUT terraform:       + metadata    = (known after apply)
2025-05-26 03:01:53.478112 | orchestrator | 03:01:53.478 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-05-26 03:01:53.478198 | orchestrator | 03:01:53.478 STDOUT terraform:       + min_ram_mb  = (known after apply)
2025-05-26 03:01:53.478236 | orchestrator | 03:01:53.478 STDOUT terraform:       + most_recent = true
2025-05-26 03:01:53.478305 | orchestrator | 03:01:53.478 STDOUT terraform:       + name        = (known after apply)
2025-05-26 03:01:53.478393 | orchestrator | 03:01:53.478 STDOUT terraform:       + protected   = (known after apply)
2025-05-26 03:01:53.478466 | orchestrator | 03:01:53.478 STDOUT terraform:       + region      = (known after apply)
2025-05-26 03:01:53.478598 | orchestrator | 03:01:53.478 STDOUT terraform:       + schema      = (known after apply)
2025-05-26 03:01:53.478657 | orchestrator | 03:01:53.478 STDOUT terraform:       + size_bytes  = (known after apply)
2025-05-26 03:01:53.478725 | orchestrator | 03:01:53.478 STDOUT terraform:       + tags        = (known after apply)
2025-05-26 03:01:53.478794 | orchestrator | 03:01:53.478 STDOUT terraform:       + updated_at  = (known after apply)
2025-05-26 03:01:53.478838 | orchestrator | 03:01:53.478 STDOUT terraform:     }
2025-05-26 03:01:53.478909 | orchestrator | 03:01:53.478 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-05-26 03:01:53.479020 | orchestrator | 03:01:53.478 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-05-26 03:01:53.479117 | orchestrator | 03:01:53.479 STDOUT terraform:       + content              = (known after apply)
2025-05-26 03:01:53.479235 | orchestrator | 03:01:53.479 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-05-26 03:01:53.479330 | orchestrator | 03:01:53.479 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-05-26 03:01:53.479415 | orchestrator | 03:01:53.479 STDOUT terraform:       + content_md5          = (known after apply)
2025-05-26 03:01:53.479505 | orchestrator | 03:01:53.479 STDOUT terraform:       + content_sha1         = (known after apply)
2025-05-26 03:01:53.479662 | orchestrator | 03:01:53.479 STDOUT terraform:       + content_sha256       = (known after apply)
2025-05-26 03:01:53.479738 | orchestrator | 03:01:53.479 STDOUT terraform:       + content_sha512       = (known after apply)
2025-05-26 03:01:53.479799 | orchestrator | 03:01:53.479 STDOUT terraform:       + directory_permission = "0777"
2025-05-26 03:01:53.479861 | orchestrator | 03:01:53.479 STDOUT terraform:       + file_permission      = "0644"
2025-05-26 03:01:53.479956 | orchestrator | 03:01:53.479 STDOUT terraform:       + filename             = ".MANAGER_ADDRESS.ci"
2025-05-26 03:01:53.480048 | orchestrator | 03:01:53.479 STDOUT terraform:       + id                   = (known after apply)
2025-05-26 03:01:53.480082 | orchestrator | 03:01:53.480 STDOUT terraform:     }
2025-05-26 03:01:53.480150 | orchestrator | 03:01:53.480 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-05-26 03:01:53.480212 | orchestrator | 03:01:53.480 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-05-26 03:01:53.480306 | orchestrator | 03:01:53.480 STDOUT terraform:       + content              = (known after apply)
2025-05-26 03:01:53.480394 | orchestrator | 03:01:53.480 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-05-26 03:01:53.480483 | orchestrator | 03:01:53.480 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-05-26 03:01:53.480606 | orchestrator | 03:01:53.480 STDOUT terraform:       + content_md5          = (known after apply)
2025-05-26 03:01:53.480693 | orchestrator | 03:01:53.480 STDOUT terraform:       + content_sha1         = (known after apply)
2025-05-26 03:01:53.480781 | orchestrator | 03:01:53.480 STDOUT terraform:       + content_sha256       = (known after apply)
2025-05-26 03:01:53.480894 | orchestrator | 03:01:53.480 STDOUT terraform:       + content_sha512       = (known after apply)
2025-05-26 03:01:53.480955 | orchestrator | 03:01:53.480 STDOUT terraform:       + directory_permission = "0777"
2025-05-26 03:01:53.481015 | orchestrator | 03:01:53.480 STDOUT terraform:       + file_permission      = "0644"
2025-05-26 03:01:53.481094 | orchestrator | 03:01:53.481 STDOUT terraform:       + filename             = ".id_rsa.ci.pub"
2025-05-26 03:01:53.481197 | orchestrator | 03:01:53.481 STDOUT terraform:       + id                   = (known after apply)
2025-05-26 03:01:53.481226 | orchestrator | 03:01:53.481 STDOUT terraform:     }
2025-05-26 03:01:53.481285 | orchestrator | 03:01:53.481 STDOUT terraform:   # local_file.inventory will be created
2025-05-26 03:01:53.481344 | orchestrator | 03:01:53.481 STDOUT terraform:   + resource "local_file" "inventory" {
2025-05-26 03:01:53.481435 | orchestrator | 03:01:53.481 STDOUT terraform:       + content              = (known after apply)
2025-05-26 03:01:53.481542 | orchestrator | 03:01:53.481 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-05-26 03:01:53.481622 | orchestrator | 03:01:53.481 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-05-26 03:01:53.481710 | orchestrator | 03:01:53.481 STDOUT terraform:       + content_md5          = (known after apply)
2025-05-26 03:01:53.481805 | orchestrator | 03:01:53.481 STDOUT terraform:       + content_sha1         = (known after apply)
2025-05-26 03:01:53.481892 | orchestrator | 03:01:53.481 STDOUT terraform:       + content_sha256       = (known after apply)
2025-05-26 03:01:53.481978 | orchestrator | 03:01:53.481 STDOUT terraform:       + content_sha512       = (known after apply)
2025-05-26 03:01:53.482061 | orchestrator | 03:01:53.481 STDOUT terraform:       + directory_permission = "0777"
2025-05-26 03:01:53.482121 | orchestrator | 03:01:53.482 STDOUT terraform:       + file_permission      = "0644"
2025-05-26 03:01:53.482198 | orchestrator | 03:01:53.482 STDOUT terraform:       + filename             = "inventory.ci"
2025-05-26 03:01:53.482291 | orchestrator | 03:01:53.482 STDOUT terraform:       + id                   = (known after apply)
2025-05-26 03:01:53.482323 | orchestrator | 03:01:53.482 STDOUT terraform:     }
2025-05-26 03:01:53.482576 | orchestrator | 03:01:53.482 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-05-26 03:01:53.482653 | orchestrator | 03:01:53.482 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-05-26 03:01:53.482736 | orchestrator | 03:01:53.482 STDOUT terraform:       + content              = (sensitive value)
2025-05-26 03:01:53.482824 | orchestrator | 03:01:53.482 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-05-26 03:01:53.482904 | orchestrator | 03:01:53.482 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-05-26 03:01:53.482977 | orchestrator | 03:01:53.482 STDOUT terraform:       + content_md5          = (known after apply)
2025-05-26 03:01:53.483050 | orchestrator | 03:01:53.482 STDOUT terraform:       + content_sha1         = (known after apply)
2025-05-26 03:01:53.483123 | orchestrator | 03:01:53.483 STDOUT terraform:       + content_sha256       = (known after apply)
2025-05-26 03:01:53.483198 | orchestrator | 03:01:53.483 STDOUT terraform:       + content_sha512       = (known after apply)
2025-05-26 03:01:53.483249 | orchestrator | 03:01:53.483 STDOUT terraform:       + directory_permission = "0700"
2025-05-26 03:01:53.483299 | orchestrator | 03:01:53.483 STDOUT terraform:       + file_permission      = "0600"
2025-05-26 03:01:53.483360 | orchestrator | 03:01:53.483 STDOUT terraform:       + filename             = ".id_rsa.ci"
2025-05-26 03:01:53.483436 | orchestrator | 03:01:53.483 STDOUT terraform:       + id                   = (known after apply)
2025-05-26 03:01:53.483463 | orchestrator | 03:01:53.483 STDOUT terraform:     }
2025-05-26 03:01:53.483544 | orchestrator | 03:01:53.483 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-05-26 03:01:53.483607 | orchestrator | 03:01:53.483 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-05-26 03:01:53.483649 | orchestrator | 03:01:53.483 STDOUT terraform:       + id = (known after apply)
2025-05-26 03:01:53.483678 | orchestrator | 03:01:53.483 STDOUT terraform:     }
2025-05-26 03:01:53.483780 | orchestrator | 03:01:53.483 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-05-26 03:01:53.483878 | orchestrator | 03:01:53.483 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-05-26 03:01:53.483952 | orchestrator | 03:01:53.483 STDOUT terraform:       + attachment           = (known after apply)
2025-05-26 03:01:53.484002 | orchestrator | 03:01:53.483 STDOUT terraform:       + availability_zone    = "nova"
2025-05-26 03:01:53.484081 | orchestrator | 03:01:53.484 STDOUT terraform:       + id                   = (known after apply)
2025-05-26 03:01:53.484154 | orchestrator | 03:01:53.484 STDOUT terraform:       + image_id             = (known after apply)
2025-05-26 03:01:53.484228 | orchestrator | 03:01:53.484 STDOUT terraform:       + metadata             = (known after apply)
2025-05-26 03:01:53.484321 | orchestrator | 03:01:53.484 STDOUT terraform:       + name                 = "testbed-volume-manager-base"
2025-05-26 03:01:53.484400 | orchestrator | 03:01:53.484 STDOUT terraform:       + region               = (known after apply)
2025-05-26 03:01:53.484440 | orchestrator | 03:01:53.484 STDOUT terraform:       + size                 = 80
2025-05-26 03:01:53.484490 | orchestrator | 03:01:53.484 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-26 03:01:53.484567 | orchestrator | 03:01:53.484 STDOUT terraform:       + volume_type          = "ssd"
2025-05-26 03:01:53.484580 | orchestrator | 03:01:53.484 STDOUT terraform:     }
2025-05-26 03:01:53.484680 | orchestrator | 03:01:53.484 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-05-26 03:01:53.484777 | orchestrator | 03:01:53.484 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-26 03:01:53.484851 | orchestrator | 03:01:53.484 STDOUT terraform:       + attachment           = (known after apply)
2025-05-26 03:01:53.484900 | orchestrator | 03:01:53.484 STDOUT terraform:       + availability_zone    = "nova"
2025-05-26 03:01:53.484977 | orchestrator | 03:01:53.484 STDOUT terraform:       + id                   = (known after apply)
2025-05-26 03:01:53.485053 | orchestrator | 03:01:53.484 STDOUT terraform:       + image_id             = (known after apply)
2025-05-26 03:01:53.485127 | orchestrator | 03:01:53.485 STDOUT terraform:       + metadata             = (known after apply)
2025-05-26 03:01:53.485225 | orchestrator | 03:01:53.485 STDOUT terraform:       + name                 = "testbed-volume-0-node-base"
2025-05-26 03:01:53.485304 | orchestrator | 03:01:53.485 STDOUT terraform:       + region               = (known after apply)
2025-05-26 03:01:53.485347 | orchestrator | 03:01:53.485 STDOUT terraform:       + size                 = 80
2025-05-26 03:01:53.485398 | orchestrator | 03:01:53.485 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-26 03:01:53.485448 | orchestrator | 03:01:53.485 STDOUT terraform:       + volume_type          = "ssd"
2025-05-26 03:01:53.485474 | orchestrator | 03:01:53.485 STDOUT terraform:     }
2025-05-26 03:01:53.485608 | orchestrator | 03:01:53.485 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-05-26 03:01:53.485707 | orchestrator | 03:01:53.485 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-26 03:01:53.485785 | orchestrator | 03:01:53.485 STDOUT terraform:       + attachment           = (known after apply)
2025-05-26 03:01:53.485832 | orchestrator | 03:01:53.485 STDOUT terraform:       + availability_zone    = "nova"
2025-05-26 03:01:53.485906 | orchestrator | 03:01:53.485 STDOUT terraform:       + id                   = (known after apply)
2025-05-26 03:01:53.485970 | orchestrator | 03:01:53.485 STDOUT terraform:       + image_id             = (known after apply)
2025-05-26 03:01:53.486053 | orchestrator | 03:01:53.485 STDOUT terraform:       + metadata             = (known after apply)
2025-05-26 03:01:53.486133 | orchestrator | 03:01:53.486 STDOUT terraform:       + name                 = "testbed-volume-1-node-base"
2025-05-26 03:01:53.486198 | orchestrator | 03:01:53.486 STDOUT terraform:       + region               = (known after apply)
2025-05-26 03:01:53.486234 | orchestrator | 03:01:53.486 STDOUT terraform:       + size                 = 80
2025-05-26 03:01:53.486277 | orchestrator | 03:01:53.486 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-26 03:01:53.486320 | orchestrator | 03:01:53.486 STDOUT terraform:       + volume_type          = "ssd"
2025-05-26 03:01:53.486344 | orchestrator | 03:01:53.486 STDOUT terraform:     }
2025-05-26 03:01:53.486427 | orchestrator | 03:01:53.486 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-05-26 03:01:53.486517 | orchestrator | 03:01:53.486 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-26 03:01:53.486585 | orchestrator | 03:01:53.486 STDOUT terraform:       + attachment           = (known after apply)
2025-05-26 03:01:53.486628 | orchestrator | 03:01:53.486 STDOUT terraform:       + availability_zone    = "nova"
2025-05-26 03:01:53.486693 | orchestrator | 03:01:53.486 STDOUT terraform:       + id                   = (known after apply)
2025-05-26 03:01:53.486757 | orchestrator | 03:01:53.486 STDOUT terraform:       + image_id             = (known after apply)
2025-05-26 03:01:53.486824 | orchestrator | 03:01:53.486 STDOUT terraform:       + metadata             = (known after apply)
2025-05-26 03:01:53.486903 | orchestrator | 03:01:53.486 STDOUT terraform:       + name                 = "testbed-volume-2-node-base"
2025-05-26 03:01:53.486966 | orchestrator | 03:01:53.486 STDOUT terraform:       + region               = (known after apply)
2025-05-26 03:01:53.487004 | orchestrator | 03:01:53.486 STDOUT terraform:       + size                 = 80
2025-05-26 03:01:53.487046 | orchestrator | 03:01:53.487 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-26 03:01:53.487089 | orchestrator | 03:01:53.487 STDOUT terraform:       + volume_type          = "ssd"
2025-05-26 03:01:53.487112 | orchestrator | 03:01:53.487 STDOUT terraform:     }
2025-05-26 03:01:53.487194 | orchestrator | 03:01:53.487 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-05-26 03:01:53.487280 | orchestrator | 03:01:53.487 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-26 03:01:53.487344 | orchestrator | 03:01:53.487 STDOUT terraform:       + attachment           = (known after apply)
2025-05-26 03:01:53.487386 | orchestrator | 03:01:53.487 STDOUT terraform:       + availability_zone    = "nova"
2025-05-26 03:01:53.487451 | orchestrator | 03:01:53.487 STDOUT terraform:       + id                   = (known after apply)
2025-05-26 03:01:53.487553 | orchestrator | 03:01:53.487 STDOUT terraform:       + image_id             = (known after apply)
2025-05-26 03:01:53.487624 | orchestrator | 03:01:53.487 STDOUT terraform:       + metadata             = (known after apply)
2025-05-26 03:01:53.487701 | orchestrator | 03:01:53.487 STDOUT terraform:       + name                 = "testbed-volume-3-node-base"
2025-05-26 03:01:53.487761 | orchestrator | 03:01:53.487 STDOUT terraform:       + region               = (known after apply)
2025-05-26 03:01:53.487795 | orchestrator | 03:01:53.487 STDOUT terraform:       + size                 = 80
2025-05-26 03:01:53.487835 | orchestrator | 03:01:53.487 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-26 03:01:53.487875 | orchestrator | 03:01:53.487 STDOUT terraform:       + volume_type          = "ssd"
2025-05-26 03:01:53.487896 | orchestrator | 03:01:53.487 STDOUT terraform:     }
2025-05-26 03:01:53.487974 | orchestrator | 03:01:53.487 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-05-26 03:01:53.488048 | orchestrator | 03:01:53.487 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-26 03:01:53.488106 | orchestrator | 03:01:53.488 STDOUT terraform:       + attachment           = (known after apply)
2025-05-26 03:01:53.488147 | orchestrator | 03:01:53.488 STDOUT terraform:       + availability_zone    = "nova"
2025-05-26 03:01:53.488208 | orchestrator | 03:01:53.488 STDOUT terraform:       + id                   = (known after apply)
2025-05-26 03:01:53.488267 | orchestrator | 03:01:53.488 STDOUT terraform:       + image_id             = (known after apply)
2025-05-26 03:01:53.488326 | orchestrator | 03:01:53.488 STDOUT terraform:       + metadata             = (known after apply)
2025-05-26 03:01:53.488402 | orchestrator | 03:01:53.488 STDOUT terraform:       + name                 = "testbed-volume-4-node-base"
2025-05-26 03:01:53.488463 | orchestrator | 03:01:53.488 STDOUT terraform:       + region               = (known after apply)
2025-05-26 03:01:53.488500 | orchestrator | 03:01:53.488 STDOUT terraform:       + size                 = 80
2025-05-26 03:01:53.488588 | orchestrator | 03:01:53.488 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-26 03:01:53.488624 | orchestrator | 03:01:53.488 STDOUT terraform:       + volume_type          = "ssd"
2025-05-26 03:01:53.488645 | orchestrator | 03:01:53.488 STDOUT terraform:     }
2025-05-26 03:01:53.488722 | orchestrator | 03:01:53.488 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-05-26 03:01:53.488799 | orchestrator | 03:01:53.488 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-26 03:01:53.488862 | orchestrator | 03:01:53.488 STDOUT terraform:       + attachment           = (known after apply)
2025-05-26 03:01:53.488899 | orchestrator | 03:01:53.488 STDOUT terraform:       + availability_zone    = "nova"
2025-05-26 03:01:53.488962 | orchestrator | 03:01:53.488 STDOUT terraform:       + id                   = (known after apply)
2025-05-26 03:01:53.489021 | orchestrator | 03:01:53.488 STDOUT terraform:       + image_id             = (known after apply)
2025-05-26 03:01:53.489082 | orchestrator | 03:01:53.489 STDOUT terraform:       + metadata             = (known after apply)
2025-05-26 03:01:53.489157 | orchestrator | 03:01:53.489 STDOUT terraform:       + name                 = "testbed-volume-5-node-base"
2025-05-26 03:01:53.489216 | orchestrator | 03:01:53.489 STDOUT terraform:       + region               = (known after apply)
2025-05-26 03:01:53.489262 | orchestrator | 03:01:53.489 STDOUT terraform:       + size                 = 80
2025-05-26 03:01:53.489302 | orchestrator | 03:01:53.489 STDOUT terraform:       + volume_retype_policy = "never"
2025-05-26 03:01:53.489343 | orchestrator | 03:01:53.489 STDOUT terraform:       + volume_type          = "ssd"
2025-05-26 03:01:53.489364 | orchestrator | 03:01:53.489 STDOUT terraform:     }
2025-05-26 03:01:53.489461 | orchestrator | 03:01:53.489 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-05-26 03:01:53.489548 | orchestrator | 03:01:53.489 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-26 03:01:53.489607 | orchestrator | 03:01:53.489 STDOUT terraform:       + attachment           = (known after apply)
2025-05-26 03:01:53.489646 | orchestrator | 03:01:53.489 STDOUT terraform:       +
availability_zone = "nova" 2025-05-26 03:01:53.489709 | orchestrator | 03:01:53.489 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.489767 | orchestrator | 03:01:53.489 STDOUT terraform:  + metadata = (known after apply) 2025-05-26 03:01:53.489832 | orchestrator | 03:01:53.489 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-05-26 03:01:53.489891 | orchestrator | 03:01:53.489 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.489924 | orchestrator | 03:01:53.489 STDOUT terraform:  + size = 20 2025-05-26 03:01:53.489964 | orchestrator | 03:01:53.489 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-26 03:01:53.490004 | orchestrator | 03:01:53.489 STDOUT terraform:  + volume_type = "ssd" 2025-05-26 03:01:53.490044 | orchestrator | 03:01:53.490 STDOUT terraform:  } 2025-05-26 03:01:53.490121 | orchestrator | 03:01:53.490 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-05-26 03:01:53.490200 | orchestrator | 03:01:53.490 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-26 03:01:53.490249 | orchestrator | 03:01:53.490 STDOUT terraform:  + attachment = (known after apply) 2025-05-26 03:01:53.490287 | orchestrator | 03:01:53.490 STDOUT terraform:  + availability_zone = "nova" 2025-05-26 03:01:53.490375 | orchestrator | 03:01:53.490 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.490438 | orchestrator | 03:01:53.490 STDOUT terraform:  + metadata = (known after apply) 2025-05-26 03:01:53.490501 | orchestrator | 03:01:53.490 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-05-26 03:01:53.490595 | orchestrator | 03:01:53.490 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.490628 | orchestrator | 03:01:53.490 STDOUT terraform:  + size = 20 2025-05-26 03:01:53.490670 | orchestrator | 03:01:53.490 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-26 03:01:53.490706 | orchestrator | 
03:01:53.490 STDOUT terraform:  + volume_type = "ssd" 2025-05-26 03:01:53.490724 | orchestrator | 03:01:53.490 STDOUT terraform:  } 2025-05-26 03:01:53.490790 | orchestrator | 03:01:53.490 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-05-26 03:01:53.490854 | orchestrator | 03:01:53.490 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-26 03:01:53.490907 | orchestrator | 03:01:53.490 STDOUT terraform:  + attachment = (known after apply) 2025-05-26 03:01:53.490942 | orchestrator | 03:01:53.490 STDOUT terraform:  + availability_zone = "nova" 2025-05-26 03:01:53.490995 | orchestrator | 03:01:53.490 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.491047 | orchestrator | 03:01:53.490 STDOUT terraform:  + metadata = (known after apply) 2025-05-26 03:01:53.491105 | orchestrator | 03:01:53.491 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-05-26 03:01:53.491156 | orchestrator | 03:01:53.491 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.491187 | orchestrator | 03:01:53.491 STDOUT terraform:  + size = 20 2025-05-26 03:01:53.491222 | orchestrator | 03:01:53.491 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-26 03:01:53.491257 | orchestrator | 03:01:53.491 STDOUT terraform:  + volume_type = "ssd" 2025-05-26 03:01:53.491275 | orchestrator | 03:01:53.491 STDOUT terraform:  } 2025-05-26 03:01:53.491340 | orchestrator | 03:01:53.491 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-05-26 03:01:53.491402 | orchestrator | 03:01:53.491 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-26 03:01:53.491454 | orchestrator | 03:01:53.491 STDOUT terraform:  + attachment = (known after apply) 2025-05-26 03:01:53.491488 | orchestrator | 03:01:53.491 STDOUT terraform:  + availability_zone = "nova" 2025-05-26 03:01:53.491559 | orchestrator | 03:01:53.491 STDOUT 
terraform:  + id = (known after apply) 2025-05-26 03:01:53.491611 | orchestrator | 03:01:53.491 STDOUT terraform:  + metadata = (known after apply) 2025-05-26 03:01:53.491669 | orchestrator | 03:01:53.491 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-05-26 03:01:53.491723 | orchestrator | 03:01:53.491 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.491753 | orchestrator | 03:01:53.491 STDOUT terraform:  + size = 20 2025-05-26 03:01:53.491794 | orchestrator | 03:01:53.491 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-26 03:01:53.491822 | orchestrator | 03:01:53.491 STDOUT terraform:  + volume_type = "ssd" 2025-05-26 03:01:53.491841 | orchestrator | 03:01:53.491 STDOUT terraform:  } 2025-05-26 03:01:53.491906 | orchestrator | 03:01:53.491 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-05-26 03:01:53.491968 | orchestrator | 03:01:53.491 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-26 03:01:53.492020 | orchestrator | 03:01:53.491 STDOUT terraform:  + attachment = (known after apply) 2025-05-26 03:01:53.492055 | orchestrator | 03:01:53.492 STDOUT terraform:  + availability_zone = "nova" 2025-05-26 03:01:53.492109 | orchestrator | 03:01:53.492 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.492160 | orchestrator | 03:01:53.492 STDOUT terraform:  + metadata = (known after apply) 2025-05-26 03:01:53.492217 | orchestrator | 03:01:53.492 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-05-26 03:01:53.492272 | orchestrator | 03:01:53.492 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.492303 | orchestrator | 03:01:53.492 STDOUT terraform:  + size = 20 2025-05-26 03:01:53.492337 | orchestrator | 03:01:53.492 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-26 03:01:53.492373 | orchestrator | 03:01:53.492 STDOUT terraform:  + volume_type = "ssd" 2025-05-26 03:01:53.492387 | 
orchestrator | 03:01:53.492 STDOUT terraform:  } 2025-05-26 03:01:53.492452 | orchestrator | 03:01:53.492 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-05-26 03:01:53.492535 | orchestrator | 03:01:53.492 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-26 03:01:53.492577 | orchestrator | 03:01:53.492 STDOUT terraform:  + attachment = (known after apply) 2025-05-26 03:01:53.492614 | orchestrator | 03:01:53.492 STDOUT terraform:  + availability_zone = "nova" 2025-05-26 03:01:53.492667 | orchestrator | 03:01:53.492 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.492719 | orchestrator | 03:01:53.492 STDOUT terraform:  + metadata = (known after apply) 2025-05-26 03:01:53.492775 | orchestrator | 03:01:53.492 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-05-26 03:01:53.492827 | orchestrator | 03:01:53.492 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.492858 | orchestrator | 03:01:53.492 STDOUT terraform:  + size = 20 2025-05-26 03:01:53.492893 | orchestrator | 03:01:53.492 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-26 03:01:53.492927 | orchestrator | 03:01:53.492 STDOUT terraform:  + volume_type = "ssd" 2025-05-26 03:01:53.492946 | orchestrator | 03:01:53.492 STDOUT terraform:  } 2025-05-26 03:01:53.493013 | orchestrator | 03:01:53.492 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-05-26 03:01:53.493077 | orchestrator | 03:01:53.493 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-26 03:01:53.493129 | orchestrator | 03:01:53.493 STDOUT terraform:  + attachment = (known after apply) 2025-05-26 03:01:53.493164 | orchestrator | 03:01:53.493 STDOUT terraform:  + availability_zone = "nova" 2025-05-26 03:01:53.493215 | orchestrator | 03:01:53.493 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.493268 | orchestrator | 
03:01:53.493 STDOUT terraform:  + metadata = (known after apply) 2025-05-26 03:01:53.493323 | orchestrator | 03:01:53.493 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-05-26 03:01:53.493375 | orchestrator | 03:01:53.493 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.493406 | orchestrator | 03:01:53.493 STDOUT terraform:  + size = 20 2025-05-26 03:01:53.493447 | orchestrator | 03:01:53.493 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-26 03:01:53.493476 | orchestrator | 03:01:53.493 STDOUT terraform:  + volume_type = "ssd" 2025-05-26 03:01:53.493495 | orchestrator | 03:01:53.493 STDOUT terraform:  } 2025-05-26 03:01:53.493577 | orchestrator | 03:01:53.493 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-05-26 03:01:53.493642 | orchestrator | 03:01:53.493 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-26 03:01:53.493692 | orchestrator | 03:01:53.493 STDOUT terraform:  + attachment = (known after apply) 2025-05-26 03:01:53.493726 | orchestrator | 03:01:53.493 STDOUT terraform:  + availability_zone = "nova" 2025-05-26 03:01:53.493779 | orchestrator | 03:01:53.493 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.493830 | orchestrator | 03:01:53.493 STDOUT terraform:  + metadata = (known after apply) 2025-05-26 03:01:53.493887 | orchestrator | 03:01:53.493 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-05-26 03:01:53.493940 | orchestrator | 03:01:53.493 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.493970 | orchestrator | 03:01:53.493 STDOUT terraform:  + size = 20 2025-05-26 03:01:53.494004 | orchestrator | 03:01:53.493 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-26 03:01:53.494054 | orchestrator | 03:01:53.494 STDOUT terraform:  + volume_type = "ssd" 2025-05-26 03:01:53.494072 | orchestrator | 03:01:53.494 STDOUT terraform:  } 2025-05-26 03:01:53.494141 | orchestrator | 
03:01:53.494 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-05-26 03:01:53.494202 | orchestrator | 03:01:53.494 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-26 03:01:53.494254 | orchestrator | 03:01:53.494 STDOUT terraform:  + attachment = (known after apply) 2025-05-26 03:01:53.494289 | orchestrator | 03:01:53.494 STDOUT terraform:  + availability_zone = "nova" 2025-05-26 03:01:53.494342 | orchestrator | 03:01:53.494 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.494394 | orchestrator | 03:01:53.494 STDOUT terraform:  + metadata = (known after apply) 2025-05-26 03:01:53.494449 | orchestrator | 03:01:53.494 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-05-26 03:01:53.494502 | orchestrator | 03:01:53.494 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.494561 | orchestrator | 03:01:53.494 STDOUT terraform:  + size = 20 2025-05-26 03:01:53.494597 | orchestrator | 03:01:53.494 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-26 03:01:53.494632 | orchestrator | 03:01:53.494 STDOUT terraform:  + volume_type = "ssd" 2025-05-26 03:01:53.494650 | orchestrator | 03:01:53.494 STDOUT terraform:  } 2025-05-26 03:01:53.494712 | orchestrator | 03:01:53.494 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-05-26 03:01:53.494766 | orchestrator | 03:01:53.494 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-05-26 03:01:53.494812 | orchestrator | 03:01:53.494 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-26 03:01:53.494858 | orchestrator | 03:01:53.494 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-26 03:01:53.494903 | orchestrator | 03:01:53.494 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-26 03:01:53.494948 | orchestrator | 03:01:53.494 STDOUT terraform:  + all_tags = (known after apply) 2025-05-26 
03:01:53.494979 | orchestrator | 03:01:53.494 STDOUT terraform:  + availability_zone = "nova" 2025-05-26 03:01:53.495006 | orchestrator | 03:01:53.494 STDOUT terraform:  + config_drive = true 2025-05-26 03:01:53.495051 | orchestrator | 03:01:53.495 STDOUT terraform:  + created = (known after apply) 2025-05-26 03:01:53.495098 | orchestrator | 03:01:53.495 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-26 03:01:53.495136 | orchestrator | 03:01:53.495 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-05-26 03:01:53.495168 | orchestrator | 03:01:53.495 STDOUT terraform:  + force_delete = false 2025-05-26 03:01:53.495211 | orchestrator | 03:01:53.495 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-26 03:01:53.495260 | orchestrator | 03:01:53.495 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.495311 | orchestrator | 03:01:53.495 STDOUT terraform:  + image_id = (known after apply) 2025-05-26 03:01:53.495350 | orchestrator | 03:01:53.495 STDOUT terraform:  + image_name = (known after apply) 2025-05-26 03:01:53.495382 | orchestrator | 03:01:53.495 STDOUT terraform:  + key_pair = "testbed" 2025-05-26 03:01:53.495424 | orchestrator | 03:01:53.495 STDOUT terraform:  + name = "testbed-manager" 2025-05-26 03:01:53.495456 | orchestrator | 03:01:53.495 STDOUT terraform:  + power_state = "active" 2025-05-26 03:01:53.495503 | orchestrator | 03:01:53.495 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.495576 | orchestrator | 03:01:53.495 STDOUT terraform:  + security_groups = (known after apply) 2025-05-26 03:01:53.495607 | orchestrator | 03:01:53.495 STDOUT terraform:  + stop_before_destroy = false 2025-05-26 03:01:53.495653 | orchestrator | 03:01:53.495 STDOUT terraform:  + updated = (known after apply) 2025-05-26 03:01:53.495698 | orchestrator | 03:01:53.495 STDOUT terraform:  + user_data = (known after apply) 2025-05-26 03:01:53.495720 | orchestrator | 03:01:53.495 STDOUT terraform:  + block_device 
{ 2025-05-26 03:01:53.495751 | orchestrator | 03:01:53.495 STDOUT terraform:  + boot_index = 0 2025-05-26 03:01:53.495788 | orchestrator | 03:01:53.495 STDOUT terraform:  + delete_on_termination = false 2025-05-26 03:01:53.495827 | orchestrator | 03:01:53.495 STDOUT terraform:  + destination_type = "volume" 2025-05-26 03:01:53.495865 | orchestrator | 03:01:53.495 STDOUT terraform:  + multiattach = false 2025-05-26 03:01:53.495904 | orchestrator | 03:01:53.495 STDOUT terraform:  + source_type = "volume" 2025-05-26 03:01:53.495956 | orchestrator | 03:01:53.495 STDOUT terraform:  + uuid = (known after apply) 2025-05-26 03:01:53.495974 | orchestrator | 03:01:53.495 STDOUT terraform:  } 2025-05-26 03:01:53.495993 | orchestrator | 03:01:53.495 STDOUT terraform:  + network { 2025-05-26 03:01:53.496020 | orchestrator | 03:01:53.495 STDOUT terraform:  + access_network = false 2025-05-26 03:01:53.496062 | orchestrator | 03:01:53.496 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-26 03:01:53.496102 | orchestrator | 03:01:53.496 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-26 03:01:53.496143 | orchestrator | 03:01:53.496 STDOUT terraform:  + mac = (known after apply) 2025-05-26 03:01:53.496184 | orchestrator | 03:01:53.496 STDOUT terraform:  + name = (known after apply) 2025-05-26 03:01:53.496225 | orchestrator | 03:01:53.496 STDOUT terraform:  + port = (known after apply) 2025-05-26 03:01:53.496270 | orchestrator | 03:01:53.496 STDOUT terraform:  + uuid = (known after apply) 2025-05-26 03:01:53.496283 | orchestrator | 03:01:53.496 STDOUT terraform:  } 2025-05-26 03:01:53.496296 | orchestrator | 03:01:53.496 STDOUT terraform:  } 2025-05-26 03:01:53.496354 | orchestrator | 03:01:53.496 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-05-26 03:01:53.496408 | orchestrator | 03:01:53.496 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-26 03:01:53.496454 | orchestrator | 
03:01:53.496 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-26 03:01:53.496498 | orchestrator | 03:01:53.496 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-26 03:01:53.496561 | orchestrator | 03:01:53.496 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-26 03:01:53.496609 | orchestrator | 03:01:53.496 STDOUT terraform:  + all_tags = (known after apply) 2025-05-26 03:01:53.496638 | orchestrator | 03:01:53.496 STDOUT terraform:  + availability_zone = "nova" 2025-05-26 03:01:53.496665 | orchestrator | 03:01:53.496 STDOUT terraform:  + config_drive = true 2025-05-26 03:01:53.496713 | orchestrator | 03:01:53.496 STDOUT terraform:  + created = (known after apply) 2025-05-26 03:01:53.496757 | orchestrator | 03:01:53.496 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-26 03:01:53.496795 | orchestrator | 03:01:53.496 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-26 03:01:53.496826 | orchestrator | 03:01:53.496 STDOUT terraform:  + force_delete = false 2025-05-26 03:01:53.496870 | orchestrator | 03:01:53.496 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-26 03:01:53.496916 | orchestrator | 03:01:53.496 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.496963 | orchestrator | 03:01:53.496 STDOUT terraform:  + image_id = (known after apply) 2025-05-26 03:01:53.497007 | orchestrator | 03:01:53.496 STDOUT terraform:  + image_name = (known after apply) 2025-05-26 03:01:53.497041 | orchestrator | 03:01:53.497 STDOUT terraform:  + key_pair = "testbed" 2025-05-26 03:01:53.497079 | orchestrator | 03:01:53.497 STDOUT terraform:  + name = "testbed-node-0" 2025-05-26 03:01:53.497112 | orchestrator | 03:01:53.497 STDOUT terraform:  + power_state = "active" 2025-05-26 03:01:53.497169 | orchestrator | 03:01:53.497 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.497204 | orchestrator | 03:01:53.497 STDOUT terraform:  + security_groups = (known after apply) 
2025-05-26 03:01:53.497234 | orchestrator | 03:01:53.497 STDOUT terraform:  + stop_before_destroy = false 2025-05-26 03:01:53.497280 | orchestrator | 03:01:53.497 STDOUT terraform:  + updated = (known after apply) 2025-05-26 03:01:53.497344 | orchestrator | 03:01:53.497 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-26 03:01:53.497365 | orchestrator | 03:01:53.497 STDOUT terraform:  + block_device { 2025-05-26 03:01:53.497397 | orchestrator | 03:01:53.497 STDOUT terraform:  + boot_index = 0 2025-05-26 03:01:53.497434 | orchestrator | 03:01:53.497 STDOUT terraform:  + delete_on_termination = false 2025-05-26 03:01:53.497472 | orchestrator | 03:01:53.497 STDOUT terraform:  + destination_type = "volume" 2025-05-26 03:01:53.497541 | orchestrator | 03:01:53.497 STDOUT terraform:  + multiattach = false 2025-05-26 03:01:53.497576 | orchestrator | 03:01:53.497 STDOUT terraform:  + source_type = "volume" 2025-05-26 03:01:53.497628 | orchestrator | 03:01:53.497 STDOUT terraform:  + uuid = (known after apply) 2025-05-26 03:01:53.497645 | orchestrator | 03:01:53.497 STDOUT terraform:  } 2025-05-26 03:01:53.497663 | orchestrator | 03:01:53.497 STDOUT terraform:  + network { 2025-05-26 03:01:53.497689 | orchestrator | 03:01:53.497 STDOUT terraform:  + access_network = false 2025-05-26 03:01:53.497729 | orchestrator | 03:01:53.497 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-26 03:01:53.497769 | orchestrator | 03:01:53.497 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-26 03:01:53.497808 | orchestrator | 03:01:53.497 STDOUT terraform:  + mac = (known after apply) 2025-05-26 03:01:53.497848 | orchestrator | 03:01:53.497 STDOUT terraform:  + name = (known after apply) 2025-05-26 03:01:53.497888 | orchestrator | 03:01:53.497 STDOUT terraform:  + port = (known after apply) 2025-05-26 03:01:53.497927 | orchestrator | 03:01:53.497 STDOUT terraform:  + uuid = (known after apply) 2025-05-26 03:01:53.497945 | 
orchestrator | 03:01:53.497 STDOUT terraform:  } 2025-05-26 03:01:53.497961 | orchestrator | 03:01:53.497 STDOUT terraform:  } 2025-05-26 03:01:53.498038 | orchestrator | 03:01:53.497 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-05-26 03:01:53.498080 | orchestrator | 03:01:53.498 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-26 03:01:53.498125 | orchestrator | 03:01:53.498 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-26 03:01:53.498168 | orchestrator | 03:01:53.498 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-26 03:01:53.498210 | orchestrator | 03:01:53.498 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-26 03:01:53.498255 | orchestrator | 03:01:53.498 STDOUT terraform:  + all_tags = (known after apply) 2025-05-26 03:01:53.498283 | orchestrator | 03:01:53.498 STDOUT terraform:  + availability_zone = "nova" 2025-05-26 03:01:53.498309 | orchestrator | 03:01:53.498 STDOUT terraform:  + config_drive = true 2025-05-26 03:01:53.498352 | orchestrator | 03:01:53.498 STDOUT terraform:  + created = (known after apply) 2025-05-26 03:01:53.498418 | orchestrator | 03:01:53.498 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-26 03:01:53.498476 | orchestrator | 03:01:53.498 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-26 03:01:53.498527 | orchestrator | 03:01:53.498 STDOUT terraform:  + force_delete = false 2025-05-26 03:01:53.498570 | orchestrator | 03:01:53.498 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-26 03:01:53.498615 | orchestrator | 03:01:53.498 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.498660 | orchestrator | 03:01:53.498 STDOUT terraform:  + image_id = (known after apply) 2025-05-26 03:01:53.498705 | orchestrator | 03:01:53.498 STDOUT terraform:  + image_name = (known after apply) 2025-05-26 03:01:53.498737 | orchestrator | 03:01:53.498 STDOUT terraform:  + 
key_pair = "testbed" 2025-05-26 03:01:53.498777 | orchestrator | 03:01:53.498 STDOUT terraform:  + name = "testbed-node-1" 2025-05-26 03:01:53.498807 | orchestrator | 03:01:53.498 STDOUT terraform:  + power_state = "active" 2025-05-26 03:01:53.498851 | orchestrator | 03:01:53.498 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.498892 | orchestrator | 03:01:53.498 STDOUT terraform:  + security_groups = (known after apply) 2025-05-26 03:01:53.498921 | orchestrator | 03:01:53.498 STDOUT terraform:  + stop_before_destroy = false 2025-05-26 03:01:53.498963 | orchestrator | 03:01:53.498 STDOUT terraform:  + updated = (known after apply) 2025-05-26 03:01:53.499024 | orchestrator | 03:01:53.498 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-26 03:01:53.499044 | orchestrator | 03:01:53.499 STDOUT terraform:  + block_device { 2025-05-26 03:01:53.499076 | orchestrator | 03:01:53.499 STDOUT terraform:  + boot_index = 0 2025-05-26 03:01:53.499124 | orchestrator | 03:01:53.499 STDOUT terraform:  + delete_on_termination = false 2025-05-26 03:01:53.499184 | orchestrator | 03:01:53.499 STDOUT terraform:  + destination_type = "volume" 2025-05-26 03:01:53.499222 | orchestrator | 03:01:53.499 STDOUT terraform:  + multiattach = false 2025-05-26 03:01:53.499264 | orchestrator | 03:01:53.499 STDOUT terraform:  + source_type = "volume" 2025-05-26 03:01:53.499308 | orchestrator | 03:01:53.499 STDOUT terraform:  + uuid = (known after apply) 2025-05-26 03:01:53.499326 | orchestrator | 03:01:53.499 STDOUT terraform:  } 2025-05-26 03:01:53.499343 | orchestrator | 03:01:53.499 STDOUT terraform:  + network { 2025-05-26 03:01:53.499369 | orchestrator | 03:01:53.499 STDOUT terraform:  + access_network = false 2025-05-26 03:01:53.499409 | orchestrator | 03:01:53.499 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-26 03:01:53.499459 | orchestrator | 03:01:53.499 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-26 
03:01:53.499487 | orchestrator | 03:01:53.499 STDOUT terraform:  + mac = (known after apply) 2025-05-26 03:01:53.499556 | orchestrator | 03:01:53.499 STDOUT terraform:  + name = (known after apply) 2025-05-26 03:01:53.499595 | orchestrator | 03:01:53.499 STDOUT terraform:  + port = (known after apply) 2025-05-26 03:01:53.499633 | orchestrator | 03:01:53.499 STDOUT terraform:  + uuid = (known after apply) 2025-05-26 03:01:53.499649 | orchestrator | 03:01:53.499 STDOUT terraform:  } 2025-05-26 03:01:53.499665 | orchestrator | 03:01:53.499 STDOUT terraform:  } 2025-05-26 03:01:53.499713 | orchestrator | 03:01:53.499 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-05-26 03:01:53.499761 | orchestrator | 03:01:53.499 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-26 03:01:53.499802 | orchestrator | 03:01:53.499 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-26 03:01:53.499841 | orchestrator | 03:01:53.499 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-26 03:01:53.499882 | orchestrator | 03:01:53.499 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-26 03:01:53.499921 | orchestrator | 03:01:53.499 STDOUT terraform:  + all_tags = (known after apply) 2025-05-26 03:01:53.499948 | orchestrator | 03:01:53.499 STDOUT terraform:  + availability_zone = "nova" 2025-05-26 03:01:53.499972 | orchestrator | 03:01:53.499 STDOUT terraform:  + config_drive = true 2025-05-26 03:01:53.500012 | orchestrator | 03:01:53.499 STDOUT terraform:  + created = (known after apply) 2025-05-26 03:01:53.500067 | orchestrator | 03:01:53.500 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-26 03:01:53.500105 | orchestrator | 03:01:53.500 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-26 03:01:53.500131 | orchestrator | 03:01:53.500 STDOUT terraform:  + force_delete = false 2025-05-26 03:01:53.500170 | orchestrator | 03:01:53.500 STDOUT terraform:  + 
2025-05-26 03:01:53.500 | orchestrator | STDOUT terraform:
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }
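The repeated `node_server[*]` entries in the plan are consistent with a single counted resource. A minimal sketch of what that resource might look like; the volume and port resource names (`node_base_volume`, `node_port_management`) and the `user_data` source are assumptions, not taken from the actual testbed configuration:

```hcl
# Hypothetical sketch of the counted instance resource behind the
# node_server[*] plan entries. Referenced resources are assumed names.
resource "openstack_compute_instance_v2" "node_server" {
  count             = 6
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"
  config_drive      = true
  user_data         = file("user_data.sh") # shown hashed in the plan output

  # Boot from a pre-created volume, matching the planned block_device.
  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_base_volume[count.index].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  # Attach to a pre-created management port.
  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}
```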
  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
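The `node_volume_attachment[*]` entries above would come from a counted attachment resource. A minimal sketch, assuming a matching counted volume resource named `node_volume` and a one-to-one index mapping; how the nine attachments are actually distributed across the six nodes is not visible in the plan:

```hcl
# Hypothetical sketch: counted volumes plus one attachment per volume.
# All resource names and the size are assumptions.
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9
  name  = "testbed-volume-${count.index}"
  size  = 20
}

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```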
  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
+ allowed_address_pairs { 2025-05-26 03:01:53.512588 | orchestrator | 03:01:53.512 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-26 03:01:53.512594 | orchestrator | 03:01:53.512 STDOUT terraform:  } 2025-05-26 03:01:53.512620 | orchestrator | 03:01:53.512 STDOUT terraform:  + allowed_address_pairs { 2025-05-26 03:01:53.512648 | orchestrator | 03:01:53.512 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-26 03:01:53.512655 | orchestrator | 03:01:53.512 STDOUT terraform:  } 2025-05-26 03:01:53.512679 | orchestrator | 03:01:53.512 STDOUT terraform:  + allowed_address_pairs { 2025-05-26 03:01:53.512708 | orchestrator | 03:01:53.512 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-26 03:01:53.512714 | orchestrator | 03:01:53.512 STDOUT terraform:  } 2025-05-26 03:01:53.512744 | orchestrator | 03:01:53.512 STDOUT terraform:  + binding (known after apply) 2025-05-26 03:01:53.512750 | orchestrator | 03:01:53.512 STDOUT terraform:  + fixed_ip { 2025-05-26 03:01:53.512782 | orchestrator | 03:01:53.512 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-05-26 03:01:53.512788 | orchestrator | 03:01:53.512 STDOUT terraform:  + 2025-05-26 03:01:53.512855 | orchestrator | 03:01:53.512 STDOUT terraform: subnet_id = (known after apply) 2025-05-26 03:01:53.512866 | orchestrator | 03:01:53.512 STDOUT terraform:  } 2025-05-26 03:01:53.512871 | orchestrator | 03:01:53.512 STDOUT terraform:  } 2025-05-26 03:01:53.512923 | orchestrator | 03:01:53.512 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-05-26 03:01:53.512971 | orchestrator | 03:01:53.512 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-26 03:01:53.513008 | orchestrator | 03:01:53.512 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-26 03:01:53.513045 | orchestrator | 03:01:53.513 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-26 03:01:53.513097 | orchestrator | 
03:01:53.513 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-26 03:01:53.513143 | orchestrator | 03:01:53.513 STDOUT terraform:  + all_tags = (known after apply) 2025-05-26 03:01:53.513182 | orchestrator | 03:01:53.513 STDOUT terraform:  + device_id = (known after apply) 2025-05-26 03:01:53.513220 | orchestrator | 03:01:53.513 STDOUT terraform:  + device_owner = (known after apply) 2025-05-26 03:01:53.513257 | orchestrator | 03:01:53.513 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-26 03:01:53.513295 | orchestrator | 03:01:53.513 STDOUT terraform:  + dns_name = (known after apply) 2025-05-26 03:01:53.513336 | orchestrator | 03:01:53.513 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.513373 | orchestrator | 03:01:53.513 STDOUT terraform:  + mac_address = (known after apply) 2025-05-26 03:01:53.513410 | orchestrator | 03:01:53.513 STDOUT terraform:  + network_id = (known after apply) 2025-05-26 03:01:53.513448 | orchestrator | 03:01:53.513 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-26 03:01:53.513485 | orchestrator | 03:01:53.513 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-26 03:01:53.513536 | orchestrator | 03:01:53.513 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.513573 | orchestrator | 03:01:53.513 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-26 03:01:53.513611 | orchestrator | 03:01:53.513 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-26 03:01:53.513627 | orchestrator | 03:01:53.513 STDOUT terraform:  + allowed_address_pairs { 2025-05-26 03:01:53.513657 | orchestrator | 03:01:53.513 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-26 03:01:53.513663 | orchestrator | 03:01:53.513 STDOUT terraform:  } 2025-05-26 03:01:53.513688 | orchestrator | 03:01:53.513 STDOUT terraform:  + allowed_address_pairs { 2025-05-26 03:01:53.513718 | orchestrator | 03:01:53.513 STDOUT terraform: 
 + ip_address = "192.168.16.254/20" 2025-05-26 03:01:53.513724 | orchestrator | 03:01:53.513 STDOUT terraform:  } 2025-05-26 03:01:53.513748 | orchestrator | 03:01:53.513 STDOUT terraform:  + allowed_address_pairs { 2025-05-26 03:01:53.513783 | orchestrator | 03:01:53.513 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-26 03:01:53.513789 | orchestrator | 03:01:53.513 STDOUT terraform:  } 2025-05-26 03:01:53.513814 | orchestrator | 03:01:53.513 STDOUT terraform:  + allowed_address_pairs { 2025-05-26 03:01:53.513843 | orchestrator | 03:01:53.513 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-26 03:01:53.513849 | orchestrator | 03:01:53.513 STDOUT terraform:  } 2025-05-26 03:01:53.513877 | orchestrator | 03:01:53.513 STDOUT terraform:  + binding (known after apply) 2025-05-26 03:01:53.513883 | orchestrator | 03:01:53.513 STDOUT terraform:  + fixed_ip { 2025-05-26 03:01:53.513912 | orchestrator | 03:01:53.513 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-05-26 03:01:53.513942 | orchestrator | 03:01:53.513 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-26 03:01:53.513948 | orchestrator | 03:01:53.513 STDOUT terraform:  } 2025-05-26 03:01:53.513964 | orchestrator | 03:01:53.513 STDOUT terraform:  } 2025-05-26 03:01:53.514011 | orchestrator | 03:01:53.513 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-05-26 03:01:53.514073 | orchestrator | 03:01:53.514 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-26 03:01:53.514112 | orchestrator | 03:01:53.514 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-26 03:01:53.514157 | orchestrator | 03:01:53.514 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-26 03:01:53.514214 | orchestrator | 03:01:53.514 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-26 03:01:53.514254 | orchestrator | 03:01:53.514 STDOUT terraform:  + all_tags = (known 
after apply) 2025-05-26 03:01:53.514292 | orchestrator | 03:01:53.514 STDOUT terraform:  + device_id = (known after apply) 2025-05-26 03:01:53.514329 | orchestrator | 03:01:53.514 STDOUT terraform:  + device_owner = (known after apply) 2025-05-26 03:01:53.514367 | orchestrator | 03:01:53.514 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-26 03:01:53.514405 | orchestrator | 03:01:53.514 STDOUT terraform:  + dns_name = (known after apply) 2025-05-26 03:01:53.514449 | orchestrator | 03:01:53.514 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.514486 | orchestrator | 03:01:53.514 STDOUT terraform:  + mac_address = (known after apply) 2025-05-26 03:01:53.514684 | orchestrator | 03:01:53.514 STDOUT terraform:  + network_id = (known after apply) 2025-05-26 03:01:53.514774 | orchestrator | 03:01:53.514 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-26 03:01:53.514788 | orchestrator | 03:01:53.514 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-26 03:01:53.514812 | orchestrator | 03:01:53.514 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.514824 | orchestrator | 03:01:53.514 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-26 03:01:53.514835 | orchestrator | 03:01:53.514 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-26 03:01:53.514846 | orchestrator | 03:01:53.514 STDOUT terraform:  + allowed_address_pairs { 2025-05-26 03:01:53.514858 | orchestrator | 03:01:53.514 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-26 03:01:53.514870 | orchestrator | 03:01:53.514 STDOUT terraform:  } 2025-05-26 03:01:53.514881 | orchestrator | 03:01:53.514 STDOUT terraform:  + allowed_address_pairs { 2025-05-26 03:01:53.514913 | orchestrator | 03:01:53.514 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-26 03:01:53.514929 | orchestrator | 03:01:53.514 STDOUT terraform:  } 2025-05-26 03:01:53.514940 | orchestrator | 03:01:53.514 
STDOUT terraform:  + allowed_address_pairs { 2025-05-26 03:01:53.514951 | orchestrator | 03:01:53.514 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-26 03:01:53.514962 | orchestrator | 03:01:53.514 STDOUT terraform:  } 2025-05-26 03:01:53.514973 | orchestrator | 03:01:53.514 STDOUT terraform:  + allowed_address_pairs { 2025-05-26 03:01:53.514984 | orchestrator | 03:01:53.514 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-26 03:01:53.514995 | orchestrator | 03:01:53.514 STDOUT terraform:  } 2025-05-26 03:01:53.515010 | orchestrator | 03:01:53.514 STDOUT terraform:  + binding (known after apply) 2025-05-26 03:01:53.515021 | orchestrator | 03:01:53.514 STDOUT terraform:  + fixed_ip { 2025-05-26 03:01:53.515032 | orchestrator | 03:01:53.514 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-05-26 03:01:53.515043 | orchestrator | 03:01:53.514 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-26 03:01:53.515058 | orchestrator | 03:01:53.514 STDOUT terraform:  } 2025-05-26 03:01:53.515070 | orchestrator | 03:01:53.515 STDOUT terraform:  } 2025-05-26 03:01:53.515085 | orchestrator | 03:01:53.515 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-05-26 03:01:53.515156 | orchestrator | 03:01:53.515 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-26 03:01:53.515170 | orchestrator | 03:01:53.515 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-26 03:01:53.515185 | orchestrator | 03:01:53.515 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-26 03:01:53.515226 | orchestrator | 03:01:53.515 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-26 03:01:53.515334 | orchestrator | 03:01:53.515 STDOUT terraform:  + all_tags = (known after apply) 2025-05-26 03:01:53.515359 | orchestrator | 03:01:53.515 STDOUT terraform:  + device_id = (known after apply) 2025-05-26 03:01:53.515366 | orchestrator | 
03:01:53.515 STDOUT terraform:  + device_owner = (known after apply) 2025-05-26 03:01:53.515386 | orchestrator | 03:01:53.515 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-26 03:01:53.515423 | orchestrator | 03:01:53.515 STDOUT terraform:  + dns_name = (known after apply) 2025-05-26 03:01:53.515463 | orchestrator | 03:01:53.515 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.515501 | orchestrator | 03:01:53.515 STDOUT terraform:  + mac_address = (known after apply) 2025-05-26 03:01:53.515551 | orchestrator | 03:01:53.515 STDOUT terraform:  + network_id = (known after apply) 2025-05-26 03:01:53.515589 | orchestrator | 03:01:53.515 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-26 03:01:53.515628 | orchestrator | 03:01:53.515 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-26 03:01:53.515669 | orchestrator | 03:01:53.515 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.515703 | orchestrator | 03:01:53.515 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-26 03:01:53.515740 | orchestrator | 03:01:53.515 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-26 03:01:53.515758 | orchestrator | 03:01:53.515 STDOUT terraform:  + allowed_address_pairs { 2025-05-26 03:01:53.515788 | orchestrator | 03:01:53.515 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-26 03:01:53.515796 | orchestrator | 03:01:53.515 STDOUT terraform:  } 2025-05-26 03:01:53.515817 | orchestrator | 03:01:53.515 STDOUT terraform:  + allowed_address_pairs { 2025-05-26 03:01:53.515848 | orchestrator | 03:01:53.515 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-26 03:01:53.515854 | orchestrator | 03:01:53.515 STDOUT terraform:  } 2025-05-26 03:01:53.515892 | orchestrator | 03:01:53.515 STDOUT terraform:  + allowed_address_pairs { 2025-05-26 03:01:53.515922 | orchestrator | 03:01:53.515 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-26 
03:01:53.515928 | orchestrator | 03:01:53.515 STDOUT terraform:  } 2025-05-26 03:01:53.515952 | orchestrator | 03:01:53.515 STDOUT terraform:  + allowed_address_pairs { 2025-05-26 03:01:53.515980 | orchestrator | 03:01:53.515 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-26 03:01:53.515987 | orchestrator | 03:01:53.515 STDOUT terraform:  } 2025-05-26 03:01:53.516015 | orchestrator | 03:01:53.515 STDOUT terraform:  + binding (known after apply) 2025-05-26 03:01:53.516021 | orchestrator | 03:01:53.516 STDOUT terraform:  + fixed_ip { 2025-05-26 03:01:53.516051 | orchestrator | 03:01:53.516 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-05-26 03:01:53.516082 | orchestrator | 03:01:53.516 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-26 03:01:53.516088 | orchestrator | 03:01:53.516 STDOUT terraform:  } 2025-05-26 03:01:53.516105 | orchestrator | 03:01:53.516 STDOUT terraform:  } 2025-05-26 03:01:53.516151 | orchestrator | 03:01:53.516 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-05-26 03:01:53.516197 | orchestrator | 03:01:53.516 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-26 03:01:53.516236 | orchestrator | 03:01:53.516 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-26 03:01:53.516275 | orchestrator | 03:01:53.516 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-26 03:01:53.516320 | orchestrator | 03:01:53.516 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-26 03:01:53.516379 | orchestrator | 03:01:53.516 STDOUT terraform:  + all_tags = (known after apply) 2025-05-26 03:01:53.516417 | orchestrator | 03:01:53.516 STDOUT terraform:  + device_id = (known after apply) 2025-05-26 03:01:53.516455 | orchestrator | 03:01:53.516 STDOUT terraform:  + device_owner = (known after apply) 2025-05-26 03:01:53.516491 | orchestrator | 03:01:53.516 STDOUT terraform:  + dns_assignment = (known 
after apply) 2025-05-26 03:01:53.516557 | orchestrator | 03:01:53.516 STDOUT terraform:  + dns_name = (known after apply) 2025-05-26 03:01:53.516596 | orchestrator | 03:01:53.516 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.516635 | orchestrator | 03:01:53.516 STDOUT terraform:  + mac_address = (known after apply) 2025-05-26 03:01:53.516674 | orchestrator | 03:01:53.516 STDOUT terraform:  + network_id = (known after apply) 2025-05-26 03:01:53.516711 | orchestrator | 03:01:53.516 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-26 03:01:53.516751 | orchestrator | 03:01:53.516 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-26 03:01:53.516790 | orchestrator | 03:01:53.516 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.516829 | orchestrator | 03:01:53.516 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-26 03:01:53.516867 | orchestrator | 03:01:53.516 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-26 03:01:53.516884 | orchestrator | 03:01:53.516 STDOUT terraform:  + allowed_address_pairs { 2025-05-26 03:01:53.516906 | orchestrator | 03:01:53.516 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-26 03:01:53.516913 | orchestrator | 03:01:53.516 STDOUT terraform:  } 2025-05-26 03:01:53.516937 | orchestrator | 03:01:53.516 STDOUT terraform:  + allowed_address_pairs { 2025-05-26 03:01:53.516968 | orchestrator | 03:01:53.516 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-26 03:01:53.516974 | orchestrator | 03:01:53.516 STDOUT terraform:  } 2025-05-26 03:01:53.516998 | orchestrator | 03:01:53.516 STDOUT terraform:  + allowed_address_pairs { 2025-05-26 03:01:53.517028 | orchestrator | 03:01:53.516 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-26 03:01:53.517034 | orchestrator | 03:01:53.517 STDOUT terraform:  } 2025-05-26 03:01:53.517060 | orchestrator | 03:01:53.517 STDOUT terraform:  + allowed_address_pairs { 
2025-05-26 03:01:53.517089 | orchestrator | 03:01:53.517 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-26 03:01:53.517096 | orchestrator | 03:01:53.517 STDOUT terraform:  } 2025-05-26 03:01:53.517125 | orchestrator | 03:01:53.517 STDOUT terraform:  + binding (known after apply) 2025-05-26 03:01:53.517131 | orchestrator | 03:01:53.517 STDOUT terraform:  + fixed_ip { 2025-05-26 03:01:53.517161 | orchestrator | 03:01:53.517 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-05-26 03:01:53.517193 | orchestrator | 03:01:53.517 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-26 03:01:53.517200 | orchestrator | 03:01:53.517 STDOUT terraform:  } 2025-05-26 03:01:53.517215 | orchestrator | 03:01:53.517 STDOUT terraform:  } 2025-05-26 03:01:53.517265 | orchestrator | 03:01:53.517 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-05-26 03:01:53.517315 | orchestrator | 03:01:53.517 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-05-26 03:01:53.517332 | orchestrator | 03:01:53.517 STDOUT terraform:  + force_destroy = false 2025-05-26 03:01:53.517362 | orchestrator | 03:01:53.517 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.517397 | orchestrator | 03:01:53.517 STDOUT terraform:  + port_id = (known after apply) 2025-05-26 03:01:53.517447 | orchestrator | 03:01:53.517 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.517478 | orchestrator | 03:01:53.517 STDOUT terraform:  + router_id = (known after apply) 2025-05-26 03:01:53.517523 | orchestrator | 03:01:53.517 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-26 03:01:53.517533 | orchestrator | 03:01:53.517 STDOUT terraform:  } 2025-05-26 03:01:53.517659 | orchestrator | 03:01:53.517 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-05-26 03:01:53.517698 | orchestrator | 03:01:53.517 STDOUT terraform:  + resource 
"openstack_networking_router_v2" "router" { 2025-05-26 03:01:53.517719 | orchestrator | 03:01:53.517 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-26 03:01:53.517734 | orchestrator | 03:01:53.517 STDOUT terraform:  + all_tags = (known after apply) 2025-05-26 03:01:53.517746 | orchestrator | 03:01:53.517 STDOUT terraform:  + availability_zone_hints = [ 2025-05-26 03:01:53.517759 | orchestrator | 03:01:53.517 STDOUT terraform:  + "nova", 2025-05-26 03:01:53.517779 | orchestrator | 03:01:53.517 STDOUT terraform:  ] 2025-05-26 03:01:53.517791 | orchestrator | 03:01:53.517 STDOUT terraform:  + distributed = (known after apply) 2025-05-26 03:01:53.517806 | orchestrator | 03:01:53.517 STDOUT terraform:  + enable_snat = (known after apply) 2025-05-26 03:01:53.517849 | orchestrator | 03:01:53.517 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-05-26 03:01:53.517903 | orchestrator | 03:01:53.517 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.517917 | orchestrator | 03:01:53.517 STDOUT terraform:  + name = "testbed" 2025-05-26 03:01:53.517954 | orchestrator | 03:01:53.517 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.517971 | orchestrator | 03:01:53.517 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-26 03:01:53.518046 | orchestrator | 03:01:53.517 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-05-26 03:01:53.518063 | orchestrator | 03:01:53.517 STDOUT terraform:  } 2025-05-26 03:01:53.518105 | orchestrator | 03:01:53.518 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-05-26 03:01:53.518192 | orchestrator | 03:01:53.518 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-05-26 03:01:53.518231 | orchestrator | 03:01:53.518 STDOUT terraform:  + description = "ssh" 2025-05-26 03:01:53.518268 | orchestrator | 03:01:53.518 STDOUT 
terraform:  + direction = "ingress" 2025-05-26 03:01:53.518305 | orchestrator | 03:01:53.518 STDOUT terraform:  + ethertype = "IPv4" 2025-05-26 03:01:53.518343 | orchestrator | 03:01:53.518 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.518360 | orchestrator | 03:01:53.518 STDOUT terraform:  + port_range_max = 22 2025-05-26 03:01:53.518402 | orchestrator | 03:01:53.518 STDOUT terraform:  + port_range_min = 22 2025-05-26 03:01:53.518419 | orchestrator | 03:01:53.518 STDOUT terraform:  + protocol = "tcp" 2025-05-26 03:01:53.518457 | orchestrator | 03:01:53.518 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.518506 | orchestrator | 03:01:53.518 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-26 03:01:53.518572 | orchestrator | 03:01:53.518 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-26 03:01:53.518584 | orchestrator | 03:01:53.518 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-26 03:01:53.518623 | orchestrator | 03:01:53.518 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-26 03:01:53.518636 | orchestrator | 03:01:53.518 STDOUT terraform:  } 2025-05-26 03:01:53.518684 | orchestrator | 03:01:53.518 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-26 03:01:53.518742 | orchestrator | 03:01:53.518 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-26 03:01:53.518759 | orchestrator | 03:01:53.518 STDOUT terraform:  + description = "wireguard" 2025-05-26 03:01:53.518774 | orchestrator | 03:01:53.518 STDOUT terraform:  + direction = "ingress" 2025-05-26 03:01:53.518810 | orchestrator | 03:01:53.518 STDOUT terraform:  + ethertype = "IPv4" 2025-05-26 03:01:53.518827 | orchestrator | 03:01:53.518 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.518841 | orchestrator | 03:01:53.518 STDOUT terraform:  + port_range_max = 51820 
2025-05-26 03:01:53.518877 | orchestrator | 03:01:53.518 STDOUT terraform:  + port_range_min = 51820 2025-05-26 03:01:53.518893 | orchestrator | 03:01:53.518 STDOUT terraform:  + protocol = "udp" 2025-05-26 03:01:53.518907 | orchestrator | 03:01:53.518 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.518949 | orchestrator | 03:01:53.518 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-26 03:01:53.518966 | orchestrator | 03:01:53.518 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-26 03:01:53.519002 | orchestrator | 03:01:53.518 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-26 03:01:53.519018 | orchestrator | 03:01:53.518 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-26 03:01:53.519045 | orchestrator | 03:01:53.519 STDOUT terraform:  } 2025-05-26 03:01:53.519095 | orchestrator | 03:01:53.519 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-26 03:01:53.519150 | orchestrator | 03:01:53.519 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-26 03:01:53.519166 | orchestrator | 03:01:53.519 STDOUT terraform:  + direction = "ingress" 2025-05-26 03:01:53.519180 | orchestrator | 03:01:53.519 STDOUT terraform:  + ethertype = "IPv4" 2025-05-26 03:01:53.519225 | orchestrator | 03:01:53.519 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.519243 | orchestrator | 03:01:53.519 STDOUT terraform:  + protocol = "tcp" 2025-05-26 03:01:53.519257 | orchestrator | 03:01:53.519 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.519298 | orchestrator | 03:01:53.519 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-26 03:01:53.519315 | orchestrator | 03:01:53.519 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-26 03:01:53.519355 | orchestrator | 03:01:53.519 STDOUT terraform:  + security_group_id = (known 
after apply) 2025-05-26 03:01:53.519384 | orchestrator | 03:01:53.519 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-26 03:01:53.519407 | orchestrator | 03:01:53.519 STDOUT terraform:  } 2025-05-26 03:01:53.519453 | orchestrator | 03:01:53.519 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-26 03:01:53.519526 | orchestrator | 03:01:53.519 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-26 03:01:53.519573 | orchestrator | 03:01:53.519 STDOUT terraform:  + direction = "ingress" 2025-05-26 03:01:53.519650 | orchestrator | 03:01:53.519 STDOUT terraform:  + ethertype = "IPv4" 2025-05-26 03:01:53.519664 | orchestrator | 03:01:53.519 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.519675 | orchestrator | 03:01:53.519 STDOUT terraform:  + protocol = "udp" 2025-05-26 03:01:53.519690 | orchestrator | 03:01:53.519 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.519704 | orchestrator | 03:01:53.519 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-26 03:01:53.519743 | orchestrator | 03:01:53.519 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-26 03:01:53.519759 | orchestrator | 03:01:53.519 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-26 03:01:53.519800 | orchestrator | 03:01:53.519 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-26 03:01:53.519817 | orchestrator | 03:01:53.519 STDOUT terraform:  } 2025-05-26 03:01:53.519868 | orchestrator | 03:01:53.519 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-26 03:01:53.519924 | orchestrator | 03:01:53.519 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-26 03:01:53.519941 | orchestrator | 03:01:53.519 STDOUT terraform:  + direction = "ingress" 
2025-05-26 03:01:53.519955 | orchestrator | 03:01:53.519 STDOUT terraform:  + ethertype = "IPv4" 2025-05-26 03:01:53.519997 | orchestrator | 03:01:53.519 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.520013 | orchestrator | 03:01:53.519 STDOUT terraform:  + protocol = "icmp" 2025-05-26 03:01:53.520039 | orchestrator | 03:01:53.520 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.520077 | orchestrator | 03:01:53.520 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-26 03:01:53.520093 | orchestrator | 03:01:53.520 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-26 03:01:53.520129 | orchestrator | 03:01:53.520 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-26 03:01:53.520145 | orchestrator | 03:01:53.520 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-26 03:01:53.520160 | orchestrator | 03:01:53.520 STDOUT terraform:  } 2025-05-26 03:01:53.520220 | orchestrator | 03:01:53.520 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-26 03:01:53.520272 | orchestrator | 03:01:53.520 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-26 03:01:53.520300 | orchestrator | 03:01:53.520 STDOUT terraform:  + direction = "ingress" 2025-05-26 03:01:53.520316 | orchestrator | 03:01:53.520 STDOUT terraform:  + ethertype = "IPv4" 2025-05-26 03:01:53.520340 | orchestrator | 03:01:53.520 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.520374 | orchestrator | 03:01:53.520 STDOUT terraform:  + protocol = "tcp" 2025-05-26 03:01:53.520390 | orchestrator | 03:01:53.520 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.520438 | orchestrator | 03:01:53.520 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-26 03:01:53.520478 | orchestrator | 03:01:53.520 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-26 
03:01:53.520537 | orchestrator | 03:01:53.520 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-26 03:01:53.520554 | orchestrator | 03:01:53.520 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-26 03:01:53.520566 | orchestrator | 03:01:53.520 STDOUT terraform:  } 2025-05-26 03:01:53.520731 | orchestrator | 03:01:53.520 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-26 03:01:53.520771 | orchestrator | 03:01:53.520 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-26 03:01:53.520785 | orchestrator | 03:01:53.520 STDOUT terraform:  + direction = "ingress" 2025-05-26 03:01:53.520794 | orchestrator | 03:01:53.520 STDOUT terraform:  + ethertype = "IPv4" 2025-05-26 03:01:53.520803 | orchestrator | 03:01:53.520 STDOUT terraform:  + id = (known after apply) 2025-05-26 03:01:53.520810 | orchestrator | 03:01:53.520 STDOUT terraform:  + protocol = "udp" 2025-05-26 03:01:53.520819 | orchestrator | 03:01:53.520 STDOUT terraform:  + region = (known after apply) 2025-05-26 03:01:53.520829 | orchestrator | 03:01:53.520 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-26 03:01:53.520858 | orchestrator | 03:01:53.520 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-26 03:01:53.520892 | orchestrator | 03:01:53.520 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-26 03:01:53.520918 | orchestrator | 03:01:53.520 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-26 03:01:53.520928 | orchestrator | 03:01:53.520 STDOUT terraform:  } 2025-05-26 03:01:53.520996 | orchestrator | 03:01:53.520 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-26 03:01:53.521048 | orchestrator | 03:01:53.520 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-26 03:01:53.521073 | orchestrator 
| 03:01:53.521 STDOUT terraform:  + direction = "ingress"
2025-05-26 03:01:53.521084 | orchestrator | 03:01:53.521 STDOUT terraform:  + ethertype = "IPv4"
2025-05-26 03:01:53.521125 | orchestrator | 03:01:53.521 STDOUT terraform:  + id = (known after apply)
2025-05-26 03:01:53.521135 | orchestrator | 03:01:53.521 STDOUT terraform:  + protocol = "icmp"
2025-05-26 03:01:53.521174 | orchestrator | 03:01:53.521 STDOUT terraform:  + region = (known after apply)
2025-05-26 03:01:53.521209 | orchestrator | 03:01:53.521 STDOUT terraform:  + remote_group_id = (known after apply)
2025-05-26 03:01:53.521253 | orchestrator | 03:01:53.521 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-05-26 03:01:53.521276 | orchestrator | 03:01:53.521 STDOUT terraform:  + security_group_id = (known after apply)
2025-05-26 03:01:53.521314 | orchestrator | 03:01:53.521 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-26 03:01:53.521323 | orchestrator | 03:01:53.521 STDOUT terraform:  }
2025-05-26 03:01:53.521374 | orchestrator | 03:01:53.521 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-05-26 03:01:53.521427 | orchestrator | 03:01:53.521 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-05-26 03:01:53.521440 | orchestrator | 03:01:53.521 STDOUT terraform:  + description = "vrrp"
2025-05-26 03:01:53.521463 | orchestrator | 03:01:53.521 STDOUT terraform:  + direction = "ingress"
2025-05-26 03:01:53.521474 | orchestrator | 03:01:53.521 STDOUT terraform:  + ethertype = "IPv4"
2025-05-26 03:01:53.521527 | orchestrator | 03:01:53.521 STDOUT terraform:  + id = (known after apply)
2025-05-26 03:01:53.521557 | orchestrator | 03:01:53.521 STDOUT terraform:  + protocol = "112"
2025-05-26 03:01:53.521582 | orchestrator | 03:01:53.521 STDOUT terraform:  + region = (known after apply)
2025-05-26 03:01:53.521613 | orchestrator | 03:01:53.521 STDOUT terraform:  + remote_group_id = (known after apply)
2025-05-26 03:01:53.521637 | orchestrator | 03:01:53.521 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-05-26 03:01:53.521669 | orchestrator | 03:01:53.521 STDOUT terraform:  + security_group_id = (known after apply)
2025-05-26 03:01:53.521701 | orchestrator | 03:01:53.521 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-26 03:01:53.521711 | orchestrator | 03:01:53.521 STDOUT terraform:  }
2025-05-26 03:01:53.521793 | orchestrator | 03:01:53.521 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-05-26 03:01:53.521846 | orchestrator | 03:01:53.521 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-05-26 03:01:53.521870 | orchestrator | 03:01:53.521 STDOUT terraform:  + all_tags = (known after apply)
2025-05-26 03:01:53.521906 | orchestrator | 03:01:53.521 STDOUT terraform:  + description = "management security group"
2025-05-26 03:01:53.521929 | orchestrator | 03:01:53.521 STDOUT terraform:  + id = (known after apply)
2025-05-26 03:01:53.521961 | orchestrator | 03:01:53.521 STDOUT terraform:  + name = "testbed-management"
2025-05-26 03:01:53.521986 | orchestrator | 03:01:53.521 STDOUT terraform:  + region = (known after apply)
2025-05-26 03:01:53.522011 | orchestrator | 03:01:53.521 STDOUT terraform:  + stateful = (known after apply)
2025-05-26 03:01:53.522064 | orchestrator | 03:01:53.522 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-26 03:01:53.522071 | orchestrator | 03:01:53.522 STDOUT terraform:  }
2025-05-26 03:01:53.522109 | orchestrator | 03:01:53.522 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-05-26 03:01:53.522158 | orchestrator | 03:01:53.522 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-05-26 03:01:53.522183 | orchestrator | 03:01:53.522 STDOUT terraform:  + all_tags = (known after apply)
2025-05-26 03:01:53.522207 | orchestrator | 03:01:53.522 STDOUT terraform:  + description = "node security group"
2025-05-26 03:01:53.522246 | orchestrator | 03:01:53.522 STDOUT terraform:  + id = (known after apply)
2025-05-26 03:01:53.522256 | orchestrator | 03:01:53.522 STDOUT terraform:  + name = "testbed-node"
2025-05-26 03:01:53.522292 | orchestrator | 03:01:53.522 STDOUT terraform:  + region = (known after apply)
2025-05-26 03:01:53.522318 | orchestrator | 03:01:53.522 STDOUT terraform:  + stateful = (known after apply)
2025-05-26 03:01:53.522343 | orchestrator | 03:01:53.522 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-26 03:01:53.522353 | orchestrator | 03:01:53.522 STDOUT terraform:  }
2025-05-26 03:01:53.522397 | orchestrator | 03:01:53.522 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-05-26 03:01:53.522443 | orchestrator | 03:01:53.522 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-05-26 03:01:53.522474 | orchestrator | 03:01:53.522 STDOUT terraform:  + all_tags = (known after apply)
2025-05-26 03:01:53.522507 | orchestrator | 03:01:53.522 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-05-26 03:01:53.522541 | orchestrator | 03:01:53.522 STDOUT terraform:  + dns_nameservers = [
2025-05-26 03:01:53.522549 | orchestrator | 03:01:53.522 STDOUT terraform:  + "8.8.8.8",
2025-05-26 03:01:53.522558 | orchestrator | 03:01:53.522 STDOUT terraform:  + "9.9.9.9",
2025-05-26 03:01:53.522567 | orchestrator | 03:01:53.522 STDOUT terraform:  ]
2025-05-26 03:01:53.522590 | orchestrator | 03:01:53.522 STDOUT terraform:  + enable_dhcp = true
2025-05-26 03:01:53.522622 | orchestrator | 03:01:53.522 STDOUT terraform:  + gateway_ip = (known after apply)
2025-05-26 03:01:53.522652 | orchestrator | 03:01:53.522 STDOUT terraform:  + id = (known after apply)
2025-05-26 03:01:53.522662 | orchestrator | 03:01:53.522 STDOUT terraform:  + ip_version = 4
2025-05-26 03:01:53.522700 | orchestrator | 03:01:53.522 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-05-26 03:01:53.522733 | orchestrator | 03:01:53.522 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-05-26 03:01:53.522771 | orchestrator | 03:01:53.522 STDOUT terraform:  + name = "subnet-testbed-management"
2025-05-26 03:01:53.522803 | orchestrator | 03:01:53.522 STDOUT terraform:  + network_id = (known after apply)
2025-05-26 03:01:53.522812 | orchestrator | 03:01:53.522 STDOUT terraform:  + no_gateway = false
2025-05-26 03:01:53.522851 | orchestrator | 03:01:53.522 STDOUT terraform:  + region = (known after apply)
2025-05-26 03:01:53.522883 | orchestrator | 03:01:53.522 STDOUT terraform:  + service_types = (known after apply)
2025-05-26 03:01:53.522914 | orchestrator | 03:01:53.522 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-26 03:01:53.522923 | orchestrator | 03:01:53.522 STDOUT terraform:  + allocation_pool {
2025-05-26 03:01:53.522951 | orchestrator | 03:01:53.522 STDOUT terraform:  + end = "192.168.31.250"
2025-05-26 03:01:53.522975 | orchestrator | 03:01:53.522 STDOUT terraform:  + start = "192.168.31.200"
2025-05-26 03:01:53.522981 | orchestrator | 03:01:53.522 STDOUT terraform:  }
2025-05-26 03:01:53.522996 | orchestrator | 03:01:53.522 STDOUT terraform:  }
2025-05-26 03:01:53.523023 | orchestrator | 03:01:53.522 STDOUT terraform:  # terraform_data.image will be created
2025-05-26 03:01:53.523068 | orchestrator | 03:01:53.523 STDOUT terraform:  + resource "terraform_data" "image" {
2025-05-26 03:01:53.523075 | orchestrator | 03:01:53.523 STDOUT terraform:  + id = (known after apply)
2025-05-26 03:01:53.523080 | orchestrator | 03:01:53.523 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-05-26 03:01:53.523107 | orchestrator | 03:01:53.523 STDOUT terraform:  + output = (known after apply)
2025-05-26 03:01:53.523113 | orchestrator | 03:01:53.523 STDOUT terraform:  }
2025-05-26 03:01:53.523146 | orchestrator | 03:01:53.523 STDOUT terraform:  # terraform_data.image_node will be created
2025-05-26 03:01:53.523174 | orchestrator | 03:01:53.523 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-05-26 03:01:53.523198 | orchestrator | 03:01:53.523 STDOUT terraform:  + id = (known after apply)
2025-05-26 03:01:53.523221 | orchestrator | 03:01:53.523 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-05-26 03:01:53.523247 | orchestrator | 03:01:53.523 STDOUT terraform:  + output = (known after apply)
2025-05-26 03:01:53.523253 | orchestrator | 03:01:53.523 STDOUT terraform:  }
2025-05-26 03:01:53.523287 | orchestrator | 03:01:53.523 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-05-26 03:01:53.523293 | orchestrator | 03:01:53.523 STDOUT terraform: Changes to Outputs:
2025-05-26 03:01:53.523323 | orchestrator | 03:01:53.523 STDOUT terraform:  + manager_address = (sensitive value)
2025-05-26 03:01:53.523347 | orchestrator | 03:01:53.523 STDOUT terraform:  + private_key = (sensitive value)
2025-05-26 03:01:53.725314 | orchestrator | 03:01:53.725 STDOUT terraform: terraform_data.image: Creating...
2025-05-26 03:01:53.725409 | orchestrator | 03:01:53.725 STDOUT terraform: terraform_data.image_node: Creating...
2025-05-26 03:01:53.725495 | orchestrator | 03:01:53.725 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=cc7855ea-0628-9a0c-fbe3-540e060f169b]
2025-05-26 03:01:53.733741 | orchestrator | 03:01:53.733 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=669b644e-4e83-38c0-7f46-edf510193529]
2025-05-26 03:01:53.741619 | orchestrator | 03:01:53.741 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-05-26 03:01:53.741698 | orchestrator | 03:01:53.741 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-05-26 03:01:53.750716 | orchestrator | 03:01:53.750 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
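For reference, the VRRP security-group rule and the management subnet reported in the plan above correspond to resource definitions along these lines. This is a sketch reconstructed from the plan output only; the actual testbed Terraform source may use variables, and the resource references (`security_group_id`, `network_id`) are assumptions.

```hcl
# Sketch only: attribute values are taken from the plan output above;
# the cross-resource references are assumed, not confirmed by the log.

resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"        # VRRP is IP protocol number 112
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id  # assumed target
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id  # assumed source
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```

Note that the allocation pool (192.168.31.200-250) sits at the top of the /20, leaving the rest of the range free for statically addressed nodes.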
2025-05-26 03:01:53.751792 | orchestrator | 03:01:53.751 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-05-26 03:01:53.751871 | orchestrator | 03:01:53.751 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-05-26 03:01:53.752095 | orchestrator | 03:01:53.752 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-05-26 03:01:53.752972 | orchestrator | 03:01:53.752 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-05-26 03:01:53.755055 | orchestrator | 03:01:53.754 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-05-26 03:01:53.759287 | orchestrator | 03:01:53.759 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-05-26 03:01:53.759347 | orchestrator | 03:01:53.759 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-05-26 03:01:54.188689 | orchestrator | 03:01:54.188 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-05-26 03:01:54.202459 | orchestrator | 03:01:54.202 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-05-26 03:01:54.206848 | orchestrator | 03:01:54.206 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-05-26 03:01:54.217868 | orchestrator | 03:01:54.217 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-05-26 03:01:54.317271 | orchestrator | 03:01:54.316 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-05-26 03:01:54.326659 | orchestrator | 03:01:54.326 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-05-26 03:01:59.789055 | orchestrator | 03:01:59.788 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=3085da11-cb2e-4a7a-bd51-bf3bf301d426]
2025-05-26 03:01:59.801759 | orchestrator | 03:01:59.801 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-05-26 03:02:03.752764 | orchestrator | 03:02:03.752 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-05-26 03:02:03.753725 | orchestrator | 03:02:03.753 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-05-26 03:02:03.753860 | orchestrator | 03:02:03.753 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-05-26 03:02:03.754014 | orchestrator | 03:02:03.753 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-05-26 03:02:03.760277 | orchestrator | 03:02:03.760 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-05-26 03:02:03.760407 | orchestrator | 03:02:03.760 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
2025-05-26 03:02:04.203402 | orchestrator | 03:02:04.202 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-05-26 03:02:04.219616 | orchestrator | 03:02:04.219 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-05-26 03:02:04.328059 | orchestrator | 03:02:04.327 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-05-26 03:02:04.333165 | orchestrator | 03:02:04.332 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=8267a69f-7007-4a62-b03d-616d3aa09f53]
2025-05-26 03:02:04.340845 | orchestrator | 03:02:04.340 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-05-26 03:02:04.344248 | orchestrator | 03:02:04.343 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=2a6da8ab-439b-4c92-86f2-b8912a630d10]
2025-05-26 03:02:04.352085 | orchestrator | 03:02:04.351 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-05-26 03:02:04.359745 | orchestrator | 03:02:04.359 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=d6e6216b-cbe0-4182-a9d6-b0841cd13c95]
2025-05-26 03:02:04.361861 | orchestrator | 03:02:04.361 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=b8fa87d6-4bbf-4e23-9059-3efb42beefcf]
2025-05-26 03:02:04.367882 | orchestrator | 03:02:04.367 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-05-26 03:02:04.369921 | orchestrator | 03:02:04.369 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-05-26 03:02:04.386321 | orchestrator | 03:02:04.386 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=feee3a86-288f-4310-9e74-72f077da2d2c]
2025-05-26 03:02:04.388389 | orchestrator | 03:02:04.388 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=21cb62ce-763a-41a7-95e4-caebeb5b0a4b]
2025-05-26 03:02:04.392814 | orchestrator | 03:02:04.392 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-05-26 03:02:04.394499 | orchestrator | 03:02:04.394 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-05-26 03:02:04.443735 | orchestrator | 03:02:04.443 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=c087d35e-df49-49d8-817c-07623fd598fd]
2025-05-26 03:02:04.451391 | orchestrator | 03:02:04.451 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=ae6d7dd5-5925-42d7-939c-6a68dbf2df83]
2025-05-26 03:02:04.459589 | orchestrator | 03:02:04.459 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-05-26 03:02:04.465753 | orchestrator | 03:02:04.465 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-05-26 03:02:04.466095 | orchestrator | 03:02:04.465 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=d4410bec72af56b6f8cd404c14b5fb44ada6767d]
2025-05-26 03:02:04.473093 | orchestrator | 03:02:04.472 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=a560f722af0b4cf63eee80954fde8e2f035e794c]
2025-05-26 03:02:04.473646 | orchestrator | 03:02:04.473 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-05-26 03:02:04.538407 | orchestrator | 03:02:04.537 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=a2c1486d-cd17-4e79-bfde-447100a0feef]
2025-05-26 03:02:09.805106 | orchestrator | 03:02:09.804 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-05-26 03:02:10.125301 | orchestrator | 03:02:10.124 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=7a53b112-6d9c-456e-a83a-d8e66d9d8bb7]
2025-05-26 03:02:10.377321 | orchestrator | 03:02:10.376 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=d7875ee5-3dad-4104-ad92-ce486edc42c6]
2025-05-26 03:02:10.383242 | orchestrator | 03:02:10.383 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-05-26 03:02:14.342567 | orchestrator | 03:02:14.342 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-05-26 03:02:14.353800 | orchestrator | 03:02:14.353 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-05-26 03:02:14.369115 | orchestrator | 03:02:14.368 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-05-26 03:02:14.371416 | orchestrator | 03:02:14.371 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-05-26 03:02:14.394778 | orchestrator | 03:02:14.394 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-05-26 03:02:14.395058 | orchestrator | 03:02:14.394 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-05-26 03:02:14.702256 | orchestrator | 03:02:14.701 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=cb0ef7c9-a457-4314-80d8-8124f9b601d7]
2025-05-26 03:02:14.744325 | orchestrator | 03:02:14.743 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=a22053f6-7fcf-48d3-9817-9fbbcd6d287f]
2025-05-26 03:02:14.757241 | orchestrator | 03:02:14.756 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=9d4bdcb5-a0b8-4173-af9e-b961e366e943]
2025-05-26 03:02:14.777762 | orchestrator | 03:02:14.777 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=b7f3fd9d-1192-4258-8d4e-892581ff9bff]
2025-05-26 03:02:14.781209 | orchestrator | 03:02:14.780 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=abaa748c-97b3-4e70-8935-2e6927d8d198]
2025-05-26 03:02:15.562816 | orchestrator | 03:02:15.562 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 12s [id=ecdb3589-053a-4a72-b22e-6f1393b6e1c0]
2025-05-26 03:02:17.873241 | orchestrator | 03:02:17.872 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=54ca0f72-2289-4bb2-a05e-7d69fc227fdf]
2025-05-26 03:02:17.878965 | orchestrator | 03:02:17.878 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-05-26 03:02:17.880931 | orchestrator | 03:02:17.880 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-05-26 03:02:17.883672 | orchestrator | 03:02:17.883 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-05-26 03:02:18.073966 | orchestrator | 03:02:18.073 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=9732b9f9-9bbe-4032-b769-6b0e30ef5ee8]
2025-05-26 03:02:18.090643 | orchestrator | 03:02:18.090 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=a226eded-1ead-45fd-8cd4-376f36ae2096]
2025-05-26 03:02:18.090888 | orchestrator | 03:02:18.090 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-05-26 03:02:18.091669 | orchestrator | 03:02:18.091 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-05-26 03:02:18.091701 | orchestrator | 03:02:18.091 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-05-26 03:02:18.092180 | orchestrator | 03:02:18.092 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-05-26 03:02:18.094312 | orchestrator | 03:02:18.094 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-05-26 03:02:18.095906 | orchestrator | 03:02:18.095 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-05-26 03:02:18.100232 | orchestrator | 03:02:18.100 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-05-26 03:02:18.102922 | orchestrator | 03:02:18.102 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-05-26 03:02:18.106625 | orchestrator | 03:02:18.106 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-05-26 03:02:18.271958 | orchestrator | 03:02:18.271 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=83e7138d-9655-4db2-96dd-02d957d73592]
2025-05-26 03:02:18.272392 | orchestrator | 03:02:18.272 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=5533149d-b545-4e04-88c6-6dce5054bcde]
2025-05-26 03:02:18.278278 | orchestrator | 03:02:18.278 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-05-26 03:02:18.284230 | orchestrator | 03:02:18.284 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-05-26 03:02:18.424425 | orchestrator | 03:02:18.423 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=881dcc25-1897-4d46-affa-41594699e162]
2025-05-26 03:02:18.438900 | orchestrator | 03:02:18.438 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-05-26 03:02:18.469109 | orchestrator | 03:02:18.468 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=fe145fb9-81a0-46ea-873c-6dda2e61ddd2]
2025-05-26 03:02:18.485761 | orchestrator | 03:02:18.485 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-05-26 03:02:18.570427 | orchestrator | 03:02:18.569 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=0613cca7-036b-4550-8f4d-4dc03410a9b5]
2025-05-26 03:02:18.585452 | orchestrator | 03:02:18.585 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-05-26 03:02:18.754271 | orchestrator | 03:02:18.753 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=fed8815e-bbd7-4397-a5c6-080cbd9277aa]
2025-05-26 03:02:18.770812 | orchestrator | 03:02:18.770 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-05-26 03:02:18.800386 | orchestrator | 03:02:18.800 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=402b1057-c1ab-40c4-ada4-710adf2d55fd]
2025-05-26 03:02:18.806963 | orchestrator | 03:02:18.806 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-05-26 03:02:19.020195 | orchestrator | 03:02:19.019 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=dc534b2b-82aa-48d3-8833-3e6624731574]
2025-05-26 03:02:19.145283 | orchestrator | 03:02:19.144 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=d8252ba5-3204-4c3b-a924-565e1d8db0a3]
2025-05-26 03:02:24.085213 | orchestrator | 03:02:24.084 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=e69becee-ae83-4b9f-9cba-4b283765889f]
2025-05-26 03:02:24.104809 | orchestrator | 03:02:24.104 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=40adfb47-bcbe-419e-905d-88868e72c211]
2025-05-26 03:02:24.236971 | orchestrator | 03:02:24.236 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=6dc42444-4903-4524-a5ef-27412fb8d4ff]
2025-05-26 03:02:24.420198 | orchestrator | 03:02:24.419 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 5s [id=8206a756-e71e-4efc-8782-1fa95da42dd8]
2025-05-26 03:02:24.458262 | orchestrator | 03:02:24.457 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 5s [id=949929a6-29e4-45b2-9258-aa7109a063ee]
2025-05-26 03:02:24.477808 | orchestrator | 03:02:24.477 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=d81cd463-9f68-4db7-a5be-167414a57780]
2025-05-26 03:02:24.636735 | orchestrator | 03:02:24.636 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 7s [id=b0f36336-735b-4f2c-84c1-f7da79536853]
2025-05-26 03:02:25.699687 | orchestrator | 03:02:25.699 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=ee8d31c0-a683-42a0-9917-67f50f1a8425]
2025-05-26 03:02:25.712324 | orchestrator | 03:02:25.712 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-05-26 03:02:25.733881 | orchestrator | 03:02:25.733 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-05-26 03:02:25.743116 | orchestrator | 03:02:25.742 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-05-26 03:02:25.747507 | orchestrator | 03:02:25.747 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-05-26 03:02:25.747708 | orchestrator | 03:02:25.747 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-05-26 03:02:25.748192 | orchestrator | 03:02:25.748 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-05-26 03:02:25.761351 | orchestrator | 03:02:25.761 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-05-26 03:02:32.047859 | orchestrator | 03:02:32.047 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=45244e85-64ae-4688-b7bf-e0186ef368e0]
2025-05-26 03:02:32.056262 | orchestrator | 03:02:32.055 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-05-26 03:02:32.065011 | orchestrator | 03:02:32.064 STDOUT terraform: local_file.inventory: Creating...
2025-05-26 03:02:32.065120 | orchestrator | 03:02:32.065 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-05-26 03:02:32.071187 | orchestrator | 03:02:32.070 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=ed240747ce91cdf52f664c8cecd3e9af479dc6ae]
2025-05-26 03:02:32.073243 | orchestrator | 03:02:32.072 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=5bec058f56f8ab45c0677f303f1af199cae84fdb]
2025-05-26 03:02:32.746978 | orchestrator | 03:02:32.746 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=45244e85-64ae-4688-b7bf-e0186ef368e0]
2025-05-26 03:02:35.737095 | orchestrator | 03:02:35.736 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-05-26 03:02:35.745987 | orchestrator | 03:02:35.745 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-05-26 03:02:35.750458 | orchestrator | 03:02:35.750 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-05-26 03:02:35.751792 | orchestrator | 03:02:35.751 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-05-26 03:02:35.751903 | orchestrator | 03:02:35.751 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-05-26 03:02:35.762982 | orchestrator | 03:02:35.762 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-05-26 03:02:45.737190 | orchestrator | 03:02:45.736 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-05-26 03:02:45.746297 | orchestrator | 03:02:45.746 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-05-26 03:02:45.750766 | orchestrator | 03:02:45.750 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-05-26 03:02:45.751931 | orchestrator | 03:02:45.751 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-05-26 03:02:45.752011 | orchestrator | 03:02:45.751 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-05-26 03:02:45.763941 | orchestrator | 03:02:45.763 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-05-26 03:02:46.257287 | orchestrator | 03:02:46.256 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=409010e7-4d7f-471b-beaf-d3e0cdbaeb50]
2025-05-26 03:02:46.287837 | orchestrator | 03:02:46.287 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=35365e62-e701-48a6-8cb9-798b698f6725]
2025-05-26 03:02:46.400235 | orchestrator | 03:02:46.399 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=3a43d6ca-d062-4f8e-b6d8-840cd389e9a2]
2025-05-26 03:02:55.738456 | orchestrator | 03:02:55.738 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-05-26 03:02:55.751109 | orchestrator | 03:02:55.750 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-05-26 03:02:55.751877 | orchestrator | 03:02:55.751 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-05-26 03:02:56.120179 | orchestrator | 03:02:56.119 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 30s [id=5f18dbe3-a25a-4d26-a293-96763cbeaf66]
2025-05-26 03:02:56.305878 | orchestrator | 03:02:56.305 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=e92c04bf-955b-4903-91cc-db377fbc9b57]
2025-05-26 03:02:56.339981 | orchestrator | 03:02:56.339 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=fbc9007d-2a7c-47d2-8217-4f98ea4f19cc]
2025-05-26 03:02:56.362824 | orchestrator | 03:02:56.362 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-05-26 03:02:56.363084 | orchestrator | 03:02:56.362 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-05-26 03:02:56.366427 | orchestrator | 03:02:56.366 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-05-26 03:02:56.370835 | orchestrator | 03:02:56.370 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-05-26 03:02:56.376316 | orchestrator | 03:02:56.376 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-05-26 03:02:56.377558 | orchestrator | 03:02:56.377 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=2964847015497823916]
2025-05-26 03:02:56.378157 | orchestrator | 03:02:56.378 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-05-26 03:02:56.382332 | orchestrator | 03:02:56.381 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-05-26 03:02:56.382367 | orchestrator | 03:02:56.381 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-05-26 03:02:56.401975 | orchestrator | 03:02:56.401 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-05-26 03:02:56.402102 | orchestrator | 03:02:56.401 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-05-26 03:02:56.412407 | orchestrator | 03:02:56.412 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-05-26 03:03:01.674419 | orchestrator | 03:03:01.674 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=5f18dbe3-a25a-4d26-a293-96763cbeaf66/2a6da8ab-439b-4c92-86f2-b8912a630d10]
2025-05-26 03:03:01.707292 | orchestrator | 03:03:01.706 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=3a43d6ca-d062-4f8e-b6d8-840cd389e9a2/c087d35e-df49-49d8-817c-07623fd598fd]
2025-05-26 03:03:01.730638 | orchestrator | 03:03:01.730 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=35365e62-e701-48a6-8cb9-798b698f6725/21cb62ce-763a-41a7-95e4-caebeb5b0a4b]
2025-05-26 03:03:01.759550 | orchestrator | 03:03:01.758 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=5f18dbe3-a25a-4d26-a293-96763cbeaf66/b8fa87d6-4bbf-4e23-9059-3efb42beefcf]
2025-05-26 03:03:01.771384 | orchestrator | 03:03:01.770 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=3a43d6ca-d062-4f8e-b6d8-840cd389e9a2/d6e6216b-cbe0-4182-a9d6-b0841cd13c95]
2025-05-26 03:03:01.779384 | orchestrator | 03:03:01.779 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=5f18dbe3-a25a-4d26-a293-96763cbeaf66/a2c1486d-cd17-4e79-bfde-447100a0feef]
2025-05-26 03:03:01.789936 | orchestrator | 03:03:01.789 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=35365e62-e701-48a6-8cb9-798b698f6725/ae6d7dd5-5925-42d7-939c-6a68dbf2df83]
2025-05-26 03:03:01.794105 | orchestrator | 03:03:01.793 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=3a43d6ca-d062-4f8e-b6d8-840cd389e9a2/feee3a86-288f-4310-9e74-72f077da2d2c]
2025-05-26 03:03:01.808912 | orchestrator | 03:03:01.808 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=35365e62-e701-48a6-8cb9-798b698f6725/8267a69f-7007-4a62-b03d-616d3aa09f53]
2025-05-26 03:03:06.413361 | orchestrator | 03:03:06.413 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-05-26 03:03:16.418273 | orchestrator | 03:03:16.417 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-05-26 03:03:16.932048 | orchestrator | 03:03:16.931 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=3ac1670f-dcef-41c0-bd98-d9ade477ef4f]
2025-05-26 03:03:16.957894 | orchestrator | 03:03:16.957 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-05-26 03:03:16.957999 | orchestrator | 03:03:16.957 STDOUT terraform: Outputs: 2025-05-26 03:03:16.958097 | orchestrator | 03:03:16.957 STDOUT terraform: manager_address = 2025-05-26 03:03:16.958116 | orchestrator | 03:03:16.957 STDOUT terraform: private_key = 2025-05-26 03:03:17.121466 | orchestrator | ok: Runtime: 0:01:34.593845 2025-05-26 03:03:17.161132 | 2025-05-26 03:03:17.161278 | TASK [Create infrastructure (stable)] 2025-05-26 03:03:17.731346 | orchestrator | skipping: Conditional result was False 2025-05-26 03:03:17.740682 | 2025-05-26 03:03:17.740814 | TASK [Fetch manager address] 2025-05-26 03:03:18.294713 | orchestrator | ok 2025-05-26 03:03:18.310048 | 2025-05-26 03:03:18.310731 | TASK [Set manager_host address] 2025-05-26 03:03:18.439235 | orchestrator | ok 2025-05-26 03:03:18.455892 | 2025-05-26 03:03:18.456043 | LOOP [Update ansible collections] 2025-05-26 03:03:22.239782 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-26 03:03:22.240082 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-26 03:03:22.240121 | orchestrator | Starting galaxy collection install process 2025-05-26 03:03:22.240146 | orchestrator | Process install dependency map 2025-05-26 03:03:22.240168 | orchestrator | Starting collection install process 2025-05-26 03:03:22.240188 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2025-05-26 03:03:22.240211 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2025-05-26 03:03:22.240235 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-05-26 03:03:22.240290 | orchestrator | ok: Item: commons Runtime: 0:00:03.298062 2025-05-26 03:03:23.822555 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 
2025-05-26 03:03:23.822676 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-26 03:03:23.822707 | orchestrator | Starting galaxy collection install process 2025-05-26 03:03:23.822731 | orchestrator | Process install dependency map 2025-05-26 03:03:23.822753 | orchestrator | Starting collection install process 2025-05-26 03:03:23.822774 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2025-05-26 03:03:23.822795 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2025-05-26 03:03:23.822814 | orchestrator | osism.services:999.0.0 was installed successfully 2025-05-26 03:03:23.822911 | orchestrator | ok: Item: services Runtime: 0:00:01.305262 2025-05-26 03:03:23.841059 | 2025-05-26 03:03:23.841216 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-26 03:03:34.400495 | orchestrator | ok 2025-05-26 03:03:34.411685 | 2025-05-26 03:03:34.411849 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-26 03:04:34.456802 | orchestrator | ok 2025-05-26 03:04:34.463715 | 2025-05-26 03:04:34.463799 | TASK [Fetch manager ssh hostkey] 2025-05-26 03:04:36.221091 | orchestrator | Output suppressed because no_log was given 2025-05-26 03:04:36.242327 | 2025-05-26 03:04:36.242460 | TASK [Get ssh keypair from terraform environment] 2025-05-26 03:04:36.806665 | orchestrator | ok: Runtime: 0:00:00.008532 2025-05-26 03:04:36.814056 | 2025-05-26 03:04:36.814153 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-26 03:04:36.862793 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
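The task "Wait up to 300 seconds for port 22 to become open and contain 'OpenSSH'" above polls the manager's SSH port until the service banner appears. The following is an illustrative Python sketch of that pattern (a hypothetical helper, not the actual Ansible `wait_for` implementation); the demo server standing in for sshd is an assumption for the example:

```python
# Sketch of a banner wait: connect to host:port, read the greeting banner,
# retry until it contains the expected substring or the timeout expires.
import socket
import threading
import time


def wait_for_banner(host, port, needle, timeout=300.0, interval=1.0):
    """Return True once the service banner on host:port contains `needle`."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                banner = sock.recv(256).decode("ascii", errors="replace")
                if needle in banner:
                    return True
        except OSError:
            pass  # port not open yet; retry
        time.sleep(interval)
    return False


# Demo against a local stand-in server that greets like an SSH daemon
# (hypothetical banner string; a real sshd sends its own version string).
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]


def serve_once():
    conn, _ = server.accept()
    conn.sendall(b"SSH-2.0-OpenSSH_9.6\r\n")
    conn.close()


threading.Thread(target=serve_once, daemon=True).start()
ok = wait_for_banner("127.0.0.1", port, "OpenSSH", timeout=10)
print(ok)
```

Checking for the "OpenSSH" substring rather than a bare open port avoids racing a host whose TCP stack is up but whose SSH daemon has not finished starting.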
2025-05-26 03:04:36.869537 | 2025-05-26 03:04:36.869642 | TASK [Run manager part 0] 2025-05-26 03:04:38.437708 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-26 03:04:38.588761 | orchestrator | 2025-05-26 03:04:38.588814 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-26 03:04:38.588824 | orchestrator | 2025-05-26 03:04:38.588843 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-26 03:04:40.419168 | orchestrator | ok: [testbed-manager] 2025-05-26 03:04:40.419211 | orchestrator | 2025-05-26 03:04:40.419232 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-26 03:04:40.419243 | orchestrator | 2025-05-26 03:04:40.419256 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-26 03:04:42.367328 | orchestrator | ok: [testbed-manager] 2025-05-26 03:04:42.367384 | orchestrator | 2025-05-26 03:04:42.367391 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-26 03:04:43.072227 | orchestrator | ok: [testbed-manager] 2025-05-26 03:04:43.072278 | orchestrator | 2025-05-26 03:04:43.072290 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-26 03:04:43.124645 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:04:43.124688 | orchestrator | 2025-05-26 03:04:43.124701 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-26 03:04:43.158616 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:04:43.158659 | orchestrator | 2025-05-26 03:04:43.158670 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-26 03:04:43.192879 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:04:43.192930 | 
orchestrator | 2025-05-26 03:04:43.192938 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-26 03:04:43.227019 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:04:43.227061 | orchestrator | 2025-05-26 03:04:43.227069 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-26 03:04:43.254765 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:04:43.254803 | orchestrator | 2025-05-26 03:04:43.254812 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-26 03:04:43.282468 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:04:43.282506 | orchestrator | 2025-05-26 03:04:43.282515 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-26 03:04:43.311260 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:04:43.311300 | orchestrator | 2025-05-26 03:04:43.311308 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-26 03:04:44.139870 | orchestrator | changed: [testbed-manager] 2025-05-26 03:04:44.140909 | orchestrator | 2025-05-26 03:04:44.140938 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-05-26 03:07:54.029739 | orchestrator | changed: [testbed-manager] 2025-05-26 03:07:54.029816 | orchestrator | 2025-05-26 03:07:54.029834 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-26 03:09:09.540912 | orchestrator | changed: [testbed-manager] 2025-05-26 03:09:09.540990 | orchestrator | 2025-05-26 03:09:09.541006 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-26 03:09:29.484650 | orchestrator | changed: [testbed-manager] 2025-05-26 03:09:29.484758 | orchestrator | 2025-05-26 03:09:29.484777 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-05-26 03:09:38.055783 | orchestrator | changed: [testbed-manager] 2025-05-26 03:09:38.055880 | orchestrator | 2025-05-26 03:09:38.055897 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-26 03:09:38.101617 | orchestrator | ok: [testbed-manager] 2025-05-26 03:09:38.101675 | orchestrator | 2025-05-26 03:09:38.101683 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-26 03:09:38.882365 | orchestrator | ok: [testbed-manager] 2025-05-26 03:09:38.882445 | orchestrator | 2025-05-26 03:09:38.882461 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-26 03:09:39.673598 | orchestrator | changed: [testbed-manager] 2025-05-26 03:09:39.673675 | orchestrator | 2025-05-26 03:09:39.673689 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-26 03:09:46.265846 | orchestrator | changed: [testbed-manager] 2025-05-26 03:09:46.265943 | orchestrator | 2025-05-26 03:09:46.265983 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-26 03:09:52.203416 | orchestrator | changed: [testbed-manager] 2025-05-26 03:09:52.203512 | orchestrator | 2025-05-26 03:09:52.203531 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-26 03:09:54.846709 | orchestrator | changed: [testbed-manager] 2025-05-26 03:09:54.846811 | orchestrator | 2025-05-26 03:09:54.846827 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-26 03:09:56.650312 | orchestrator | changed: [testbed-manager] 2025-05-26 03:09:56.650402 | orchestrator | 2025-05-26 03:09:56.650418 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-26 
03:09:57.833250 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-26 03:09:57.833338 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-26 03:09:57.833352 | orchestrator | 2025-05-26 03:09:57.833365 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-26 03:09:57.875282 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-26 03:09:57.875367 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-26 03:09:57.875381 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-26 03:09:57.875393 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-05-26 03:10:03.195040 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-26 03:10:03.195144 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-26 03:10:03.195160 | orchestrator | 2025-05-26 03:10:03.195173 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-26 03:10:03.746850 | orchestrator | changed: [testbed-manager] 2025-05-26 03:10:03.746939 | orchestrator | 2025-05-26 03:10:03.746955 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-26 03:14:26.873546 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-26 03:14:26.873654 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-26 03:14:26.873671 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-26 03:14:26.873684 | orchestrator | 2025-05-26 03:14:26.873697 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-26 03:14:29.230786 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-05-26 03:14:29.230891 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-26 03:14:29.230918 | orchestrator | 2025-05-26 03:14:29.230931 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-26 03:14:29.230943 | orchestrator | 2025-05-26 03:14:29.230955 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-26 03:14:30.614060 | orchestrator | ok: [testbed-manager] 2025-05-26 03:14:30.614153 | orchestrator | 2025-05-26 03:14:30.614170 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-26 03:14:30.665943 | orchestrator | ok: [testbed-manager] 2025-05-26 03:14:30.666059 | orchestrator | 2025-05-26 03:14:30.666077 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-26 03:14:30.740091 | orchestrator | ok: [testbed-manager] 2025-05-26 03:14:30.740166 | orchestrator | 2025-05-26 03:14:30.740180 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-26 03:14:31.484299 | orchestrator | changed: [testbed-manager] 2025-05-26 03:14:31.484348 | orchestrator | 2025-05-26 03:14:31.484357 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-26 03:14:32.201459 | orchestrator | changed: [testbed-manager] 2025-05-26 03:14:32.201574 | orchestrator | 2025-05-26 03:14:32.201591 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-26 03:14:33.567945 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-26 03:14:33.568040 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-26 03:14:33.568056 | orchestrator | 2025-05-26 03:14:33.568086 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-05-26 03:14:34.948970 | orchestrator | changed: [testbed-manager] 2025-05-26 03:14:34.949075 | orchestrator | 2025-05-26 03:14:34.949093 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-26 03:14:36.728571 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-26 03:14:36.728663 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-26 03:14:36.728679 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-26 03:14:36.728692 | orchestrator | 2025-05-26 03:14:36.728703 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-26 03:14:37.310159 | orchestrator | changed: [testbed-manager] 2025-05-26 03:14:37.310247 | orchestrator | 2025-05-26 03:14:37.310266 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-26 03:14:37.377625 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:14:37.377738 | orchestrator | 2025-05-26 03:14:37.377762 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-26 03:14:38.290888 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-26 03:14:38.290975 | orchestrator | changed: [testbed-manager] 2025-05-26 03:14:38.290991 | orchestrator | 2025-05-26 03:14:38.291003 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-26 03:14:38.331664 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:14:38.331753 | orchestrator | 2025-05-26 03:14:38.331768 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-26 03:14:38.372886 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:14:38.372941 | orchestrator | 2025-05-26 03:14:38.372954 | orchestrator | TASK [osism.commons.operator : Delete 
authorized GitHub accounts] ************** 2025-05-26 03:14:38.405060 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:14:38.405093 | orchestrator | 2025-05-26 03:14:38.405105 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-26 03:14:38.451027 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:14:38.451062 | orchestrator | 2025-05-26 03:14:38.451074 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-26 03:14:39.207636 | orchestrator | ok: [testbed-manager] 2025-05-26 03:14:39.207705 | orchestrator | 2025-05-26 03:14:39.207717 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-26 03:14:39.207726 | orchestrator | 2025-05-26 03:14:39.207735 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-26 03:14:40.601128 | orchestrator | ok: [testbed-manager] 2025-05-26 03:14:40.601225 | orchestrator | 2025-05-26 03:14:40.601242 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-26 03:14:41.566885 | orchestrator | changed: [testbed-manager] 2025-05-26 03:14:41.566924 | orchestrator | 2025-05-26 03:14:41.566930 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-26 03:14:41.566935 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-26 03:14:41.566940 | orchestrator | 2025-05-26 03:14:41.852687 | orchestrator | ok: Runtime: 0:10:04.436724 2025-05-26 03:14:41.860096 | 2025-05-26 03:14:41.860188 | TASK [Point out that the log in on the manager is now possible] 2025-05-26 03:14:41.880621 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
2025-05-26 03:14:41.888176 | 2025-05-26 03:14:41.888296 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-26 03:14:41.948329 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-26 03:14:41.954750 | 2025-05-26 03:14:41.954889 | TASK [Run manager part 1 + 2] 2025-05-26 03:14:42.846577 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-26 03:14:42.901863 | orchestrator | 2025-05-26 03:14:42.901917 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-26 03:14:42.901923 | orchestrator | 2025-05-26 03:14:42.901936 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-26 03:14:45.774223 | orchestrator | ok: [testbed-manager] 2025-05-26 03:14:45.774320 | orchestrator | 2025-05-26 03:14:45.774379 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-26 03:14:45.810548 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:14:45.810608 | orchestrator | 2025-05-26 03:14:45.810620 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-26 03:14:45.842595 | orchestrator | ok: [testbed-manager] 2025-05-26 03:14:45.842691 | orchestrator | 2025-05-26 03:14:45.842720 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-26 03:14:45.887172 | orchestrator | ok: [testbed-manager] 2025-05-26 03:14:45.887230 | orchestrator | 2025-05-26 03:14:45.887240 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-26 03:14:45.951634 | orchestrator | ok: [testbed-manager] 2025-05-26 03:14:45.951731 | orchestrator | 2025-05-26 03:14:45.951752 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-26 03:14:46.018980 | orchestrator | ok: [testbed-manager] 2025-05-26 03:14:46.019029 | orchestrator | 2025-05-26 03:14:46.019036 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-26 03:14:46.060089 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-26 03:14:46.060177 | orchestrator | 2025-05-26 03:14:46.060192 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-26 03:14:46.768894 | orchestrator | ok: [testbed-manager] 2025-05-26 03:14:46.769042 | orchestrator | 2025-05-26 03:14:46.769064 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-26 03:14:46.817834 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:14:46.817891 | orchestrator | 2025-05-26 03:14:46.817899 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-26 03:14:48.223192 | orchestrator | changed: [testbed-manager] 2025-05-26 03:14:48.223284 | orchestrator | 2025-05-26 03:14:48.223302 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-26 03:14:48.784296 | orchestrator | ok: [testbed-manager] 2025-05-26 03:14:48.784388 | orchestrator | 2025-05-26 03:14:48.784404 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-26 03:14:49.926556 | orchestrator | changed: [testbed-manager] 2025-05-26 03:14:49.926621 | orchestrator | 2025-05-26 03:14:49.926638 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-26 03:15:02.959598 | orchestrator | changed: [testbed-manager] 2025-05-26 03:15:02.959693 | orchestrator | 
2025-05-26 03:15:02.959711 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-26 03:15:03.619499 | orchestrator | ok: [testbed-manager] 2025-05-26 03:15:03.619600 | orchestrator | 2025-05-26 03:15:03.619620 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-26 03:15:03.674246 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:15:03.674300 | orchestrator | 2025-05-26 03:15:03.674306 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-26 03:15:04.658654 | orchestrator | changed: [testbed-manager] 2025-05-26 03:15:04.658762 | orchestrator | 2025-05-26 03:15:04.658786 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-26 03:15:05.632083 | orchestrator | changed: [testbed-manager] 2025-05-26 03:15:05.632129 | orchestrator | 2025-05-26 03:15:05.632137 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-26 03:15:06.206667 | orchestrator | changed: [testbed-manager] 2025-05-26 03:15:06.206756 | orchestrator | 2025-05-26 03:15:06.206773 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-26 03:15:06.247570 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-26 03:15:06.247634 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-26 03:15:06.247641 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-26 03:15:06.247645 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-05-26 03:15:10.437881 | orchestrator | changed: [testbed-manager] 2025-05-26 03:15:10.437976 | orchestrator | 2025-05-26 03:15:10.437992 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-26 03:15:19.187613 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-26 03:15:19.187712 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-26 03:15:19.187731 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-26 03:15:19.187743 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-26 03:15:19.187762 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-26 03:15:19.187774 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-26 03:15:19.187785 | orchestrator | 2025-05-26 03:15:19.187797 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-26 03:15:20.244340 | orchestrator | changed: [testbed-manager] 2025-05-26 03:15:20.244457 | orchestrator | 2025-05-26 03:15:20.244480 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-26 03:15:20.285638 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:15:20.285715 | orchestrator | 2025-05-26 03:15:20.285730 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-26 03:15:23.317147 | orchestrator | changed: [testbed-manager] 2025-05-26 03:15:23.317239 | orchestrator | 2025-05-26 03:15:23.317254 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-26 03:15:23.359951 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:15:23.360060 | orchestrator | 2025-05-26 03:15:23.360090 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-26 03:16:56.823242 | orchestrator | changed: [testbed-manager] 2025-05-26 
03:16:56.823368 | orchestrator | 2025-05-26 03:16:56.823389 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-26 03:16:57.947374 | orchestrator | ok: [testbed-manager] 2025-05-26 03:16:57.947466 | orchestrator | 2025-05-26 03:16:57.947482 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-26 03:16:57.947497 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-26 03:16:57.947508 | orchestrator | 2025-05-26 03:16:58.139621 | orchestrator | ok: Runtime: 0:02:15.749852 2025-05-26 03:16:58.148929 | 2025-05-26 03:16:58.149193 | TASK [Reboot manager] 2025-05-26 03:16:59.781159 | orchestrator | ok: Runtime: 0:00:00.951931 2025-05-26 03:16:59.808046 | 2025-05-26 03:16:59.808640 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-26 03:17:15.081995 | orchestrator | ok 2025-05-26 03:17:15.092936 | 2025-05-26 03:17:15.093080 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-26 03:18:15.146880 | orchestrator | ok 2025-05-26 03:18:15.160674 | 2025-05-26 03:18:15.160807 | TASK [Deploy manager + bootstrap nodes] 2025-05-26 03:18:17.698974 | orchestrator | 2025-05-26 03:18:17.699244 | orchestrator | # DEPLOY MANAGER 2025-05-26 03:18:17.699305 | orchestrator | 2025-05-26 03:18:17.699320 | orchestrator | + set -e 2025-05-26 03:18:17.699333 | orchestrator | + echo 2025-05-26 03:18:17.699348 | orchestrator | + echo '# DEPLOY MANAGER' 2025-05-26 03:18:17.699365 | orchestrator | + echo 2025-05-26 03:18:17.699416 | orchestrator | + cat /opt/manager-vars.sh 2025-05-26 03:18:17.702864 | orchestrator | export NUMBER_OF_NODES=6 2025-05-26 03:18:17.702893 | orchestrator | 2025-05-26 03:18:17.702907 | orchestrator | export CEPH_VERSION=reef 2025-05-26 03:18:17.702921 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-26 03:18:17.702933 | orchestrator 
| export MANAGER_VERSION=latest 2025-05-26 03:18:17.702955 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-26 03:18:17.702967 | orchestrator | 2025-05-26 03:18:17.703011 | orchestrator | export ARA=false 2025-05-26 03:18:17.703025 | orchestrator | export TEMPEST=true 2025-05-26 03:18:17.703042 | orchestrator | export IS_ZUUL=true 2025-05-26 03:18:17.703053 | orchestrator | 2025-05-26 03:18:17.703071 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.90 2025-05-26 03:18:17.703083 | orchestrator | export EXTERNAL_API=false 2025-05-26 03:18:17.703094 | orchestrator | 2025-05-26 03:18:17.703117 | orchestrator | export IMAGE_USER=ubuntu 2025-05-26 03:18:17.703128 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-26 03:18:17.703139 | orchestrator | 2025-05-26 03:18:17.703152 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-26 03:18:17.703170 | orchestrator | 2025-05-26 03:18:17.703181 | orchestrator | + echo 2025-05-26 03:18:17.703193 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-26 03:18:17.703999 | orchestrator | ++ export INTERACTIVE=false 2025-05-26 03:18:17.704020 | orchestrator | ++ INTERACTIVE=false 2025-05-26 03:18:17.704032 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-26 03:18:17.704044 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-26 03:18:17.704441 | orchestrator | + source /opt/manager-vars.sh 2025-05-26 03:18:17.704461 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-26 03:18:17.704496 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-26 03:18:17.704507 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-26 03:18:17.704519 | orchestrator | ++ CEPH_VERSION=reef 2025-05-26 03:18:17.704531 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-26 03:18:17.704541 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-26 03:18:17.704575 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-26 03:18:17.704586 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-26 03:18:17.704597 | 
orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-26 03:18:17.704609 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-26 03:18:17.704619 | orchestrator | ++ export ARA=false 2025-05-26 03:18:17.704639 | orchestrator | ++ ARA=false 2025-05-26 03:18:17.704650 | orchestrator | ++ export TEMPEST=true 2025-05-26 03:18:17.704682 | orchestrator | ++ TEMPEST=true 2025-05-26 03:18:17.704698 | orchestrator | ++ export IS_ZUUL=true 2025-05-26 03:18:17.704709 | orchestrator | ++ IS_ZUUL=true 2025-05-26 03:18:17.704721 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.90 2025-05-26 03:18:17.704732 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.90 2025-05-26 03:18:17.704742 | orchestrator | ++ export EXTERNAL_API=false 2025-05-26 03:18:17.704753 | orchestrator | ++ EXTERNAL_API=false 2025-05-26 03:18:17.704764 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-26 03:18:17.704774 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-26 03:18:17.704785 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-26 03:18:17.704796 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-26 03:18:17.704807 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-26 03:18:17.704817 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-26 03:18:17.704829 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-26 03:18:17.762278 | orchestrator | + docker version 2025-05-26 03:18:18.032660 | orchestrator | Client: Docker Engine - Community 2025-05-26 03:18:18.032760 | orchestrator | Version: 27.5.1 2025-05-26 03:18:18.032777 | orchestrator | API version: 1.47 2025-05-26 03:18:18.032788 | orchestrator | Go version: go1.22.11 2025-05-26 03:18:18.032799 | orchestrator | Git commit: 9f9e405 2025-05-26 03:18:18.032812 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-05-26 03:18:18.032824 | orchestrator | OS/Arch: linux/amd64 2025-05-26 03:18:18.032834 | orchestrator | Context: default 2025-05-26 03:18:18.032845 | 
orchestrator | 2025-05-26 03:18:18.032857 | orchestrator | Server: Docker Engine - Community 2025-05-26 03:18:18.032868 | orchestrator | Engine: 2025-05-26 03:18:18.032879 | orchestrator | Version: 27.5.1 2025-05-26 03:18:18.032889 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-05-26 03:18:18.032900 | orchestrator | Go version: go1.22.11 2025-05-26 03:18:18.032911 | orchestrator | Git commit: 4c9b3b0 2025-05-26 03:18:18.032948 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-05-26 03:18:18.032959 | orchestrator | OS/Arch: linux/amd64 2025-05-26 03:18:18.032970 | orchestrator | Experimental: false 2025-05-26 03:18:18.032981 | orchestrator | containerd: 2025-05-26 03:18:18.032991 | orchestrator | Version: 1.7.27 2025-05-26 03:18:18.033002 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-26 03:18:18.033013 | orchestrator | runc: 2025-05-26 03:18:18.033024 | orchestrator | Version: 1.2.5 2025-05-26 03:18:18.033036 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-26 03:18:18.033047 | orchestrator | docker-init: 2025-05-26 03:18:18.033057 | orchestrator | Version: 0.19.0 2025-05-26 03:18:18.033071 | orchestrator | GitCommit: de40ad0 2025-05-26 03:18:18.037139 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-26 03:18:18.045766 | orchestrator | + set -e 2025-05-26 03:18:18.046704 | orchestrator | + source /opt/manager-vars.sh 2025-05-26 03:18:18.046725 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-26 03:18:18.046738 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-26 03:18:18.046750 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-26 03:18:18.046761 | orchestrator | ++ CEPH_VERSION=reef 2025-05-26 03:18:18.046773 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-26 03:18:18.046784 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-26 03:18:18.046812 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-26 03:18:18.046824 | orchestrator | ++ 
MANAGER_VERSION=latest 2025-05-26 03:18:18.046835 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-26 03:18:18.046845 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-26 03:18:18.046856 | orchestrator | ++ export ARA=false 2025-05-26 03:18:18.046867 | orchestrator | ++ ARA=false 2025-05-26 03:18:18.046878 | orchestrator | ++ export TEMPEST=true 2025-05-26 03:18:18.046889 | orchestrator | ++ TEMPEST=true 2025-05-26 03:18:18.046899 | orchestrator | ++ export IS_ZUUL=true 2025-05-26 03:18:18.046910 | orchestrator | ++ IS_ZUUL=true 2025-05-26 03:18:18.046921 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.90 2025-05-26 03:18:18.046932 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.90 2025-05-26 03:18:18.046943 | orchestrator | ++ export EXTERNAL_API=false 2025-05-26 03:18:18.046954 | orchestrator | ++ EXTERNAL_API=false 2025-05-26 03:18:18.046964 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-26 03:18:18.046975 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-26 03:18:18.046986 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-26 03:18:18.046997 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-26 03:18:18.047007 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-26 03:18:18.047018 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-26 03:18:18.047029 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-26 03:18:18.047040 | orchestrator | ++ export INTERACTIVE=false 2025-05-26 03:18:18.047051 | orchestrator | ++ INTERACTIVE=false 2025-05-26 03:18:18.047061 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-26 03:18:18.047072 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-26 03:18:18.047082 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-26 03:18:18.047093 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-26 03:18:18.047104 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-05-26 03:18:18.052919 | orchestrator | + set -e 2025-05-26 03:18:18.052965 
| orchestrator | + VERSION=reef 2025-05-26 03:18:18.053923 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-05-26 03:18:18.060677 | orchestrator | + [[ -n ceph_version: reef ]] 2025-05-26 03:18:18.060704 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-05-26 03:18:18.066834 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-05-26 03:18:18.073339 | orchestrator | + set -e 2025-05-26 03:18:18.073369 | orchestrator | + VERSION=2024.2 2025-05-26 03:18:18.074417 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-05-26 03:18:18.077360 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-05-26 03:18:18.077402 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-05-26 03:18:18.083148 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-05-26 03:18:18.083513 | orchestrator | ++ semver latest 7.0.0 2025-05-26 03:18:18.138584 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-26 03:18:18.138661 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-26 03:18:18.138675 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-05-26 03:18:18.138686 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-05-26 03:18:18.172804 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-26 03:18:18.175633 | orchestrator | + source /opt/venv/bin/activate 2025-05-26 03:18:18.176821 | orchestrator | ++ deactivate nondestructive 2025-05-26 03:18:18.176859 | orchestrator | ++ '[' -n '' ']' 2025-05-26 03:18:18.176871 | orchestrator | ++ '[' -n '' ']' 2025-05-26 03:18:18.176882 | orchestrator | ++ hash -r 2025-05-26 03:18:18.176893 | orchestrator | ++ '[' -n '' ']' 2025-05-26 03:18:18.176909 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-26 
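The `set-ceph-version.sh` and `set-openstack-version.sh` steps traced above share one shape: grep for the key, and only if it is already present, rewrite it in place with `sed -i`. A self-contained re-creation of that logic, with a temp file standing in for `configuration.yml` (GNU sed assumed, as on the Ubuntu/Debian hosts in this job):

```shell
#!/usr/bin/env bash
# Re-creation of the set-ceph-version.sh logic traced above: rewrite the
# ceph_version key only if it already exists in the configuration file.
# A temp file stands in for /opt/configuration/environments/manager/configuration.yml.
set -e
VERSION=reef
CONFIG="$(mktemp)"
printf 'openstack_version: 2024.1\nceph_version: quincy\n' > "$CONFIG"
# Same guard as in the trace: grep for the key, then sed it in place.
if [[ -n "$(grep '^ceph_version:' "$CONFIG")" ]]; then
    sed -i "s/ceph_version: .*/ceph_version: ${VERSION}/g" "$CONFIG"
fi
grep '^ceph_version:' "$CONFIG"
```

The guard makes the script a no-op on files that do not define the key, so it is safe to run against arbitrary configuration layouts.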
03:18:18.176920 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-26 03:18:18.176931 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-05-26 03:18:18.177124 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-26 03:18:18.177141 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-26 03:18:18.177156 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-26 03:18:18.177168 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-26 03:18:18.177184 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-26 03:18:18.177402 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-26 03:18:18.177421 | orchestrator | ++ export PATH 2025-05-26 03:18:18.177437 | orchestrator | ++ '[' -n '' ']' 2025-05-26 03:18:18.177448 | orchestrator | ++ '[' -z '' ']' 2025-05-26 03:18:18.177459 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-26 03:18:18.177473 | orchestrator | ++ PS1='(venv) ' 2025-05-26 03:18:18.177513 | orchestrator | ++ export PS1 2025-05-26 03:18:18.177524 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-26 03:18:18.177611 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-26 03:18:18.177625 | orchestrator | ++ hash -r 2025-05-26 03:18:18.177864 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-05-26 03:18:19.377700 | orchestrator | 2025-05-26 03:18:19.377818 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-05-26 03:18:19.377836 | orchestrator | 2025-05-26 03:18:19.377870 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-26 03:18:19.922585 | orchestrator | ok: [testbed-manager] 2025-05-26 03:18:19.922708 | orchestrator | 2025-05-26 
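The `ansible-playbook` invocation above uses two details worth calling out: the trailing comma in `-i testbed-manager,` makes Ansible treat the argument as an inline, comma-separated host list rather than a path to an inventory file, and `--vault-password-file` lets vaulted variables be decrypted non-interactively in CI. The same invocation, reformatted for readability (paths exactly as in the log; this fragment needs the live manager environment to actually run):

```shell
ansible-playbook \
    -i testbed-manager, \
    --vault-password-file /opt/configuration/environments/.vault_pass \
    /opt/configuration/ansible/manager-part-3.yml
```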
03:18:19.922728 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-26 03:18:20.915245 | orchestrator | changed: [testbed-manager] 2025-05-26 03:18:20.915389 | orchestrator | 2025-05-26 03:18:20.915406 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-05-26 03:18:20.915419 | orchestrator | 2025-05-26 03:18:20.915431 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-26 03:18:23.235977 | orchestrator | ok: [testbed-manager] 2025-05-26 03:18:23.236098 | orchestrator | 2025-05-26 03:18:23.236115 | orchestrator | TASK [Pull images] ************************************************************* 2025-05-26 03:18:27.798725 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-05-26 03:18:27.798842 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.7.2) 2025-05-26 03:18:27.798857 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:reef) 2025-05-26 03:18:27.798873 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:latest) 2025-05-26 03:18:27.798885 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:2024.2) 2025-05-26 03:18:27.798896 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.3-alpine) 2025-05-26 03:18:27.798908 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.2.2) 2025-05-26 03:18:27.798919 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:latest) 2025-05-26 03:18:27.798930 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:latest) 2025-05-26 03:18:27.798941 | orchestrator | changed: [testbed-manager] => 
(item=registry.osism.tech/dockerhub/library/postgres:16.9-alpine) 2025-05-26 03:18:27.798952 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.4.0) 2025-05-26 03:18:27.798962 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.19.3) 2025-05-26 03:18:27.798973 | orchestrator | 2025-05-26 03:18:27.799018 | orchestrator | TASK [Check status] ************************************************************ 2025-05-26 03:19:43.956626 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-26 03:19:43.956760 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-05-26 03:19:43.956776 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-05-26 03:19:43.956787 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-05-26 03:19:43.956797 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (116 retries left). 2025-05-26 03:19:43.956820 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j604789800026.1540', 'results_file': '/home/dragon/.ansible_async/j604789800026.1540', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-05-26 03:19:43.956840 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j44421096532.1565', 'results_file': '/home/dragon/.ansible_async/j44421096532.1565', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.7.2', 'ansible_loop_var': 'item'}) 2025-05-26 03:19:43.956855 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
2025-05-26 03:19:43.956866 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j345879979639.1590', 'results_file': '/home/dragon/.ansible_async/j345879979639.1590', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:reef', 'ansible_loop_var': 'item'}) 2025-05-26 03:19:43.956876 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j615384347509.1622', 'results_file': '/home/dragon/.ansible_async/j615384347509.1622', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:latest', 'ansible_loop_var': 'item'}) 2025-05-26 03:19:43.956887 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j120153519863.1654', 'results_file': '/home/dragon/.ansible_async/j120153519863.1654', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:2024.2', 'ansible_loop_var': 'item'}) 2025-05-26 03:19:43.956907 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j368124476404.1686', 'results_file': '/home/dragon/.ansible_async/j368124476404.1686', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.3-alpine', 'ansible_loop_var': 'item'}) 2025-05-26 03:19:43.956918 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
2025-05-26 03:19:43.956928 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j875096615088.1718', 'results_file': '/home/dragon/.ansible_async/j875096615088.1718', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.2.2', 'ansible_loop_var': 'item'}) 2025-05-26 03:19:43.956942 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j52807562100.1751', 'results_file': '/home/dragon/.ansible_async/j52807562100.1751', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:latest', 'ansible_loop_var': 'item'}) 2025-05-26 03:19:43.956952 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j229146972699.1783', 'results_file': '/home/dragon/.ansible_async/j229146972699.1783', 'changed': True, 'item': 'registry.osism.tech/osism/osism:latest', 'ansible_loop_var': 'item'}) 2025-05-26 03:19:43.956962 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j992332297994.1815', 'results_file': '/home/dragon/.ansible_async/j992332297994.1815', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.9-alpine', 'ansible_loop_var': 'item'}) 2025-05-26 03:19:43.956972 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j822960782938.1848', 'results_file': '/home/dragon/.ansible_async/j822960782938.1848', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.4.0', 'ansible_loop_var': 'item'}) 2025-05-26 03:19:43.957003 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j42790063089.1882', 'results_file': '/home/dragon/.ansible_async/j42790063089.1882', 'changed': True, 'item': 'registry.osism.tech/dockerhub/hashicorp/vault:1.19.3', 'ansible_loop_var': 
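The "Pull images" / "Check status" pair above is the standard Ansible async pattern: each `docker pull` is launched as a fire-and-forget job (`poll: 0`), and a follow-up `async_status` task retries until every job reports finished — which is why the "FAILED - RETRYING" lines here are normal progress output, not errors. A shell analogue of the same start-then-poll control flow, with a short sleep standing in for `docker pull` so the sketch runs anywhere:

```shell
#!/usr/bin/env bash
# Shell analogue of the async pull + status-check pattern in the log:
# start every job in the background, then wait on each one in turn.
pids=()
for image in ara-server mariadb ceph-ansible; do   # abbreviated image list
    ( sleep 0.2 ) &        # stand-in for: docker pull "$image" &
    pids+=($!)
done
finished=0
for pid in "${pids[@]}"; do
    wait "$pid" && finished=$((finished + 1))      # the "Check status" step
done
echo "finished ${finished} pulls"
```

Starting all pulls before waiting on any of them is what lets the twelve images in the log download concurrently instead of serially.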
'item'}) 2025-05-26 03:19:43.957013 | orchestrator | 2025-05-26 03:19:43.957024 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-05-26 03:19:44.009448 | orchestrator | ok: [testbed-manager] 2025-05-26 03:19:44.009561 | orchestrator | 2025-05-26 03:19:44.009577 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-05-26 03:19:44.466871 | orchestrator | changed: [testbed-manager] 2025-05-26 03:19:44.466975 | orchestrator | 2025-05-26 03:19:44.466992 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-05-26 03:19:44.827486 | orchestrator | changed: [testbed-manager] 2025-05-26 03:19:44.827595 | orchestrator | 2025-05-26 03:19:44.827611 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-26 03:19:45.177087 | orchestrator | changed: [testbed-manager] 2025-05-26 03:19:45.177239 | orchestrator | 2025-05-26 03:19:45.177257 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-05-26 03:19:45.238963 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:19:45.239066 | orchestrator | 2025-05-26 03:19:45.239081 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-05-26 03:19:45.562061 | orchestrator | ok: [testbed-manager] 2025-05-26 03:19:45.562164 | orchestrator | 2025-05-26 03:19:45.562222 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-05-26 03:19:45.664619 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:19:45.664717 | orchestrator | 2025-05-26 03:19:45.664732 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-05-26 03:19:45.664744 | orchestrator | 2025-05-26 03:19:45.664755 | orchestrator | TASK [Gathering Facts] 
********************************************************* 2025-05-26 03:19:47.432134 | orchestrator | ok: [testbed-manager] 2025-05-26 03:19:47.432302 | orchestrator | 2025-05-26 03:19:47.432319 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-05-26 03:19:47.524125 | orchestrator | included: osism.services.traefik for testbed-manager 2025-05-26 03:19:47.524249 | orchestrator | 2025-05-26 03:19:47.524263 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-05-26 03:19:47.581130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-05-26 03:19:47.581246 | orchestrator | 2025-05-26 03:19:47.581261 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-05-26 03:19:48.707022 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-05-26 03:19:48.707133 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-05-26 03:19:48.707147 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-05-26 03:19:48.707164 | orchestrator | 2025-05-26 03:19:48.707177 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-05-26 03:19:50.493526 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-05-26 03:19:50.493644 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-05-26 03:19:50.493660 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-05-26 03:19:50.493674 | orchestrator | 2025-05-26 03:19:50.493687 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-05-26 03:19:51.125516 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-26 03:19:51.125623 | orchestrator | changed: [testbed-manager] 2025-05-26 
03:19:51.125641 | orchestrator | 2025-05-26 03:19:51.125663 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-05-26 03:19:51.749906 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-26 03:19:51.750009 | orchestrator | changed: [testbed-manager] 2025-05-26 03:19:51.750104 | orchestrator | 2025-05-26 03:19:51.750145 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-05-26 03:19:51.802352 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:19:51.802413 | orchestrator | 2025-05-26 03:19:51.802426 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-05-26 03:19:52.168278 | orchestrator | ok: [testbed-manager] 2025-05-26 03:19:52.168363 | orchestrator | 2025-05-26 03:19:52.168377 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-05-26 03:19:52.233151 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-05-26 03:19:52.233221 | orchestrator | 2025-05-26 03:19:52.233235 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-05-26 03:19:53.242778 | orchestrator | changed: [testbed-manager] 2025-05-26 03:19:53.242883 | orchestrator | 2025-05-26 03:19:53.242899 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-05-26 03:19:54.027479 | orchestrator | changed: [testbed-manager] 2025-05-26 03:19:54.027606 | orchestrator | 2025-05-26 03:19:54.027623 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-05-26 03:19:57.873145 | orchestrator | changed: [testbed-manager] 2025-05-26 03:19:57.873326 | orchestrator | 2025-05-26 03:19:57.873345 | orchestrator | TASK [Apply netbox role] 
******************************************************* 2025-05-26 03:19:58.025505 | orchestrator | included: osism.services.netbox for testbed-manager 2025-05-26 03:19:58.025603 | orchestrator | 2025-05-26 03:19:58.025617 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-05-26 03:19:58.090544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-05-26 03:19:58.090583 | orchestrator | 2025-05-26 03:19:58.090595 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-05-26 03:20:00.859338 | orchestrator | ok: [testbed-manager] 2025-05-26 03:20:00.859455 | orchestrator | 2025-05-26 03:20:00.859473 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-26 03:20:00.977432 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-05-26 03:20:00.977532 | orchestrator | 2025-05-26 03:20:00.977546 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-05-26 03:20:02.088402 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-05-26 03:20:02.088510 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-05-26 03:20:02.088526 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-05-26 03:20:02.088538 | orchestrator | 2025-05-26 03:20:02.088553 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-05-26 03:20:02.151896 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-05-26 03:20:02.151961 | orchestrator | 2025-05-26 03:20:02.151975 | orchestrator | TASK 
[osism.services.netbox : Copy postgres environment files] ***************** 2025-05-26 03:20:02.832630 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-05-26 03:20:02.832731 | orchestrator | 2025-05-26 03:20:02.832746 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-05-26 03:20:03.484975 | orchestrator | changed: [testbed-manager] 2025-05-26 03:20:03.485083 | orchestrator | 2025-05-26 03:20:03.485098 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-26 03:20:04.127050 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-26 03:20:04.127156 | orchestrator | changed: [testbed-manager] 2025-05-26 03:20:04.127232 | orchestrator | 2025-05-26 03:20:04.127247 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-05-26 03:20:04.533765 | orchestrator | changed: [testbed-manager] 2025-05-26 03:20:04.533871 | orchestrator | 2025-05-26 03:20:04.533886 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-05-26 03:20:04.890521 | orchestrator | ok: [testbed-manager] 2025-05-26 03:20:04.890600 | orchestrator | 2025-05-26 03:20:04.890606 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-05-26 03:20:04.934108 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:20:04.934236 | orchestrator | 2025-05-26 03:20:04.934251 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-05-26 03:20:05.583433 | orchestrator | changed: [testbed-manager] 2025-05-26 03:20:05.583540 | orchestrator | 2025-05-26 03:20:05.583555 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-26 03:20:05.653926 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-05-26 03:20:05.654063 | orchestrator | 2025-05-26 03:20:05.654079 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-05-26 03:20:06.389820 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-05-26 03:20:06.389974 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-05-26 03:20:06.390968 | orchestrator | 2025-05-26 03:20:06.391076 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-05-26 03:20:07.033977 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-05-26 03:20:07.034236 | orchestrator | 2025-05-26 03:20:07.034256 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-05-26 03:20:07.685839 | orchestrator | changed: [testbed-manager] 2025-05-26 03:20:07.685954 | orchestrator | 2025-05-26 03:20:07.685971 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-05-26 03:20:07.730892 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:20:07.730928 | orchestrator | 2025-05-26 03:20:07.730966 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-05-26 03:20:08.367160 | orchestrator | changed: [testbed-manager] 2025-05-26 03:20:08.367331 | orchestrator | 2025-05-26 03:20:08.367358 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-26 03:20:10.119954 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-26 03:20:10.120071 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-26 03:20:10.120086 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-26 03:20:10.120098 | orchestrator | changed: 
[testbed-manager] 2025-05-26 03:20:10.120112 | orchestrator | 2025-05-26 03:20:10.120124 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-05-26 03:20:15.937095 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-05-26 03:20:15.937278 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-05-26 03:20:15.937297 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-05-26 03:20:15.937311 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-05-26 03:20:15.937322 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-05-26 03:20:15.937334 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-05-26 03:20:15.937345 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-05-26 03:20:15.937356 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-05-26 03:20:15.937367 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-05-26 03:20:15.937378 | orchestrator | changed: [testbed-manager] => (item=users) 2025-05-26 03:20:15.937390 | orchestrator | 2025-05-26 03:20:15.937402 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-05-26 03:20:16.579104 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-05-26 03:20:16.579264 | orchestrator | 2025-05-26 03:20:16.579281 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-05-26 03:20:16.665495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-05-26 03:20:16.665583 | orchestrator | 2025-05-26 03:20:16.665596 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-05-26 
03:20:17.363589 | orchestrator | changed: [testbed-manager] 2025-05-26 03:20:17.363671 | orchestrator | 2025-05-26 03:20:17.363679 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-05-26 03:20:17.975490 | orchestrator | ok: [testbed-manager] 2025-05-26 03:20:17.975633 | orchestrator | 2025-05-26 03:20:17.975650 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-05-26 03:20:18.690086 | orchestrator | changed: [testbed-manager] 2025-05-26 03:20:18.690243 | orchestrator | 2025-05-26 03:20:18.690261 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-05-26 03:20:21.071492 | orchestrator | ok: [testbed-manager] 2025-05-26 03:20:21.071608 | orchestrator | 2025-05-26 03:20:21.071625 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-05-26 03:20:22.071471 | orchestrator | ok: [testbed-manager] 2025-05-26 03:20:22.071582 | orchestrator | 2025-05-26 03:20:22.071598 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-05-26 03:20:44.110337 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 
2025-05-26 03:20:44.110458 | orchestrator | ok: [testbed-manager]
2025-05-26 03:20:44.110475 | orchestrator |
2025-05-26 03:20:44.110490 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ********
2025-05-26 03:20:44.154662 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:20:44.154750 | orchestrator |
2025-05-26 03:20:44.154764 | orchestrator | TASK [osism.services.netbox : Flush handlers] **********************************
2025-05-26 03:20:44.154776 | orchestrator |
2025-05-26 03:20:44.154789 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-05-26 03:20:44.198747 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:20:44.198824 | orchestrator |
2025-05-26 03:20:44.198837 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-05-26 03:20:44.249613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager
2025-05-26 03:20:44.249695 | orchestrator |
2025-05-26 03:20:44.249709 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ******
2025-05-26 03:20:45.009459 | orchestrator | ok: [testbed-manager]
2025-05-26 03:20:45.009572 | orchestrator |
2025-05-26 03:20:45.009589 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] ***
2025-05-26 03:20:45.069399 | orchestrator | ok: [testbed-manager]
2025-05-26 03:20:45.069431 | orchestrator |
2025-05-26 03:20:45.069444 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] ***
2025-05-26 03:20:45.116004 | orchestrator | ok: [testbed-manager] => {
2025-05-26 03:20:45.116038 | orchestrator |     "msg": "The major version of the running postgres container is 16"
2025-05-26 03:20:45.116050 | orchestrator | }
2025-05-26 03:20:45.116062 | orchestrator |
2025-05-26 03:20:45.116073 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ******************
2025-05-26 03:20:45.651854 | orchestrator | ok: [testbed-manager]
2025-05-26 03:20:45.651957 | orchestrator |
2025-05-26 03:20:45.651972 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] **********
2025-05-26 03:20:46.426571 | orchestrator | ok: [testbed-manager]
2025-05-26 03:20:46.426702 | orchestrator |
2025-05-26 03:20:46.426721 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ******
2025-05-26 03:20:46.492531 | orchestrator | ok: [testbed-manager]
2025-05-26 03:20:46.492631 | orchestrator |
2025-05-26 03:20:46.492646 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] ***
2025-05-26 03:20:46.543258 | orchestrator | ok: [testbed-manager] => {
2025-05-26 03:20:46.543341 | orchestrator |     "msg": "The major version of the postgres image is 16"
2025-05-26 03:20:46.543356 | orchestrator | }
2025-05-26 03:20:46.543368 | orchestrator |
2025-05-26 03:20:46.543380 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ******************
2025-05-26 03:20:46.597235 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:20:46.597272 | orchestrator |
2025-05-26 03:20:46.597285 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ******
2025-05-26 03:20:46.652492 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:20:46.652565 | orchestrator |
2025-05-26 03:20:46.652578 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] *********
2025-05-26 03:20:46.707595 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:20:46.707685 | orchestrator |
2025-05-26 03:20:46.707700 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************
2025-05-26 03:20:46.818280 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:20:46.818392 | orchestrator |
2025-05-26 03:20:46.818415 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] ***
2025-05-26 03:20:46.867708 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:20:46.867788 | orchestrator |
2025-05-26 03:20:46.867802 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] *****************
2025-05-26 03:20:46.917188 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:20:46.917256 | orchestrator |
2025-05-26 03:20:46.917271 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-05-26 03:20:48.194368 | orchestrator | changed: [testbed-manager]
2025-05-26 03:20:48.194473 | orchestrator |
2025-05-26 03:20:48.194488 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] ***
2025-05-26 03:20:48.252244 | orchestrator | ok: [testbed-manager]
2025-05-26 03:20:48.252335 | orchestrator |
2025-05-26 03:20:48.252350 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] *****
2025-05-26 03:21:48.308696 | orchestrator | Pausing for 60 seconds
2025-05-26 03:21:48.308821 | orchestrator | changed: [testbed-manager]
2025-05-26 03:21:48.308837 | orchestrator |
2025-05-26 03:21:48.308850 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] ***
2025-05-26 03:21:48.359416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager
2025-05-26 03:21:48.359492 | orchestrator |
2025-05-26 03:21:48.359506 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] ***
2025-05-26 03:25:17.530337 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left).
2025-05-26 03:25:17.530458 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left).
2025-05-26 03:25:17.530481 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left).
2025-05-26 03:25:17.530495 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left).
2025-05-26 03:25:17.530507 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left).
2025-05-26 03:25:17.530518 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left).
2025-05-26 03:25:17.530530 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left).
2025-05-26 03:25:17.530541 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left).
2025-05-26 03:25:17.530552 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left).
2025-05-26 03:25:17.530563 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left).
2025-05-26 03:25:17.530574 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left).
2025-05-26 03:25:17.530585 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left).
2025-05-26 03:25:17.530596 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left).
2025-05-26 03:25:17.530607 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left).
2025-05-26 03:25:17.530618 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left).
2025-05-26 03:25:17.530629 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left).
2025-05-26 03:25:17.530640 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left).
2025-05-26 03:25:17.530651 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left).
2025-05-26 03:25:17.530667 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left).
2025-05-26 03:25:17.530685 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left).
2025-05-26 03:25:17.530739 | orchestrator | changed: [testbed-manager]
2025-05-26 03:25:17.530755 | orchestrator |
2025-05-26 03:25:17.530767 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-05-26 03:25:17.530779 | orchestrator |
2025-05-26 03:25:17.530808 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-26 03:25:19.578136 | orchestrator | ok: [testbed-manager]
2025-05-26 03:25:19.578266 | orchestrator |
2025-05-26 03:25:19.578290 | orchestrator | TASK [Apply manager role] ******************************************************
2025-05-26 03:25:19.687247 | orchestrator | included: osism.services.manager for testbed-manager
2025-05-26 03:25:19.687344 | orchestrator |
2025-05-26 03:25:19.687358 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-05-26 03:25:19.747208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-05-26 03:25:19.747292 | orchestrator |
2025-05-26 03:25:19.747307 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-05-26 03:25:21.623193 | orchestrator | ok: [testbed-manager]
2025-05-26 03:25:21.623303 | orchestrator |
2025-05-26 03:25:21.623320 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-05-26 03:25:21.686258 | orchestrator | ok: [testbed-manager]
2025-05-26 03:25:21.686362 | orchestrator |
2025-05-26 03:25:21.686377 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-05-26 03:25:21.779590 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-05-26 03:25:21.779677 | orchestrator |
2025-05-26 03:25:21.779694 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-05-26 03:25:24.602565 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-05-26 03:25:24.602680 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-05-26 03:25:24.602695 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-05-26 03:25:24.602709 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-05-26 03:25:24.602720 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-05-26 03:25:24.602732 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-05-26 03:25:24.602743 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-05-26 03:25:24.602755 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-05-26 03:25:24.602771 | orchestrator |
2025-05-26 03:25:24.602785 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-05-26 03:25:25.249034 | orchestrator | changed: [testbed-manager]
2025-05-26 03:25:25.249147 | orchestrator |
2025-05-26 03:25:25.249163 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-05-26 03:25:25.878926 | orchestrator | changed: [testbed-manager]
2025-05-26 03:25:25.879086 | orchestrator |
2025-05-26 03:25:25.879104 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-05-26 03:25:25.956929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-05-26 03:25:25.956970 | orchestrator |
2025-05-26 03:25:25.957013 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-05-26 03:25:27.178939 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-05-26 03:25:27.179113 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-05-26 03:25:27.179131 | orchestrator |
2025-05-26 03:25:27.179144 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-05-26 03:25:27.829255 | orchestrator | changed: [testbed-manager]
2025-05-26 03:25:27.829354 | orchestrator |
2025-05-26 03:25:27.829367 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-05-26 03:25:27.886293 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:25:27.886385 | orchestrator |
2025-05-26 03:25:27.886399 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-05-26 03:25:27.963624 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-05-26 03:25:27.963746 | orchestrator |
2025-05-26 03:25:27.963761 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-05-26 03:25:29.337413 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-26 03:25:29.337523 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-26 03:25:29.337539 | orchestrator | changed: [testbed-manager]
2025-05-26 03:25:29.337553 | orchestrator |
2025-05-26 03:25:29.337565 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-05-26 03:25:29.945233 | orchestrator | changed: [testbed-manager]
2025-05-26 03:25:29.945342 | orchestrator |
2025-05-26 03:25:29.945358 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-05-26 03:25:30.036179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager
2025-05-26 03:25:30.036267 | orchestrator |
2025-05-26 03:25:30.036284 | orchestrator | TASK [osism.services.manager : Copy secret files] ******************************
2025-05-26 03:25:31.212743 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-26 03:25:31.212865 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-26 03:25:31.212885 | orchestrator | changed: [testbed-manager]
2025-05-26 03:25:31.212898 | orchestrator |
2025-05-26 03:25:31.212911 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] *******************
2025-05-26 03:25:31.851625 | orchestrator | changed: [testbed-manager]
2025-05-26 03:25:31.851732 | orchestrator |
2025-05-26 03:25:31.851747 | orchestrator | TASK [osism.services.manager : Copy inventory-reconciler environment file] *****
2025-05-26 03:25:32.476604 | orchestrator | changed: [testbed-manager]
2025-05-26 03:25:32.476709 | orchestrator |
2025-05-26 03:25:32.476725 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-05-26 03:25:32.623609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
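The netbox handlers above printed both the major version of the running postgres container and of the pulled postgres image (both 16), and then skipped the "Upgrade postgres database" and pgautoupgrade steps. A minimal sketch of that gate, assuming the decision reduces to an integer comparison of major versions (`needs_pg_upgrade` is a hypothetical helper, not code from the role):

```shell
# Decide whether the postgres database needs a major-version upgrade:
# only when the image carries a newer major version than the running
# container. In the log above both were 16, so the upgrade was skipped.
needs_pg_upgrade() {
    container_major="$1"
    image_major="$2"
    [ "$image_major" -gt "$container_major" ]
}
```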
2025-05-26 03:25:32.623722 | orchestrator |
2025-05-26 03:25:32.623739 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-05-26 03:25:33.154965 | orchestrator | changed: [testbed-manager]
2025-05-26 03:25:33.155135 | orchestrator |
2025-05-26 03:25:33.155152 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-05-26 03:25:33.580646 | orchestrator | changed: [testbed-manager]
2025-05-26 03:25:33.580751 | orchestrator |
2025-05-26 03:25:33.580767 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-05-26 03:25:34.903715 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-05-26 03:25:34.903790 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-05-26 03:25:34.903795 | orchestrator |
2025-05-26 03:25:34.903800 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-05-26 03:25:35.581804 | orchestrator | changed: [testbed-manager]
2025-05-26 03:25:35.581945 | orchestrator |
2025-05-26 03:25:35.581964 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-05-26 03:25:35.996872 | orchestrator | ok: [testbed-manager]
2025-05-26 03:25:35.996964 | orchestrator |
2025-05-26 03:25:35.997013 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-05-26 03:25:36.386746 | orchestrator | changed: [testbed-manager]
2025-05-26 03:25:36.386854 | orchestrator |
2025-05-26 03:25:36.386870 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-05-26 03:25:36.443884 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:25:36.444023 | orchestrator |
2025-05-26 03:25:36.444048 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-05-26 03:25:36.532091 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-05-26 03:25:36.532180 | orchestrator |
2025-05-26 03:25:36.532196 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-05-26 03:25:36.589303 | orchestrator | ok: [testbed-manager]
2025-05-26 03:25:36.589388 | orchestrator |
2025-05-26 03:25:36.589402 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-05-26 03:25:38.704827 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-05-26 03:25:38.705004 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-05-26 03:25:38.705024 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-05-26 03:25:38.705037 | orchestrator |
2025-05-26 03:25:38.705050 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-05-26 03:25:39.426208 | orchestrator | changed: [testbed-manager]
2025-05-26 03:25:39.426317 | orchestrator |
2025-05-26 03:25:39.426334 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-05-26 03:25:40.161727 | orchestrator | changed: [testbed-manager]
2025-05-26 03:25:40.161828 | orchestrator |
2025-05-26 03:25:40.161845 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-05-26 03:25:40.890069 | orchestrator | changed: [testbed-manager]
2025-05-26 03:25:40.890171 | orchestrator |
2025-05-26 03:25:40.890188 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-05-26 03:25:40.974474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-05-26 03:25:40.974545 | orchestrator |
2025-05-26 03:25:40.974561 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-05-26 03:25:41.031211 | orchestrator | ok: [testbed-manager]
2025-05-26 03:25:41.031263 | orchestrator |
2025-05-26 03:25:41.031277 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-05-26 03:25:41.750692 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-05-26 03:25:41.750798 | orchestrator |
2025-05-26 03:25:41.750814 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-05-26 03:25:41.837870 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-05-26 03:25:41.838004 | orchestrator |
2025-05-26 03:25:41.838068 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-05-26 03:25:42.554315 | orchestrator | changed: [testbed-manager]
2025-05-26 03:25:42.554417 | orchestrator |
2025-05-26 03:25:42.554432 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-05-26 03:25:43.175681 | orchestrator | ok: [testbed-manager]
2025-05-26 03:25:43.175772 | orchestrator |
2025-05-26 03:25:43.175782 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-05-26 03:25:43.228505 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:25:43.228585 | orchestrator |
2025-05-26 03:25:43.228617 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-05-26 03:25:43.289357 | orchestrator | ok: [testbed-manager]
2025-05-26 03:25:43.289448 | orchestrator |
2025-05-26 03:25:43.289463 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-05-26 03:25:44.102455 | orchestrator | changed: [testbed-manager]
2025-05-26 03:25:44.102555 | orchestrator |
2025-05-26 03:25:44.102569 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-05-26 03:26:28.752765 | orchestrator | changed: [testbed-manager]
2025-05-26 03:26:28.752904 | orchestrator |
2025-05-26 03:26:28.752929 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-05-26 03:26:29.426573 | orchestrator | ok: [testbed-manager]
2025-05-26 03:26:29.426690 | orchestrator |
2025-05-26 03:26:29.426717 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-05-26 03:26:32.305867 | orchestrator | changed: [testbed-manager]
2025-05-26 03:26:32.306078 | orchestrator |
2025-05-26 03:26:32.306111 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-05-26 03:26:32.363225 | orchestrator | ok: [testbed-manager]
2025-05-26 03:26:32.363293 | orchestrator |
2025-05-26 03:26:32.363306 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-05-26 03:26:32.363319 | orchestrator |
2025-05-26 03:26:32.363330 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-05-26 03:26:32.430209 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:26:32.430282 | orchestrator |
2025-05-26 03:26:32.430296 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-05-26 03:27:32.475597 | orchestrator | Pausing for 60 seconds
2025-05-26 03:27:32.475721 | orchestrator | changed: [testbed-manager]
2025-05-26 03:27:32.475767 | orchestrator |
2025-05-26 03:27:32.475781 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-05-26 03:27:36.407021 | orchestrator | changed: [testbed-manager]
2025-05-26 03:27:36.407134 | orchestrator |
2025-05-26 03:27:36.407149 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-05-26 03:28:18.002552 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-05-26 03:28:18.002677 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-05-26 03:28:18.002694 | orchestrator | changed: [testbed-manager]
2025-05-26 03:28:18.002708 | orchestrator |
2025-05-26 03:28:18.002720 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-05-26 03:28:26.679835 | orchestrator | changed: [testbed-manager]
2025-05-26 03:28:26.680000 | orchestrator |
2025-05-26 03:28:26.680018 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-05-26 03:28:26.775210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-05-26 03:28:26.775308 | orchestrator |
2025-05-26 03:28:26.775322 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-05-26 03:28:26.775334 | orchestrator |
2025-05-26 03:28:26.775346 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-05-26 03:28:26.830813 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:28:26.830942 | orchestrator |
2025-05-26 03:28:26.830970 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 03:28:26.830984 | orchestrator | testbed-manager : ok=111 changed=59 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0
2025-05-26 03:28:26.830996 | orchestrator |
2025-05-26 03:28:26.951838 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-26 03:28:26.951977 | orchestrator | + deactivate
2025-05-26 03:28:26.951995 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-05-26 03:28:26.952010 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-26 03:28:26.952021 | orchestrator | + export PATH
2025-05-26 03:28:26.952032 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-05-26 03:28:26.952044 | orchestrator | + '[' -n '' ']'
2025-05-26 03:28:26.952055 | orchestrator | + hash -r
2025-05-26 03:28:26.952066 | orchestrator | + '[' -n '' ']'
2025-05-26 03:28:26.952076 | orchestrator | + unset VIRTUAL_ENV
2025-05-26 03:28:26.952088 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-05-26 03:28:26.952100 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-05-26 03:28:26.952111 | orchestrator | + unset -f deactivate
2025-05-26 03:28:26.952123 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-05-26 03:28:26.958009 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-26 03:28:26.958147 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-26 03:28:26.958161 | orchestrator | + local max_attempts=60
2025-05-26 03:28:26.958173 | orchestrator | + local name=ceph-ansible
2025-05-26 03:28:26.958184 | orchestrator | + local attempt_num=1
2025-05-26 03:28:26.959028 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-26 03:28:27.000769 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-26 03:28:27.000819 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-26 03:28:27.000833 | orchestrator | + local max_attempts=60
2025-05-26 03:28:27.000845 | orchestrator | + local name=kolla-ansible
2025-05-26 03:28:27.000856 | orchestrator | + local attempt_num=1
2025-05-26 03:28:27.001350 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-26 03:28:27.031606 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-26 03:28:27.031641 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-26 03:28:27.031653 | orchestrator | + local max_attempts=60
2025-05-26 03:28:27.031665 | orchestrator | + local name=osism-ansible
2025-05-26 03:28:27.031675 | orchestrator | + local attempt_num=1
2025-05-26 03:28:27.032406 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-26 03:28:27.076621 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-26 03:28:27.076677 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-26 03:28:27.076721 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-26 03:28:27.806080 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-05-26 03:28:28.006429 | orchestrator | NAME   IMAGE   COMMAND   SERVICE   CREATED   STATUS   PORTS
2025-05-26 03:28:28.006555 | orchestrator | ceph-ansible   registry.osism.tech/osism/ceph-ansible:reef   "/entrypoint.sh osis…"   ceph-ansible   About a minute ago   Up About a minute (healthy)
2025-05-26 03:28:28.006573 | orchestrator | kolla-ansible   registry.osism.tech/osism/kolla-ansible:2024.2   "/entrypoint.sh osis…"   kolla-ansible   About a minute ago   Up About a minute (healthy)
2025-05-26 03:28:28.006585 | orchestrator | manager-api-1   registry.osism.tech/osism/osism:latest   "/sbin/tini -- osism…"   api   About a minute ago   Up About a minute (healthy)   192.168.16.5:8000->8000/tcp
2025-05-26 03:28:28.006598 | orchestrator | manager-ara-server-1   registry.osism.tech/osism/ara-server:1.7.2   "sh -c '/wait && /ru…"   ara-server   About a minute ago   Up About a minute (healthy)   8000/tcp
2025-05-26 03:28:28.006610 | orchestrator | manager-beat-1   registry.osism.tech/osism/osism:latest   "/sbin/tini -- osism…"   beat   About a minute ago   Up About a minute (healthy)
2025-05-26 03:28:28.006621 | orchestrator | manager-conductor-1   registry.osism.tech/osism/osism:latest   "/sbin/tini -- osism…"   conductor   About a minute ago   Up About a minute (healthy)
2025-05-26 03:28:28.006632 | orchestrator | manager-flower-1   registry.osism.tech/osism/osism:latest   "/sbin/tini -- osism…"   flower   About a minute ago   Up About a minute (healthy)
2025-05-26 03:28:28.006643 | orchestrator | manager-inventory_reconciler-1   registry.osism.tech/osism/inventory-reconciler:latest   "/sbin/tini -- /entr…"   inventory_reconciler   About a minute ago   Up 51 seconds (healthy)
2025-05-26 03:28:28.006654 | orchestrator | manager-listener-1   registry.osism.tech/osism/osism:latest   "/sbin/tini -- osism…"   listener   About a minute ago   Up About a minute (healthy)
2025-05-26 03:28:28.006665 | orchestrator | manager-mariadb-1   registry.osism.tech/dockerhub/library/mariadb:11.7.2   "docker-entrypoint.s…"   mariadb   About a minute ago   Up About a minute (healthy)   3306/tcp
2025-05-26 03:28:28.006676 | orchestrator | manager-netbox-1   registry.osism.tech/osism/osism:latest   "/sbin/tini -- osism…"   netbox   About a minute ago   Up About a minute (healthy)
2025-05-26 03:28:28.006686 | orchestrator | manager-openstack-1   registry.osism.tech/osism/osism:latest   "/sbin/tini -- osism…"   openstack   About a minute ago   Up About a minute (healthy)
2025-05-26 03:28:28.006697 | orchestrator | manager-redis-1   registry.osism.tech/dockerhub/library/redis:7.4.3-alpine   "docker-entrypoint.s…"   redis   About a minute ago   Up About a minute (healthy)   6379/tcp
2025-05-26 03:28:28.006708 | orchestrator | manager-watchdog-1   registry.osism.tech/osism/osism:latest   "/sbin/tini -- osism…"   watchdog   About a minute ago   Up About a minute (healthy)
2025-05-26 03:28:28.006719 | orchestrator | osism-ansible   registry.osism.tech/osism/osism-ansible:latest   "/entrypoint.sh osis…"   osism-ansible   About a minute ago   Up About a minute (healthy)
2025-05-26 03:28:28.006730 | orchestrator | osism-kubernetes   registry.osism.tech/osism/osism-kubernetes:latest   "/entrypoint.sh osis…"   osism-kubernetes   About a minute ago   Up About a minute (healthy)
2025-05-26 03:28:28.006741 | orchestrator | osismclient   registry.osism.tech/osism/osism:latest   "/sbin/tini -- sleep…"   osismclient   About a minute ago   Up About a minute (healthy)
2025-05-26 03:28:28.012669 | orchestrator | + docker compose --project-directory /opt/netbox ps
2025-05-26 03:28:28.178274 | orchestrator | NAME   IMAGE   COMMAND   SERVICE   CREATED   STATUS   PORTS
2025-05-26 03:28:28.178360 | orchestrator | netbox-netbox-1   registry.osism.tech/osism/netbox:v4.2.2   "/usr/bin/tini -- /o…"   netbox   8 minutes ago   Up 7 minutes (healthy)
2025-05-26 03:28:28.178371 | orchestrator | netbox-netbox-worker-1   registry.osism.tech/osism/netbox:v4.2.2   "/opt/netbox/venv/bi…"   netbox-worker   8 minutes ago   Up 3 minutes (healthy)
2025-05-26 03:28:28.178381 | orchestrator | netbox-postgres-1   registry.osism.tech/dockerhub/library/postgres:16.9-alpine   "docker-entrypoint.s…"   postgres   8 minutes ago   Up 7 minutes (healthy)   5432/tcp
2025-05-26 03:28:28.178392 | orchestrator | netbox-redis-1   registry.osism.tech/dockerhub/library/redis:7.4.3-alpine   "docker-entrypoint.s…"   redis   8 minutes ago   Up 7 minutes (healthy)   6379/tcp
2025-05-26 03:28:28.186411 | orchestrator | ++ semver latest 7.0.0
2025-05-26 03:28:28.233049 | orchestrator | + [[ -1 -ge 0 ]]
2025-05-26 03:28:28.233120 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-05-26 03:28:28.233155 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-05-26 03:28:28.236591 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-05-26 03:28:29.965253 | orchestrator | Registering Redlock._acquired_script
2025-05-26 03:28:29.965358 | orchestrator | Registering Redlock._extend_script
2025-05-26 03:28:29.965373 | orchestrator | Registering Redlock._release_script
2025-05-26 03:28:30.172367 | orchestrator | 2025-05-26 03:28:30 | INFO | Task 3dc4afd4-0c97-428f-9a1e-63e286f57706 (resolvconf) was prepared for execution.
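The `set -x` trace above shows the shape of the `wait_for_container_healthy` helper: it takes an attempt budget and a container name and polls `docker inspect -f '{{.State.Health.Status}}'` until the container reports `healthy`. A reconstruction from that trace (loop body and error handling are assumed; the traced script calls `/usr/bin/docker` directly, while plain `docker` is used here):

```shell
# Poll a container's health status until it is "healthy" or the attempt
# budget runs out. Mirrors the calls visible in the trace above.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" = healthy ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "Container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the log the first `docker inspect` for each of ceph-ansible, kolla-ansible, and osism-ansible already returned `healthy`, so each call exits the `until` loop immediately.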
2025-05-26 03:28:30.172474 | orchestrator | 2025-05-26 03:28:30 | INFO | It takes a moment until task 3dc4afd4-0c97-428f-9a1e-63e286f57706 (resolvconf) has been started and output is visible here.
2025-05-26 03:28:33.988057 | orchestrator |
2025-05-26 03:28:33.988177 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-05-26 03:28:33.988196 | orchestrator |
2025-05-26 03:28:33.989702 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-26 03:28:33.990718 | orchestrator | Monday 26 May 2025 03:28:33 +0000 (0:00:00.145) 0:00:00.145 ************
2025-05-26 03:28:38.159013 | orchestrator | ok: [testbed-manager]
2025-05-26 03:28:38.159140 | orchestrator |
2025-05-26 03:28:38.160617 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-05-26 03:28:38.160658 | orchestrator | Monday 26 May 2025 03:28:38 +0000 (0:00:04.177) 0:00:04.323 ************
2025-05-26 03:28:38.241510 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:28:38.242493 | orchestrator |
2025-05-26 03:28:38.244066 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-05-26 03:28:38.244716 | orchestrator | Monday 26 May 2025 03:28:38 +0000 (0:00:00.083) 0:00:04.406 ************
2025-05-26 03:28:38.341174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-05-26 03:28:38.341568 | orchestrator |
2025-05-26 03:28:38.342525 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-05-26 03:28:38.343905 | orchestrator | Monday 26 May 2025 03:28:38 +0000 (0:00:00.099) 0:00:04.506 ************
2025-05-26 03:28:38.429404 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-05-26 03:28:38.429795 | orchestrator |
2025-05-26 03:28:38.431619 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-05-26 03:28:38.432657 | orchestrator | Monday 26 May 2025 03:28:38 +0000 (0:00:00.088) 0:00:04.594 ************
2025-05-26 03:28:39.578621 | orchestrator | ok: [testbed-manager]
2025-05-26 03:28:39.579400 | orchestrator |
2025-05-26 03:28:39.580513 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-05-26 03:28:39.581228 | orchestrator | Monday 26 May 2025 03:28:39 +0000 (0:00:01.147) 0:00:05.742 ************
2025-05-26 03:28:39.653459 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:28:39.653775 | orchestrator |
2025-05-26 03:28:39.654823 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-05-26 03:28:39.656603 | orchestrator | Monday 26 May 2025 03:28:39 +0000 (0:00:00.075) 0:00:05.818 ************
2025-05-26 03:28:40.174450 | orchestrator | ok: [testbed-manager]
2025-05-26 03:28:40.175345 | orchestrator |
2025-05-26 03:28:40.175828 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-05-26 03:28:40.176649 | orchestrator | Monday 26 May 2025 03:28:40 +0000 (0:00:00.519) 0:00:06.338 ************
2025-05-26 03:28:40.249236 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:28:40.249787 | orchestrator |
2025-05-26 03:28:40.250523 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-05-26 03:28:40.251270 | orchestrator | Monday 26 May 2025 03:28:40 +0000 (0:00:00.075) 0:00:06.413 ************
2025-05-26 03:28:40.868870 | orchestrator | changed: [testbed-manager]
2025-05-26 03:28:40.869022 | orchestrator |
2025-05-26 03:28:40.869385 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-05-26 03:28:40.869765 | orchestrator | Monday 26 May 2025 03:28:40 +0000 (0:00:00.620) 0:00:07.034 ************
2025-05-26 03:28:42.100055 | orchestrator | changed: [testbed-manager]
2025-05-26 03:28:42.101070 | orchestrator |
2025-05-26 03:28:42.101409 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-05-26 03:28:42.102432 | orchestrator | Monday 26 May 2025 03:28:42 +0000 (0:00:01.229) 0:00:08.263 ************
2025-05-26 03:28:43.076231 | orchestrator | ok: [testbed-manager]
2025-05-26 03:28:43.078665 | orchestrator |
2025-05-26 03:28:43.081105 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-05-26 03:28:43.081145 | orchestrator | Monday 26 May 2025 03:28:43 +0000 (0:00:00.972) 0:00:09.236 ************
2025-05-26 03:28:43.154787 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-05-26 03:28:43.155714 | orchestrator |
2025-05-26 03:28:43.156101 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-05-26 03:28:43.156934 | orchestrator | Monday 26 May 2025 03:28:43 +0000 (0:00:00.081) 0:00:09.317 ************
2025-05-26 03:28:44.338121 | orchestrator | changed: [testbed-manager]
2025-05-26 03:28:44.338434 | orchestrator |
2025-05-26 03:28:44.339750 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 03:28:44.340937 | orchestrator | 2025-05-26 03:28:44 | INFO | Play has been completed. There may now be a delay until all logs have been written.
2025-05-26 03:28:44.340986 | orchestrator | 2025-05-26 03:28:44 | INFO | Please wait and do not abort execution.
2025-05-26 03:28:44.342720 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-26 03:28:44.344016 | orchestrator |
2025-05-26 03:28:44.344688 | orchestrator |
2025-05-26 03:28:44.345431 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 03:28:44.346687 | orchestrator | Monday 26 May 2025 03:28:44 +0000 (0:00:01.181) 0:00:10.498 ************
2025-05-26 03:28:44.347424 | orchestrator | ===============================================================================
2025-05-26 03:28:44.348515 | orchestrator | Gathering Facts --------------------------------------------------------- 4.18s
2025-05-26 03:28:44.349380 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.23s
2025-05-26 03:28:44.350123 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.18s
2025-05-26 03:28:44.351083 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.15s
2025-05-26 03:28:44.351581 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.97s
2025-05-26 03:28:44.352664 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.62s
2025-05-26 03:28:44.353155 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.52s
2025-05-26 03:28:44.353680 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.10s
2025-05-26 03:28:44.354281 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2025-05-26 03:28:44.355470 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s
2025-05-26 03:28:44.355585 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-05-26 03:28:44.355970 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.08s
2025-05-26 03:28:44.356614 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-05-26 03:28:44.806369 | orchestrator | + osism apply sshconfig
2025-05-26 03:28:46.463212 | orchestrator | Registering Redlock._acquired_script
2025-05-26 03:28:46.463318 | orchestrator | Registering Redlock._extend_script
2025-05-26 03:28:46.463333 | orchestrator | Registering Redlock._release_script
2025-05-26 03:28:46.520315 | orchestrator | 2025-05-26 03:28:46 | INFO  | Task 02678c2e-986c-43f0-8533-d09318d8d9a3 (sshconfig) was prepared for execution.
2025-05-26 03:28:46.520409 | orchestrator | 2025-05-26 03:28:46 | INFO  | It takes a moment until task 02678c2e-986c-43f0-8533-d09318d8d9a3 (sshconfig) has been started and output is visible here.
2025-05-26 03:28:50.438628 | orchestrator |
2025-05-26 03:28:50.439099 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-05-26 03:28:50.439127 | orchestrator |
2025-05-26 03:28:50.440266 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-05-26 03:28:50.441246 | orchestrator | Monday 26 May 2025 03:28:50 +0000 (0:00:00.164) 0:00:00.164 ************
2025-05-26 03:28:50.989078 | orchestrator | ok: [testbed-manager]
2025-05-26 03:28:50.989791 | orchestrator |
2025-05-26 03:28:50.991319 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-05-26 03:28:50.991669 | orchestrator | Monday 26 May 2025 03:28:50 +0000 (0:00:00.552) 0:00:00.716 ************
2025-05-26 03:28:51.465435 | orchestrator | changed: [testbed-manager]
2025-05-26 03:28:51.466465 | orchestrator |
2025-05-26 03:28:51.467407 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-05-26 03:28:51.468315 | orchestrator |
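The resolvconf play above replaces `/etc/resolv.conf` with a symlink to systemd-resolved's stub file and copies configuration before restarting the service. A sketch of the symlink step, replayed under a temporary directory so it runs without root and without touching the real `/etc` (the directory layout is the real systemd-resolved one; the nameserver content is an assumption):

```shell
# Sketch of the "Link /run/systemd/resolve/stub-resolv.conf to
# /etc/resolv.conf" task, done against mktemp -d instead of /.
root=$(mktemp -d)
mkdir -p "$root/run/systemd/resolve" "$root/etc"
# 127.0.0.53 is systemd-resolved's stub listener address (assumed content).
printf 'nameserver 127.0.0.53\n' > "$root/run/systemd/resolve/stub-resolv.conf"
ln -sf "$root/run/systemd/resolve/stub-resolv.conf" "$root/etc/resolv.conf"
readlink "$root/etc/resolv.conf"
```

Reads of `$root/etc/resolv.conf` now follow the symlink, which is why the role first archives any pre-existing regular file before linking.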
Monday 26 May 2025 03:28:51 +0000 (0:00:00.475) 0:00:01.192 ************
2025-05-26 03:28:57.163418 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-05-26 03:28:57.163541 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-05-26 03:28:57.163557 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-05-26 03:28:57.163569 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-05-26 03:28:57.163809 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-05-26 03:28:57.164650 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-05-26 03:28:57.165372 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-05-26 03:28:57.165599 | orchestrator |
2025-05-26 03:28:57.166333 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-05-26 03:28:57.166730 | orchestrator | Monday 26 May 2025 03:28:57 +0000 (0:00:05.694) 0:00:06.886 ************
2025-05-26 03:28:57.227325 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:28:57.228041 | orchestrator |
2025-05-26 03:28:57.229129 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-05-26 03:28:57.229918 | orchestrator | Monday 26 May 2025 03:28:57 +0000 (0:00:00.070) 0:00:06.957 ************
2025-05-26 03:28:57.834815 | orchestrator | changed: [testbed-manager]
2025-05-26 03:28:57.835599 | orchestrator |
2025-05-26 03:28:57.836077 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 03:28:57.839473 | orchestrator | 2025-05-26 03:28:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-26 03:28:57.839660 | orchestrator | 2025-05-26 03:28:57 | INFO  | Please wait and do not abort execution.
2025-05-26 03:28:57.841034 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-26 03:28:57.841809 | orchestrator |
2025-05-26 03:28:57.842326 | orchestrator |
2025-05-26 03:28:57.843195 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 03:28:57.843515 | orchestrator | Monday 26 May 2025 03:28:57 +0000 (0:00:00.601) 0:00:07.559 ************
2025-05-26 03:28:57.844283 | orchestrator | ===============================================================================
2025-05-26 03:28:57.845266 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.69s
2025-05-26 03:28:57.845668 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.60s
2025-05-26 03:28:57.846228 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.55s
2025-05-26 03:28:57.846693 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.48s
2025-05-26 03:28:57.847251 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2025-05-26 03:28:58.274440 | orchestrator | + osism apply known-hosts
2025-05-26 03:28:59.929106 | orchestrator | Registering Redlock._acquired_script
2025-05-26 03:28:59.929236 | orchestrator | Registering Redlock._extend_script
2025-05-26 03:28:59.929255 | orchestrator | Registering Redlock._release_script
2025-05-26 03:28:59.986405 | orchestrator | 2025-05-26 03:28:59 | INFO  | Task 4215b50b-124a-475f-ae0b-239e66daafdb (known-hosts) was prepared for execution.
2025-05-26 03:28:59.986492 | orchestrator | 2025-05-26 03:28:59 | INFO  | It takes a moment until task 4215b50b-124a-475f-ae0b-239e66daafdb (known-hosts) has been started and output is visible here.
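The sshconfig play above writes one fragment per inventory host into `.ssh/config.d` ("Ensure config for each host exist") and then concatenates them into a single file ("Assemble ssh config"). A minimal sketch of that fragment-and-assemble pattern; the host names come from the log, while the operator user name and the per-host options are assumptions:

```shell
# Per-host ssh config fragments assembled into one config file,
# mirroring the sshconfig role's two main tasks.
home=$(mktemp -d)
mkdir -p "$home/.ssh/config.d"
# One fragment per host ("dragon" as operator user is an assumption).
for host in testbed-manager testbed-node-0 testbed-node-1; do
    printf 'Host %s\n    User dragon\n    StrictHostKeyChecking yes\n' "$host" \
        > "$home/.ssh/config.d/$host"
done
# "Assemble ssh config": concatenate the fragments into one file.
cat "$home"/.ssh/config.d/* > "$home/.ssh/config"
grep -c '^Host ' "$home/.ssh/config"
```

Keeping one file per host makes the loop idempotent: re-running only rewrites the fragments that changed, and the assemble step always produces the same ordering.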
2025-05-26 03:29:03.851228 | orchestrator | 2025-05-26 03:29:03.851346 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-05-26 03:29:03.852243 | orchestrator | 2025-05-26 03:29:03.854353 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-05-26 03:29:03.855313 | orchestrator | Monday 26 May 2025 03:29:03 +0000 (0:00:00.167) 0:00:00.167 ************ 2025-05-26 03:29:09.819674 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-26 03:29:09.820072 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-26 03:29:09.821695 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-26 03:29:09.822009 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-26 03:29:09.823736 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-26 03:29:09.824401 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-26 03:29:09.824767 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-26 03:29:09.825290 | orchestrator | 2025-05-26 03:29:09.826064 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-05-26 03:29:09.826303 | orchestrator | Monday 26 May 2025 03:29:09 +0000 (0:00:05.969) 0:00:06.136 ************ 2025-05-26 03:29:10.008206 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-26 03:29:10.008343 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-26 03:29:10.010186 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-26 03:29:10.010210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-26 03:29:10.010431 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-26 03:29:10.010942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-26 03:29:10.011546 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-26 03:29:10.012030 | orchestrator | 2025-05-26 03:29:10.012420 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-26 03:29:10.012759 | orchestrator | Monday 26 May 2025 03:29:10 +0000 (0:00:00.189) 0:00:06.326 ************ 2025-05-26 03:29:11.178777 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBImhck92IZxgVZMIr3ITcHX2ygnhhBI0X/6dyHdr/ldLPcsjOOScbf3mEAAwBxeR376DReSnvx/T+PGRPL8H1cI=) 2025-05-26 03:29:11.179794 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC/27SqKKd/bqIrUqHFOBO+EwTSvSKqCjCXDwuiIJMo8/HX9NPi4Sb5aCtigQDm962qYcMInHM1VcwD3JME4jKxvsDZaGxqLX5RysOifWC2fs+MB23Op5KPboqbhwkJMt2Njihb/359X82fp8GU3cz+ZDNnsrC0lCHGeHDWcqnbpfq5WRZ1gq53YmyG/EszPjOFCb71jLm/5Ec1EJReVwcZuQr9T0tvnL1N63YlGnOPv/hWNw5RM629LEMuIf3ikQY1XpPHjOgbSoUc+om8MO7VDD09MEfBlXo1SpEC3GlrunhK7WKrTbghYJxeH/dwqmuq82iU6KNzpkOgklk2ATgHV+XqzxO8fWzfwLxGCLIaiiHikaxghY0yf8KqwCoGaTTxxRIq8e7wVrglXgk4rS+LPIBzloNq4J8ZSP2VOyKrvkPSB5MLpvw3CKwL2YM95MAw55axzSmJmAT8rphgVOLKmjW7m6c72IPKFKKnHupMqdDTcR3d4SBgCPMgZ32qPFk=) 2025-05-26 03:29:11.179832 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILKdeAZjDc5TyGw1ybPZFqf2ihWW/L/c1srT0yHYhSfq) 2025-05-26 03:29:11.180779 | orchestrator | 2025-05-26 03:29:11.181425 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-26 03:29:11.182119 | orchestrator | Monday 26 May 2025 03:29:11 +0000 (0:00:01.171) 0:00:07.497 ************ 2025-05-26 03:29:12.224669 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI7PtRBQ3rOR0pYVyqgItsbZzudEQvM/V2zHKNQgwoqZvZ46WqmTaMt6IJWxGD2wsnkoe+/otagNthv+djDyIHk=) 2025-05-26 03:29:12.225672 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDD66i8GmEe/11j8i2KKdDxUpzbmaxyZop3l2hPuY7lPSwdeube5FR+whyFaslfUr9RCjprHwcBtSgrMgeMQquE0gT7viNOvIPjW6pzaQ7Vplq8U5h8hxmDcYk8FKro7OCq/kVPLAkCIkdMYZGww87S3zq6OG+qZjDpxadqVvA4lTtR4bdXBaNZDx0cY/HYX+jOfb+qZTcktUpHwDhRE1xNYGEWvPJ3f8o1aRrZDgSqViyMoWGurg4kuQBHCBR22b2PjKXXOqhnuXyQHtrp0Zs1LLGCNuzEdJuxqxuRHEK6RF4/3L4V9yCDNgAWkJ0EpeKrdzdA7onvoo5NZ9b7435T3HmGARvutkC3D+ZSrmTTNRFQhWtBlOkTeCpedwOY2GkUIashIOsV4zxIDxPu7Fth13TB1wHhvwLisbFO7s0kr59icnOz7eV99kVqDh8EnyAvYupUG7lBVS5g22yU+RNn+GqB1+cF/lLerHSqZXdbc8egCo9A+KJU1oGZ/h1WEhs=) 2025-05-26 03:29:12.226132 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGb39QUDcgg5XxNybSew5EMBqfp9qDhIrBt54BsBvOcF) 2025-05-26 03:29:12.227639 | orchestrator | 2025-05-26 03:29:12.228579 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-26 03:29:12.229570 | orchestrator | Monday 26 May 2025 03:29:12 +0000 (0:00:01.045) 0:00:08.543 ************ 2025-05-26 03:29:13.271364 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCe/ja9fIqjNGnxoDjOUGZk3okFrqtqOAyswMZIQ5ps2lZhHpw/mWy1TV6m8aa22RmhYleNc4KYwcmp3NchuKA3v3GP8OTIZgjXF4yySz9/U2qnV+I1EiTiZRO0zXNCsS7rizsC/jpuRFw4SFPsr0Hue82Y5ETogWPI0gd3xtbeuKxdQq1INzgOBYeKhdDsIwDtDVwDnlIz55XBzf1YQkF9k6t9AknIH56tGonLJw8vpMCiotdNZUTRnhRuX0c2JkbN5NmX2WyDr6t0M2gruib44mRjHnSqZBk9YIea3EsrjDTXPbRF4j5YtnC14wo8VKYU6iL7Sg8lqBIFvLn9Oi2JQK174WHBhr92+IfkizvxMsL/K+I2BXi7zb/6K8mW6Zb/EvFlVlSImLR7ZcNZdtjuJDqBRchRZr34tHvooicy/ormMvQxVeyXL7S9dFAuYkBcpLCEebZZK4ww2uXWiD6WTyCRC7N+WHK036xINgADnivjg15RgwNsYupDBkMbqfU=) 2025-05-26 03:29:13.272048 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFr7Y5nvLu9mDjXjZnkXmrnLH/EIi1clf96fQpLevXxEbXKVAF0f0CQAuB0nvHpS/cAoz+V2kjus6Ar5WUwoHN0=) 2025-05-26 03:29:13.272938 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMqnDbhOFKXtvbQcAycS2o4hLwXWSzmNqeBeoqSWPD5G) 2025-05-26 03:29:13.273969 | orchestrator | 2025-05-26 03:29:13.274861 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-26 03:29:13.277263 | orchestrator | Monday 26 May 2025 03:29:13 +0000 (0:00:01.045) 0:00:09.589 ************ 2025-05-26 03:29:14.353102 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJuRqo5CEybX07necxelUPg3d9Jo5m4jCJskx/FzE2l2) 2025-05-26 03:29:14.353476 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDpVhGn+qKdGCgX/LfsodG6EVE0KJ3ER0X9q3J9npySpwQYgErBmVRky4Tjc75uM9AoL4Zj4tDbIYadP7tKnJAmjngZj69IWKnTcPZK3eH+FcZ19joirPvvtwCz57WqH8whBXANidEe6+Km0bm3th1gUgwoSHJlO0Tju1pHnZl/+kagMFG7jDr8Plq3HX9IkHO4aia9VOuqxMO5HoLPcB3MkEJBWRDxnKOWAP7gr1M+2B9JxYPfrNQkd1W1Qh+3r82skubQM6898fZTdWUw0TbDGYqOYN/NSfUlIMC7nX/fwQjEKFRD3GbRNdKAV8/LdHCOh9e+OITPqEvQ5TtE6lqyRebFhojU2vMABSFD4mhzL3PV5G0fvDFkVUcCwDE5rVwSDudyDJukSccpf+RfTtAIn89Y3teAg5hnIHbypBL8ADPoGxg8o3zanAYHv/2wWWR5XrU+es5tr1ywMVCuTDoMjCMSxWfMCofalnFErKFzuQ5FUGvm46cq61MZGUm1vnM=) 2025-05-26 03:29:14.354598 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKCdcmvdI5zd6VNu3AisTEZ7eaLf1gQl9xbXtvtT9N8ocGWWECRtLzqHASmMM9mITU+DHI1q/OThDEMi+Jl73G8=) 2025-05-26 03:29:14.355257 | orchestrator | 2025-05-26 03:29:14.356002 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-26 03:29:14.356503 | orchestrator | Monday 26 May 2025 03:29:14 +0000 (0:00:01.081) 0:00:10.670 ************ 2025-05-26 03:29:15.365430 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMcFVYax2bvc/AHMQcIdn7GFY17mCezEPRGRgQyTNRYScl04aPxuhydXEcLKxJ5jYt5GsjNRN+iSbdWHIgx3pEI=) 2025-05-26 03:29:15.365737 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBHOn1GkLe8Q/a4GNx0WTWIX3ldL88BaoqEZ62/ZhuK+) 2025-05-26 03:29:15.365982 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC7mq4NuUcKhDYeCo7vsgtPPDRxGIEGIXZqJxhmXlD5Sel2Po2i9bKK9hUjDzXGLbElXfq8M92KEor7/D5I5SLldOcTZ04twQifJD19sCsXHKPHLiKmr/tfqZw7mM8Z2DFfNIlbszWSi2abr+N2/bupFd+ozzsQ0jjBa85m4W/+MQftu8s+ji+Z/HD/UA1T5n0BXMvVk9rye76ftmR55wJKya5Yp5AifxttFRH28WZmM3S0cQH3+cdRxIeaDUg0FXjYtZRgwPErRzI4SzPqFtunhkTBNUrz7lmZtPZH7bbxk74JnLCLOMkMWknbPdBqBmaldLoe1L+guQN4CZI7irCrcMB+4xIDap2jREjw1owobiqCqje4m9gmA5jAd+l4lx692YozHSzuHYGhnXIYFMCawzgMXmm9GqbroCvuSJlg89aVK+ddo1AERQP3UbYhb6X+QuOlHqxHCiY6TRAWeUbDZjxVKd9kd4+PzslJXc8qzt0Xo3oFGY+YdEZXNMXiwpM=) 2025-05-26 03:29:15.367197 | orchestrator | 2025-05-26 03:29:15.367243 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-26 03:29:15.367255 | orchestrator | Monday 26 May 2025 03:29:15 +0000 (0:00:01.012) 0:00:11.683 ************ 2025-05-26 03:29:16.462799 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCp4O2bbsYF4XAM+MR0x2eCfV6GjMKzOkon9VginN41OWLeeHCkyA1b/3nO5nlMa4kwsL5IEQVRTpoynfwwtPyOpkZdGPGsx9eNW3yA03VR3XrRspUJy8nG1aUQVGjkIb69VX0+dYYjUugsRFpgl34AmNjbTZOrxZZHAMZRSK7bq9N28XkYRaPQ/Sl9gAt/pUWhuMLiaHlaqUt+BiJwoBhN2gmtZOtA+CAFoHZ6CJ1JqPOQAcMkbtJM5A5izLx2oEuMB6KarPF2gk3y5yQjB8Wxbq4oId+a/DxWRo33w4PewvOsEIXxt6vXLaT8h8jnpuE8iLTG1rhng0PN4UeW+cr+JfBLiVNpc6IyqL5Ds5aVo/n7t20G/tf39KRnYHZTXnr+42GWJxbeQlExHYZu8PhHJFId3eVuBzv/sSUBHRcQW+qdJJsqLAPitB2a5Rv5OiXG1jE1dL12i/QFD23o1U8kKiMDX7SCqYSP3rjRMkJ6tXUHKzJmy8tlm4Sae8pNcFs=) 2025-05-26 03:29:16.463495 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOiDS0vSoeikantoL3sQv0qHS/VHY3EzPfRax/A4xDoF1sk1uD3k2bCfB2SD03Yiuf2OEoiSW1lSmAEZFDUhGIo=) 2025-05-26 03:29:16.463942 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ/JFaA+Gyd1nlGkOS3N1ADdYgFo6UTsQNEZROdyOOxF) 2025-05-26 03:29:16.465789 | orchestrator | 2025-05-26 03:29:16.466476 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-26 03:29:16.466599 | orchestrator | Monday 26 May 2025 03:29:16 +0000 (0:00:01.098) 0:00:12.782 ************ 2025-05-26 03:29:17.537564 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDvUu1DQzd6KRLku+kmNicl2J3iMrQIaMSPZBn9s9hlQW4+eTJ69jsbloY/KwmfpfjVIVyppZnE1/ycCn1knGbwNIXA7OTvpM1iTivh70rDpFEhwxQGNt+qHBbwRPEwaOCz+PGg9LFf7/syyGjgMiTblNv1SSx3W+ADTat5qXMFFepemh81R79TUonRtA5/M4lY14CbxsBk9lJiPGI2PeHMDkP04fHN+65Tt2zrpuELSA13xEcXMVnaVA19D+Bi1ew5Y+FSBPKfUOyTznMQ4zBcVOpdlmw6jFcB09AWK9hnqHOhjR/nl4b5dq2HBYL5GQAGcxuD2BhzmFFtTPuqonmn3qvXcEcqOX02PZ7La8KcWVMRRdO6RL6pp9VOsTfbtmQNLjFQ5UoEqag6X1W9EKEhsM1llwwDk+Nl8QZQRUtIMhdIikvm7/vxR556qDqWU3hktXKPivHgszp9W0ViTSnuXhg17rulGycp2nFF3IumDPh5vUj4u4bQvmBJnxzT8Ck=) 2025-05-26 03:29:17.537642 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC81p7xTDfRF7EdON2yqFVtdUKH/qJvAAUEJQqFgWELeBwy3kbEXkr4uH/oOyGh/wlNalGD1deN9fpu6yCxfv54=) 2025-05-26 03:29:17.538240 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGLxMkVCoyTXCDCAOD/OGMuXAEXuATzCVKG43xDuXkzX) 2025-05-26 03:29:17.539303 | orchestrator | 2025-05-26 03:29:17.541040 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-05-26 03:29:17.542109 | orchestrator | Monday 26 May 2025 03:29:17 +0000 (0:00:01.073) 0:00:13.855 ************ 2025-05-26 03:29:22.907077 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-26 03:29:22.907296 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-26 03:29:22.908088 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-26 03:29:22.909245 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-26 03:29:22.910101 | orchestrator | 
ok: [testbed-manager] => (item=testbed-node-0) 2025-05-26 03:29:22.911377 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-26 03:29:22.912243 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-26 03:29:22.912661 | orchestrator | 2025-05-26 03:29:22.913399 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-05-26 03:29:22.913808 | orchestrator | Monday 26 May 2025 03:29:22 +0000 (0:00:05.370) 0:00:19.225 ************ 2025-05-26 03:29:23.075753 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-26 03:29:23.075858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-26 03:29:23.076595 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-26 03:29:23.077957 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-26 03:29:23.078396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-26 03:29:23.078544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-26 03:29:23.079213 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-26 03:29:23.079546 | orchestrator | 2025-05-26 03:29:23.079811 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-26 03:29:23.080205 | orchestrator | Monday 26 May 2025 03:29:23 +0000 (0:00:00.169) 0:00:19.395 ************ 2025-05-26 03:29:24.106431 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBImhck92IZxgVZMIr3ITcHX2ygnhhBI0X/6dyHdr/ldLPcsjOOScbf3mEAAwBxeR376DReSnvx/T+PGRPL8H1cI=) 2025-05-26 03:29:24.107059 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/27SqKKd/bqIrUqHFOBO+EwTSvSKqCjCXDwuiIJMo8/HX9NPi4Sb5aCtigQDm962qYcMInHM1VcwD3JME4jKxvsDZaGxqLX5RysOifWC2fs+MB23Op5KPboqbhwkJMt2Njihb/359X82fp8GU3cz+ZDNnsrC0lCHGeHDWcqnbpfq5WRZ1gq53YmyG/EszPjOFCb71jLm/5Ec1EJReVwcZuQr9T0tvnL1N63YlGnOPv/hWNw5RM629LEMuIf3ikQY1XpPHjOgbSoUc+om8MO7VDD09MEfBlXo1SpEC3GlrunhK7WKrTbghYJxeH/dwqmuq82iU6KNzpkOgklk2ATgHV+XqzxO8fWzfwLxGCLIaiiHikaxghY0yf8KqwCoGaTTxxRIq8e7wVrglXgk4rS+LPIBzloNq4J8ZSP2VOyKrvkPSB5MLpvw3CKwL2YM95MAw55axzSmJmAT8rphgVOLKmjW7m6c72IPKFKKnHupMqdDTcR3d4SBgCPMgZ32qPFk=) 2025-05-26 03:29:24.107653 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILKdeAZjDc5TyGw1ybPZFqf2ihWW/L/c1srT0yHYhSfq) 2025-05-26 03:29:24.109175 | orchestrator | 2025-05-26 03:29:24.112544 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-26 03:29:24.113772 | orchestrator | Monday 26 May 2025 03:29:24 +0000 (0:00:01.029) 0:00:20.425 ************ 2025-05-26 03:29:25.141675 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI7PtRBQ3rOR0pYVyqgItsbZzudEQvM/V2zHKNQgwoqZvZ46WqmTaMt6IJWxGD2wsnkoe+/otagNthv+djDyIHk=) 2025-05-26 03:29:25.144198 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDD66i8GmEe/11j8i2KKdDxUpzbmaxyZop3l2hPuY7lPSwdeube5FR+whyFaslfUr9RCjprHwcBtSgrMgeMQquE0gT7viNOvIPjW6pzaQ7Vplq8U5h8hxmDcYk8FKro7OCq/kVPLAkCIkdMYZGww87S3zq6OG+qZjDpxadqVvA4lTtR4bdXBaNZDx0cY/HYX+jOfb+qZTcktUpHwDhRE1xNYGEWvPJ3f8o1aRrZDgSqViyMoWGurg4kuQBHCBR22b2PjKXXOqhnuXyQHtrp0Zs1LLGCNuzEdJuxqxuRHEK6RF4/3L4V9yCDNgAWkJ0EpeKrdzdA7onvoo5NZ9b7435T3HmGARvutkC3D+ZSrmTTNRFQhWtBlOkTeCpedwOY2GkUIashIOsV4zxIDxPu7Fth13TB1wHhvwLisbFO7s0kr59icnOz7eV99kVqDh8EnyAvYupUG7lBVS5g22yU+RNn+GqB1+cF/lLerHSqZXdbc8egCo9A+KJU1oGZ/h1WEhs=) 2025-05-26 03:29:25.146112 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGb39QUDcgg5XxNybSew5EMBqfp9qDhIrBt54BsBvOcF) 2025-05-26 03:29:25.146154 | orchestrator | 2025-05-26 03:29:25.146164 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-26 03:29:25.146172 | orchestrator | Monday 26 May 2025 03:29:25 +0000 (0:00:01.034) 0:00:21.460 ************ 2025-05-26 03:29:26.198254 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCe/ja9fIqjNGnxoDjOUGZk3okFrqtqOAyswMZIQ5ps2lZhHpw/mWy1TV6m8aa22RmhYleNc4KYwcmp3NchuKA3v3GP8OTIZgjXF4yySz9/U2qnV+I1EiTiZRO0zXNCsS7rizsC/jpuRFw4SFPsr0Hue82Y5ETogWPI0gd3xtbeuKxdQq1INzgOBYeKhdDsIwDtDVwDnlIz55XBzf1YQkF9k6t9AknIH56tGonLJw8vpMCiotdNZUTRnhRuX0c2JkbN5NmX2WyDr6t0M2gruib44mRjHnSqZBk9YIea3EsrjDTXPbRF4j5YtnC14wo8VKYU6iL7Sg8lqBIFvLn9Oi2JQK174WHBhr92+IfkizvxMsL/K+I2BXi7zb/6K8mW6Zb/EvFlVlSImLR7ZcNZdtjuJDqBRchRZr34tHvooicy/ormMvQxVeyXL7S9dFAuYkBcpLCEebZZK4ww2uXWiD6WTyCRC7N+WHK036xINgADnivjg15RgwNsYupDBkMbqfU=) 2025-05-26 03:29:26.198733 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFr7Y5nvLu9mDjXjZnkXmrnLH/EIi1clf96fQpLevXxEbXKVAF0f0CQAuB0nvHpS/cAoz+V2kjus6Ar5WUwoHN0=) 2025-05-26 03:29:26.199590 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMqnDbhOFKXtvbQcAycS2o4hLwXWSzmNqeBeoqSWPD5G) 2025-05-26 03:29:26.200272 | orchestrator | 2025-05-26 03:29:26.200978 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-26 03:29:26.201640 | orchestrator | Monday 26 May 2025 03:29:26 +0000 (0:00:01.056) 0:00:22.516 ************ 2025-05-26 03:29:27.270263 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKCdcmvdI5zd6VNu3AisTEZ7eaLf1gQl9xbXtvtT9N8ocGWWECRtLzqHASmMM9mITU+DHI1q/OThDEMi+Jl73G8=) 2025-05-26 03:29:27.270689 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDpVhGn+qKdGCgX/LfsodG6EVE0KJ3ER0X9q3J9npySpwQYgErBmVRky4Tjc75uM9AoL4Zj4tDbIYadP7tKnJAmjngZj69IWKnTcPZK3eH+FcZ19joirPvvtwCz57WqH8whBXANidEe6+Km0bm3th1gUgwoSHJlO0Tju1pHnZl/+kagMFG7jDr8Plq3HX9IkHO4aia9VOuqxMO5HoLPcB3MkEJBWRDxnKOWAP7gr1M+2B9JxYPfrNQkd1W1Qh+3r82skubQM6898fZTdWUw0TbDGYqOYN/NSfUlIMC7nX/fwQjEKFRD3GbRNdKAV8/LdHCOh9e+OITPqEvQ5TtE6lqyRebFhojU2vMABSFD4mhzL3PV5G0fvDFkVUcCwDE5rVwSDudyDJukSccpf+RfTtAIn89Y3teAg5hnIHbypBL8ADPoGxg8o3zanAYHv/2wWWR5XrU+es5tr1ywMVCuTDoMjCMSxWfMCofalnFErKFzuQ5FUGvm46cq61MZGUm1vnM=) 2025-05-26 03:29:27.271581 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJuRqo5CEybX07necxelUPg3d9Jo5m4jCJskx/FzE2l2) 2025-05-26 03:29:27.272634 | orchestrator | 2025-05-26 03:29:27.273135 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-26 03:29:27.274299 | orchestrator | Monday 26 May 2025 03:29:27 +0000 (0:00:01.072) 0:00:23.588 ************ 
2025-05-26 03:29:28.327278 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7mq4NuUcKhDYeCo7vsgtPPDRxGIEGIXZqJxhmXlD5Sel2Po2i9bKK9hUjDzXGLbElXfq8M92KEor7/D5I5SLldOcTZ04twQifJD19sCsXHKPHLiKmr/tfqZw7mM8Z2DFfNIlbszWSi2abr+N2/bupFd+ozzsQ0jjBa85m4W/+MQftu8s+ji+Z/HD/UA1T5n0BXMvVk9rye76ftmR55wJKya5Yp5AifxttFRH28WZmM3S0cQH3+cdRxIeaDUg0FXjYtZRgwPErRzI4SzPqFtunhkTBNUrz7lmZtPZH7bbxk74JnLCLOMkMWknbPdBqBmaldLoe1L+guQN4CZI7irCrcMB+4xIDap2jREjw1owobiqCqje4m9gmA5jAd+l4lx692YozHSzuHYGhnXIYFMCawzgMXmm9GqbroCvuSJlg89aVK+ddo1AERQP3UbYhb6X+QuOlHqxHCiY6TRAWeUbDZjxVKd9kd4+PzslJXc8qzt0Xo3oFGY+YdEZXNMXiwpM=) 2025-05-26 03:29:28.327511 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMcFVYax2bvc/AHMQcIdn7GFY17mCezEPRGRgQyTNRYScl04aPxuhydXEcLKxJ5jYt5GsjNRN+iSbdWHIgx3pEI=) 2025-05-26 03:29:28.328717 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBHOn1GkLe8Q/a4GNx0WTWIX3ldL88BaoqEZ62/ZhuK+) 2025-05-26 03:29:28.329410 | orchestrator | 2025-05-26 03:29:28.331202 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-26 03:29:28.331766 | orchestrator | Monday 26 May 2025 03:29:28 +0000 (0:00:01.057) 0:00:24.646 ************ 2025-05-26 03:29:29.383123 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ/JFaA+Gyd1nlGkOS3N1ADdYgFo6UTsQNEZROdyOOxF) 2025-05-26 03:29:29.385122 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCp4O2bbsYF4XAM+MR0x2eCfV6GjMKzOkon9VginN41OWLeeHCkyA1b/3nO5nlMa4kwsL5IEQVRTpoynfwwtPyOpkZdGPGsx9eNW3yA03VR3XrRspUJy8nG1aUQVGjkIb69VX0+dYYjUugsRFpgl34AmNjbTZOrxZZHAMZRSK7bq9N28XkYRaPQ/Sl9gAt/pUWhuMLiaHlaqUt+BiJwoBhN2gmtZOtA+CAFoHZ6CJ1JqPOQAcMkbtJM5A5izLx2oEuMB6KarPF2gk3y5yQjB8Wxbq4oId+a/DxWRo33w4PewvOsEIXxt6vXLaT8h8jnpuE8iLTG1rhng0PN4UeW+cr+JfBLiVNpc6IyqL5Ds5aVo/n7t20G/tf39KRnYHZTXnr+42GWJxbeQlExHYZu8PhHJFId3eVuBzv/sSUBHRcQW+qdJJsqLAPitB2a5Rv5OiXG1jE1dL12i/QFD23o1U8kKiMDX7SCqYSP3rjRMkJ6tXUHKzJmy8tlm4Sae8pNcFs=) 2025-05-26 03:29:29.387532 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOiDS0vSoeikantoL3sQv0qHS/VHY3EzPfRax/A4xDoF1sk1uD3k2bCfB2SD03Yiuf2OEoiSW1lSmAEZFDUhGIo=) 2025-05-26 03:29:29.388009 | orchestrator | 2025-05-26 03:29:29.388747 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-26 03:29:29.388851 | orchestrator | Monday 26 May 2025 03:29:29 +0000 (0:00:01.054) 0:00:25.700 ************ 2025-05-26 03:29:30.482515 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDvUu1DQzd6KRLku+kmNicl2J3iMrQIaMSPZBn9s9hlQW4+eTJ69jsbloY/KwmfpfjVIVyppZnE1/ycCn1knGbwNIXA7OTvpM1iTivh70rDpFEhwxQGNt+qHBbwRPEwaOCz+PGg9LFf7/syyGjgMiTblNv1SSx3W+ADTat5qXMFFepemh81R79TUonRtA5/M4lY14CbxsBk9lJiPGI2PeHMDkP04fHN+65Tt2zrpuELSA13xEcXMVnaVA19D+Bi1ew5Y+FSBPKfUOyTznMQ4zBcVOpdlmw6jFcB09AWK9hnqHOhjR/nl4b5dq2HBYL5GQAGcxuD2BhzmFFtTPuqonmn3qvXcEcqOX02PZ7La8KcWVMRRdO6RL6pp9VOsTfbtmQNLjFQ5UoEqag6X1W9EKEhsM1llwwDk+Nl8QZQRUtIMhdIikvm7/vxR556qDqWU3hktXKPivHgszp9W0ViTSnuXhg17rulGycp2nFF3IumDPh5vUj4u4bQvmBJnxzT8Ck=) 2025-05-26 03:29:30.483229 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC81p7xTDfRF7EdON2yqFVtdUKH/qJvAAUEJQqFgWELeBwy3kbEXkr4uH/oOyGh/wlNalGD1deN9fpu6yCxfv54=) 
2025-05-26 03:29:30.484587 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGLxMkVCoyTXCDCAOD/OGMuXAEXuATzCVKG43xDuXkzX) 2025-05-26 03:29:30.485928 | orchestrator | 2025-05-26 03:29:30.488677 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-05-26 03:29:30.489259 | orchestrator | Monday 26 May 2025 03:29:30 +0000 (0:00:01.100) 0:00:26.800 ************ 2025-05-26 03:29:30.883017 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-26 03:29:30.883568 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-26 03:29:30.885883 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-26 03:29:30.885922 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-26 03:29:30.886683 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-26 03:29:30.887263 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-26 03:29:30.887831 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-26 03:29:30.888505 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:29:30.889263 | orchestrator | 2025-05-26 03:29:30.890099 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-05-26 03:29:30.890726 | orchestrator | Monday 26 May 2025 03:29:30 +0000 (0:00:00.401) 0:00:27.202 ************ 2025-05-26 03:29:30.953286 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:29:30.953838 | orchestrator | 2025-05-26 03:29:30.954164 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-05-26 03:29:30.956200 | orchestrator | Monday 26 May 2025 03:29:30 +0000 (0:00:00.071) 0:00:27.273 ************ 2025-05-26 03:29:31.017036 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:29:31.017851 | orchestrator | 2025-05-26 
03:29:31.019454 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-05-26 03:29:31.019481 | orchestrator | Monday 26 May 2025 03:29:31 +0000 (0:00:00.063) 0:00:27.337 ************ 2025-05-26 03:29:31.537444 | orchestrator | changed: [testbed-manager] 2025-05-26 03:29:31.537651 | orchestrator | 2025-05-26 03:29:31.539290 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-26 03:29:31.539319 | orchestrator | 2025-05-26 03:29:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-26 03:29:31.539333 | orchestrator | 2025-05-26 03:29:31 | INFO  | Please wait and do not abort execution. 2025-05-26 03:29:31.540413 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-26 03:29:31.540982 | orchestrator | 2025-05-26 03:29:31.542123 | orchestrator | 2025-05-26 03:29:31.543032 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-26 03:29:31.544197 | orchestrator | Monday 26 May 2025 03:29:31 +0000 (0:00:00.519) 0:00:27.856 ************ 2025-05-26 03:29:31.544558 | orchestrator | =============================================================================== 2025-05-26 03:29:31.545197 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.97s 2025-05-26 03:29:31.545710 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.37s 2025-05-26 03:29:31.546419 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-05-26 03:29:31.547114 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-05-26 03:29:31.547652 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-05-26 03:29:31.548154 | 
orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-26 03:29:31.548634 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-05-26 03:29:31.549069 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-05-26 03:29:31.549539 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-26 03:29:31.549999 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-26 03:29:31.550774 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-26 03:29:31.551194 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-26 03:29:31.551673 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-26 03:29:31.552389 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-05-26 03:29:31.552909 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-05-26 03:29:31.553481 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-05-26 03:29:31.553962 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.52s 2025-05-26 03:29:31.554654 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.40s 2025-05-26 03:29:31.555247 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s 2025-05-26 03:29:31.555680 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-05-26 03:29:31.962404 | orchestrator | + osism apply squid 2025-05-26 03:29:33.590348 | orchestrator | Registering 
Redlock._acquired_script 2025-05-26 03:29:33.590460 | orchestrator | Registering Redlock._extend_script 2025-05-26 03:29:33.590476 | orchestrator | Registering Redlock._release_script 2025-05-26 03:29:33.656495 | orchestrator | 2025-05-26 03:29:33 | INFO  | Task df7edfc5-9e46-4d4a-8005-b814963005f1 (squid) was prepared for execution. 2025-05-26 03:29:33.656572 | orchestrator | 2025-05-26 03:29:33 | INFO  | It takes a moment until task df7edfc5-9e46-4d4a-8005-b814963005f1 (squid) has been started and output is visible here. 2025-05-26 03:29:37.719020 | orchestrator | 2025-05-26 03:29:37.721204 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-05-26 03:29:37.721241 | orchestrator | 2025-05-26 03:29:37.721300 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-05-26 03:29:37.722277 | orchestrator | Monday 26 May 2025 03:29:37 +0000 (0:00:00.182) 0:00:00.182 ************ 2025-05-26 03:29:37.814105 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-05-26 03:29:37.814912 | orchestrator | 2025-05-26 03:29:37.814941 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-05-26 03:29:37.815576 | orchestrator | Monday 26 May 2025 03:29:37 +0000 (0:00:00.097) 0:00:00.280 ************ 2025-05-26 03:29:39.221789 | orchestrator | ok: [testbed-manager] 2025-05-26 03:29:39.222728 | orchestrator | 2025-05-26 03:29:39.223536 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-05-26 03:29:39.224129 | orchestrator | Monday 26 May 2025 03:29:39 +0000 (0:00:01.406) 0:00:01.686 ************ 2025-05-26 03:29:40.379947 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-05-26 03:29:40.380396 | orchestrator | changed: [testbed-manager] => 
(item=/opt/squid/configuration/conf.d) 2025-05-26 03:29:40.380762 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-05-26 03:29:40.381920 | orchestrator | 2025-05-26 03:29:40.382508 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-05-26 03:29:40.383810 | orchestrator | Monday 26 May 2025 03:29:40 +0000 (0:00:01.158) 0:00:02.845 ************ 2025-05-26 03:29:41.429282 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-05-26 03:29:41.430431 | orchestrator | 2025-05-26 03:29:41.430640 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-05-26 03:29:41.431493 | orchestrator | Monday 26 May 2025 03:29:41 +0000 (0:00:01.051) 0:00:03.896 ************ 2025-05-26 03:29:41.807489 | orchestrator | ok: [testbed-manager] 2025-05-26 03:29:41.808706 | orchestrator | 2025-05-26 03:29:41.808786 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-05-26 03:29:41.809744 | orchestrator | Monday 26 May 2025 03:29:41 +0000 (0:00:00.376) 0:00:04.273 ************ 2025-05-26 03:29:42.744109 | orchestrator | changed: [testbed-manager] 2025-05-26 03:29:42.744217 | orchestrator | 2025-05-26 03:29:42.744747 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-05-26 03:29:42.746085 | orchestrator | Monday 26 May 2025 03:29:42 +0000 (0:00:00.937) 0:00:05.210 ************ 2025-05-26 03:30:14.346537 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-05-26 03:30:14.346658 | orchestrator | ok: [testbed-manager] 2025-05-26 03:30:14.346906 | orchestrator | 2025-05-26 03:30:14.346929 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-05-26 03:30:14.346943 | orchestrator | Monday 26 May 2025 03:30:14 +0000 (0:00:31.597) 0:00:36.808 ************ 2025-05-26 03:30:26.808577 | orchestrator | changed: [testbed-manager] 2025-05-26 03:30:26.808703 | orchestrator | 2025-05-26 03:30:26.808739 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-05-26 03:30:26.808753 | orchestrator | Monday 26 May 2025 03:30:26 +0000 (0:00:12.463) 0:00:49.272 ************ 2025-05-26 03:31:26.895137 | orchestrator | Pausing for 60 seconds 2025-05-26 03:31:26.895264 | orchestrator | changed: [testbed-manager] 2025-05-26 03:31:26.895281 | orchestrator | 2025-05-26 03:31:26.895428 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-05-26 03:31:26.895449 | orchestrator | Monday 26 May 2025 03:31:26 +0000 (0:01:00.084) 0:01:49.357 ************ 2025-05-26 03:31:26.953027 | orchestrator | ok: [testbed-manager] 2025-05-26 03:31:26.953367 | orchestrator | 2025-05-26 03:31:26.954125 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-05-26 03:31:26.954980 | orchestrator | Monday 26 May 2025 03:31:26 +0000 (0:00:00.064) 0:01:49.421 ************ 2025-05-26 03:31:27.558456 | orchestrator | changed: [testbed-manager] 2025-05-26 03:31:27.558571 | orchestrator | 2025-05-26 03:31:27.559085 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-26 03:31:27.559364 | orchestrator | 2025-05-26 03:31:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-05-26 03:31:27.560641 | orchestrator | 2025-05-26 03:31:27 | INFO  | Please wait and do not abort execution. 2025-05-26 03:31:27.560735 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-26 03:31:27.561398 | orchestrator | 2025-05-26 03:31:27.563004 | orchestrator | 2025-05-26 03:31:27.564506 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-26 03:31:27.565154 | orchestrator | Monday 26 May 2025 03:31:27 +0000 (0:00:00.604) 0:01:50.026 ************ 2025-05-26 03:31:27.566466 | orchestrator | =============================================================================== 2025-05-26 03:31:27.567472 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-05-26 03:31:27.568179 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.60s 2025-05-26 03:31:27.569095 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.46s 2025-05-26 03:31:27.569706 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.41s 2025-05-26 03:31:27.570290 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.16s 2025-05-26 03:31:27.571011 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.05s 2025-05-26 03:31:27.571611 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.94s 2025-05-26 03:31:27.572420 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2025-05-26 03:31:27.573017 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2025-05-26 03:31:27.573950 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-05-26 03:31:27.574405 | orchestrator | 
osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-05-26 03:31:28.040886 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-26 03:31:28.040989 | orchestrator | ++ semver latest 9.0.0 2025-05-26 03:31:28.083178 | orchestrator | + [[ -1 -lt 0 ]] 2025-05-26 03:31:28.083261 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-26 03:31:28.083584 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-05-26 03:31:29.725252 | orchestrator | Registering Redlock._acquired_script 2025-05-26 03:31:29.725366 | orchestrator | Registering Redlock._extend_script 2025-05-26 03:31:29.725388 | orchestrator | Registering Redlock._release_script 2025-05-26 03:31:29.783107 | orchestrator | 2025-05-26 03:31:29 | INFO  | Task ec53fa0d-a299-4652-8453-b1f3de244f51 (operator) was prepared for execution. 2025-05-26 03:31:29.783190 | orchestrator | 2025-05-26 03:31:29 | INFO  | It takes a moment until task ec53fa0d-a299-4652-8453-b1f3de244f51 (operator) has been started and output is visible here. 
2025-05-26 03:31:33.586472 | orchestrator | 2025-05-26 03:31:33.586592 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-05-26 03:31:33.588568 | orchestrator | 2025-05-26 03:31:33.588599 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-26 03:31:33.588612 | orchestrator | Monday 26 May 2025 03:31:33 +0000 (0:00:00.145) 0:00:00.145 ************ 2025-05-26 03:31:36.790390 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:31:36.790603 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:31:36.792039 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:31:36.792960 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:31:36.793546 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:31:36.794232 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:31:36.795387 | orchestrator | 2025-05-26 03:31:36.796219 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-05-26 03:31:36.796450 | orchestrator | Monday 26 May 2025 03:31:36 +0000 (0:00:03.205) 0:00:03.350 ************ 2025-05-26 03:31:37.550165 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:31:37.550284 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:31:37.550300 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:31:37.552153 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:31:37.552178 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:31:37.552189 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:31:37.552201 | orchestrator | 2025-05-26 03:31:37.552401 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-05-26 03:31:37.553052 | orchestrator | 2025-05-26 03:31:37.553711 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-26 03:31:37.554230 | orchestrator | Monday 26 May 2025 03:31:37 +0000 (0:00:00.757) 0:00:04.107 ************ 2025-05-26 
03:31:37.636944 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:31:37.655176 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:31:37.678243 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:31:37.724989 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:31:37.726207 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:31:37.727097 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:31:37.730144 | orchestrator | 2025-05-26 03:31:37.730615 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-26 03:31:37.731358 | orchestrator | Monday 26 May 2025 03:31:37 +0000 (0:00:00.177) 0:00:04.284 ************ 2025-05-26 03:31:37.810792 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:31:37.828326 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:31:37.872963 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:31:37.875230 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:31:37.875259 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:31:37.875672 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:31:37.876711 | orchestrator | 2025-05-26 03:31:37.877595 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-26 03:31:37.878496 | orchestrator | Monday 26 May 2025 03:31:37 +0000 (0:00:00.149) 0:00:04.434 ************ 2025-05-26 03:31:38.450216 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:31:38.452792 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:31:38.452901 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:31:38.452916 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:31:38.452927 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:31:38.452938 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:31:38.453001 | orchestrator | 2025-05-26 03:31:38.453309 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-26 03:31:38.453503 | orchestrator | Monday 26 May 2025 
03:31:38 +0000 (0:00:00.577) 0:00:05.012 ************ 2025-05-26 03:31:39.284269 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:31:39.284564 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:31:39.285325 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:31:39.285996 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:31:39.287670 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:31:39.288284 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:31:39.289244 | orchestrator | 2025-05-26 03:31:39.290362 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-26 03:31:39.290784 | orchestrator | Monday 26 May 2025 03:31:39 +0000 (0:00:00.831) 0:00:05.843 ************ 2025-05-26 03:31:40.399737 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-05-26 03:31:40.404217 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-05-26 03:31:40.404267 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-05-26 03:31:40.404657 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-05-26 03:31:40.405283 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-05-26 03:31:40.406211 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-05-26 03:31:40.406634 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-05-26 03:31:40.407675 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-05-26 03:31:40.408259 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-05-26 03:31:40.409603 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-05-26 03:31:40.410974 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-05-26 03:31:40.412072 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-05-26 03:31:40.413062 | orchestrator | 2025-05-26 03:31:40.413959 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-26 03:31:40.414757 | orchestrator | Monday 26 
May 2025 03:31:40 +0000 (0:00:01.116) 0:00:06.959 ************ 2025-05-26 03:31:41.669123 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:31:41.670106 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:31:41.670769 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:31:41.671406 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:31:41.673189 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:31:41.673212 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:31:41.673225 | orchestrator | 2025-05-26 03:31:41.673789 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-26 03:31:41.676086 | orchestrator | Monday 26 May 2025 03:31:41 +0000 (0:00:01.270) 0:00:08.230 ************ 2025-05-26 03:31:42.819464 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-05-26 03:31:42.820263 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-05-26 03:31:42.820296 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-05-26 03:31:42.999943 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-05-26 03:31:43.000644 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-05-26 03:31:43.004270 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-05-26 03:31:43.004295 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-05-26 03:31:43.004307 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-05-26 03:31:43.005016 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-05-26 03:31:43.005087 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-05-26 03:31:43.005832 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-05-26 03:31:43.006742 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-05-26 03:31:43.007186 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-05-26 03:31:43.007644 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-05-26 03:31:43.008245 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-05-26 03:31:43.008781 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-05-26 03:31:43.009422 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-05-26 03:31:43.009784 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-05-26 03:31:43.010168 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-05-26 03:31:43.010845 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-05-26 03:31:43.012473 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-05-26 03:31:43.012655 | 
orchestrator | 2025-05-26 03:31:43.013135 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-26 03:31:43.013342 | orchestrator | Monday 26 May 2025 03:31:42 +0000 (0:00:01.331) 0:00:09.561 ************ 2025-05-26 03:31:43.544463 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:31:43.547971 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:31:43.548183 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:31:43.548532 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:31:43.548766 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:31:43.549242 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:31:43.549560 | orchestrator | 2025-05-26 03:31:43.550669 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-26 03:31:43.550986 | orchestrator | Monday 26 May 2025 03:31:43 +0000 (0:00:00.544) 0:00:10.105 ************ 2025-05-26 03:31:43.631787 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:31:43.652141 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:31:43.679195 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:31:43.744376 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:31:43.744452 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:31:43.745105 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:31:43.745643 | orchestrator | 2025-05-26 03:31:43.746361 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-26 03:31:43.749549 | orchestrator | Monday 26 May 2025 03:31:43 +0000 (0:00:00.197) 0:00:10.303 ************ 2025-05-26 03:31:44.437753 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-26 03:31:44.438562 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-26 03:31:44.440196 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:31:44.441721 | orchestrator | changed: [testbed-node-5] 2025-05-26 
03:31:44.443335 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-26 03:31:44.444437 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:31:44.445201 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-26 03:31:44.446102 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:31:44.446652 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-05-26 03:31:44.447438 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-26 03:31:44.449521 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:31:44.449969 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:31:44.450474 | orchestrator |
2025-05-26 03:31:44.451426 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-05-26 03:31:44.451663 | orchestrator | Monday 26 May 2025 03:31:44 +0000 (0:00:00.694) 0:00:10.998 ************
2025-05-26 03:31:44.500779 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:31:44.522235 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:31:44.542181 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:31:44.577986 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:31:44.578168 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:31:44.578718 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:31:44.579317 | orchestrator |
2025-05-26 03:31:44.579907 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-05-26 03:31:44.581401 | orchestrator | Monday 26 May 2025 03:31:44 +0000 (0:00:00.142) 0:00:11.140 ************
2025-05-26 03:31:44.633406 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:31:44.657873 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:31:44.679546 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:31:44.702170 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:31:44.738965 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:31:44.739399 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:31:44.740558 | orchestrator |
2025-05-26 03:31:44.741462 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-05-26 03:31:44.742105 | orchestrator | Monday 26 May 2025 03:31:44 +0000 (0:00:00.160) 0:00:11.301 ************
2025-05-26 03:31:44.823978 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:31:44.849311 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:31:44.866963 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:31:44.900390 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:31:44.900530 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:31:44.900975 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:31:44.902165 | orchestrator |
2025-05-26 03:31:44.902189 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-05-26 03:31:44.902476 | orchestrator | Monday 26 May 2025 03:31:44 +0000 (0:00:00.161) 0:00:11.462 ************
2025-05-26 03:31:45.541230 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:31:45.543026 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:31:45.544022 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:31:45.545155 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:31:45.545925 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:31:45.546710 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:31:45.547440 | orchestrator |
2025-05-26 03:31:45.548032 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-05-26 03:31:45.548700 | orchestrator | Monday 26 May 2025 03:31:45 +0000 (0:00:00.638) 0:00:12.101 ************
2025-05-26 03:31:45.631875 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:31:45.659609 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:31:45.761793 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:31:45.762347 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:31:45.763516 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:31:45.764330 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:31:45.765030 | orchestrator |
2025-05-26 03:31:45.766358 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 03:31:45.766402 | orchestrator | 2025-05-26 03:31:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-26 03:31:45.766974 | orchestrator | 2025-05-26 03:31:45 | INFO  | Please wait and do not abort execution.
2025-05-26 03:31:45.767997 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-26 03:31:45.768926 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-26 03:31:45.769688 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-26 03:31:45.771732 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-26 03:31:45.772405 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-26 03:31:45.773074 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-26 03:31:45.773629 | orchestrator |
2025-05-26 03:31:45.774695 | orchestrator |
2025-05-26 03:31:45.775632 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 03:31:45.776301 | orchestrator | Monday 26 May 2025 03:31:45 +0000 (0:00:00.222) 0:00:12.324 ************
2025-05-26 03:31:45.776989 | orchestrator | ===============================================================================
2025-05-26 03:31:45.777630 | orchestrator | Gathering Facts --------------------------------------------------------- 3.21s
2025-05-26 03:31:45.778359 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.33s
2025-05-26 03:31:45.779020 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.27s
2025-05-26 03:31:45.779431 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.12s
2025-05-26 03:31:45.779854 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.83s
2025-05-26 03:31:45.780575 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s
2025-05-26 03:31:45.780677 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s
2025-05-26 03:31:45.781076 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.64s
2025-05-26 03:31:45.781400 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.58s
2025-05-26 03:31:45.781778 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.54s
2025-05-26 03:31:45.782087 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s
2025-05-26 03:31:45.782412 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s
2025-05-26 03:31:45.782799 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2025-05-26 03:31:45.783301 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2025-05-26 03:31:45.783740 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s
2025-05-26 03:31:45.783955 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s
2025-05-26 03:31:45.784244 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
2025-05-26 03:31:46.257762 | orchestrator | + osism apply --environment custom facts
2025-05-26 03:31:47.892198 | orchestrator | 2025-05-26 03:31:47 | INFO  | Trying to run play facts in environment custom
2025-05-26 03:31:47.896762 | orchestrator | Registering Redlock._acquired_script
2025-05-26 03:31:47.896829 | orchestrator | Registering Redlock._extend_script
2025-05-26 03:31:47.896843 | orchestrator | Registering Redlock._release_script
2025-05-26 03:31:47.956775 | orchestrator | 2025-05-26 03:31:47 | INFO  | Task 56ace8f3-ba78-4187-afef-81e49ccea4fd (facts) was prepared for execution.
2025-05-26 03:31:47.956891 | orchestrator | 2025-05-26 03:31:47 | INFO  | It takes a moment until task 56ace8f3-ba78-4187-afef-81e49ccea4fd (facts) has been started and output is visible here.
2025-05-26 03:31:51.795622 | orchestrator |
2025-05-26 03:31:51.795735 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-05-26 03:31:51.796037 | orchestrator |
2025-05-26 03:31:51.797083 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-26 03:31:51.798416 | orchestrator | Monday 26 May 2025 03:31:51 +0000 (0:00:00.089) 0:00:00.089 ************
2025-05-26 03:31:53.194536 | orchestrator | ok: [testbed-manager]
2025-05-26 03:31:53.196050 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:31:53.199080 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:31:53.199112 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:31:53.201122 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:31:53.201578 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:31:53.202396 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:31:53.204099 | orchestrator |
2025-05-26 03:31:53.205407 | orchestrator | TASK [Copy fact file] **********************************************************
2025-05-26 03:31:53.206134 | orchestrator | Monday 26 May 2025 03:31:53 +0000 (0:00:01.398) 0:00:01.487 ************
2025-05-26 03:31:54.415079 | orchestrator | ok: [testbed-manager]
2025-05-26 03:31:54.415255 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:31:54.415844 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:31:54.417147 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:31:54.418851 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:31:54.419435 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:31:54.419752 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:31:54.420169 | orchestrator |
2025-05-26 03:31:54.420631 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-05-26 03:31:54.421015 | orchestrator |
2025-05-26 03:31:54.421395 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-26 03:31:54.421749 | orchestrator | Monday 26 May 2025 03:31:54 +0000 (0:00:01.221) 0:00:02.708 ************
2025-05-26 03:31:54.552867 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:31:54.553073 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:31:54.553759 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:31:54.554593 | orchestrator |
2025-05-26 03:31:54.555086 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-26 03:31:54.555508 | orchestrator | Monday 26 May 2025 03:31:54 +0000 (0:00:00.140) 0:00:02.849 ************
2025-05-26 03:31:54.745717 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:31:54.746392 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:31:54.746997 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:31:54.748060 | orchestrator |
2025-05-26 03:31:54.749010 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-05-26 03:31:54.749875 | orchestrator | Monday 26 May 2025 03:31:54 +0000 (0:00:00.192) 0:00:03.041 ************
2025-05-26 03:31:54.931262 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:31:54.932030 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:31:54.932501 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:31:54.932974 | orchestrator |
2025-05-26 03:31:54.933433 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-05-26 03:31:54.933912 | orchestrator | Monday 26 May 2025 03:31:54 +0000 (0:00:00.186) 0:00:03.227 ************
2025-05-26 03:31:55.065383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-26 03:31:55.065962 | orchestrator |
2025-05-26 03:31:55.067065 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-05-26 03:31:55.069195 | orchestrator | Monday 26 May 2025 03:31:55 +0000 (0:00:00.132) 0:00:03.360 ************
2025-05-26 03:31:55.488900 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:31:55.490100 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:31:55.490380 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:31:55.491492 | orchestrator |
2025-05-26 03:31:55.492306 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-05-26 03:31:55.493189 | orchestrator | Monday 26 May 2025 03:31:55 +0000 (0:00:00.422) 0:00:03.783 ************
2025-05-26 03:31:55.610352 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:31:55.614112 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:31:55.614141 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:31:55.614154 | orchestrator |
2025-05-26 03:31:55.614166 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-05-26 03:31:55.614180 | orchestrator | Monday 26 May 2025 03:31:55 +0000 (0:00:00.122) 0:00:03.905 ************
2025-05-26 03:31:56.704697 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:31:56.705004 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:31:56.706014 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:31:56.706746 | orchestrator |
2025-05-26 03:31:56.707683 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-05-26 03:31:56.708503 | orchestrator | Monday 26 May 2025 03:31:56 +0000 (0:00:01.091) 0:00:04.997 ************
2025-05-26 03:31:57.155751 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:31:57.155942 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:31:57.156025 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:31:57.157499 | orchestrator |
2025-05-26 03:31:57.158137 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-05-26 03:31:57.159126 | orchestrator | Monday 26 May 2025 03:31:57 +0000 (0:00:00.452) 0:00:05.450 ************
2025-05-26 03:31:58.214553 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:31:58.215625 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:31:58.215721 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:31:58.216087 | orchestrator |
2025-05-26 03:31:58.216622 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-05-26 03:31:58.217353 | orchestrator | Monday 26 May 2025 03:31:58 +0000 (0:00:01.057) 0:00:06.508 ************
2025-05-26 03:32:10.735104 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:32:10.735208 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:32:10.735223 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:32:10.735299 | orchestrator |
2025-05-26 03:32:10.736092 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-05-26 03:32:10.736984 | orchestrator | Monday 26 May 2025 03:32:10 +0000 (0:00:12.517) 0:00:19.025 ************
2025-05-26 03:32:10.789213 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:32:10.824854 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:32:10.825285 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:32:10.826385 | orchestrator |
2025-05-26 03:32:10.827036 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-05-26 03:32:10.827718 | orchestrator | Monday 26 May 2025 03:32:10 +0000 (0:00:00.095) 0:00:19.120 ************
2025-05-26 03:32:17.686966 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:32:17.687089 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:32:17.687105 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:32:17.688709 | orchestrator |
2025-05-26 03:32:17.690132 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-26 03:32:17.691008 | orchestrator | Monday 26 May 2025 03:32:17 +0000 (0:00:06.857) 0:00:25.978 ************
2025-05-26 03:32:18.133630 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:32:18.135266 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:32:18.136580 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:32:18.137740 | orchestrator |
2025-05-26 03:32:18.139043 | orchestrator | TASK [Copy fact files] *********************************************************
2025-05-26 03:32:18.140588 | orchestrator | Monday 26 May 2025 03:32:18 +0000 (0:00:00.449) 0:00:26.428 ************
2025-05-26 03:32:21.514556 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-05-26 03:32:21.515554 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-05-26 03:32:21.516242 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-05-26 03:32:21.517361 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-05-26 03:32:21.518300 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-05-26 03:32:21.520355 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-05-26 03:32:21.521517 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-05-26 03:32:21.522724 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-05-26 03:32:21.522908 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-05-26 03:32:21.523999 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-05-26 03:32:21.524822 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-05-26 03:32:21.525482 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-05-26 03:32:21.526238 | orchestrator |
2025-05-26 03:32:21.526780 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-26 03:32:21.527404 | orchestrator | Monday 26 May 2025 03:32:21 +0000 (0:00:03.379) 0:00:29.808 ************
2025-05-26 03:32:22.623993 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:32:22.624620 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:32:22.625569 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:32:22.626623 | orchestrator |
2025-05-26 03:32:22.627928 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-26 03:32:22.628680 | orchestrator |
2025-05-26 03:32:22.629575 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-26 03:32:22.630416 | orchestrator | Monday 26 May 2025 03:32:22 +0000 (0:00:01.109) 0:00:30.917 ************
2025-05-26 03:32:26.289312 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:32:26.289431 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:32:26.289446 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:32:26.289459 | orchestrator | ok: [testbed-manager]
2025-05-26 03:32:26.289470 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:32:26.289482 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:32:26.289493 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:32:26.289505 | orchestrator |
2025-05-26 03:32:26.289518 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 03:32:26.289619 | orchestrator | 2025-05-26 03:32:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-26 03:32:26.289636 | orchestrator | 2025-05-26 03:32:26 | INFO  | Please wait and do not abort execution.
2025-05-26 03:32:26.290300 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 03:32:26.291460 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 03:32:26.291482 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 03:32:26.291528 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 03:32:26.292151 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:32:26.292669 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:32:26.293212 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:32:26.293333 | orchestrator |
2025-05-26 03:32:26.293947 | orchestrator |
2025-05-26 03:32:26.294400 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 03:32:26.294525 | orchestrator | Monday 26 May 2025 03:32:26 +0000 (0:00:03.665) 0:00:34.583 ************
2025-05-26 03:32:26.295055 | orchestrator | ===============================================================================
2025-05-26 03:32:26.295728 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.52s
2025-05-26 03:32:26.295934 | orchestrator | Install required packages (Debian) -------------------------------------- 6.86s
2025-05-26 03:32:26.297097 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.67s
2025-05-26 03:32:26.297364 | orchestrator | Copy fact files --------------------------------------------------------- 3.38s
2025-05-26 03:32:26.297741 | orchestrator | Create custom facts directory ------------------------------------------- 1.40s
2025-05-26 03:32:26.298489 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2025-05-26 03:32:26.298810 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.11s
2025-05-26 03:32:26.299533 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.09s
2025-05-26 03:32:26.299834 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s
2025-05-26 03:32:26.300662 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s
2025-05-26 03:32:26.301327 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s
2025-05-26 03:32:26.302097 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s
2025-05-26 03:32:26.302460 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2025-05-26 03:32:26.303215 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s
2025-05-26 03:32:26.303674 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.14s
2025-05-26 03:32:26.304192 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2025-05-26 03:32:26.304622 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s
2025-05-26 03:32:26.305244 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2025-05-26 03:32:26.767106 | orchestrator | + osism apply bootstrap
2025-05-26 03:32:28.436329 | orchestrator | Registering Redlock._acquired_script
2025-05-26 03:32:28.437143 | orchestrator | Registering Redlock._extend_script
2025-05-26 03:32:28.437176 | orchestrator | Registering Redlock._release_script
2025-05-26 03:32:28.509032 | orchestrator | 2025-05-26 03:32:28 | INFO  | Task cf8b8e18-dd5a-4c90-a66e-1d06a2bcd1c5 (bootstrap) was prepared for execution.
2025-05-26 03:32:28.509135 | orchestrator | 2025-05-26 03:32:28 | INFO  | It takes a moment until task cf8b8e18-dd5a-4c90-a66e-1d06a2bcd1c5 (bootstrap) has been started and output is visible here.
2025-05-26 03:32:32.660941 | orchestrator |
2025-05-26 03:32:32.662897 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-05-26 03:32:32.663237 | orchestrator |
2025-05-26 03:32:32.664294 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-05-26 03:32:32.664809 | orchestrator | Monday 26 May 2025 03:32:32 +0000 (0:00:00.166) 0:00:00.166 ************
2025-05-26 03:32:32.744094 | orchestrator | ok: [testbed-manager]
2025-05-26 03:32:32.766275 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:32:32.798150 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:32:32.832650 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:32:32.917620 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:32:32.918416 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:32:32.919347 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:32:32.922510 | orchestrator |
2025-05-26 03:32:32.923318 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-26 03:32:32.923919 | orchestrator |
2025-05-26 03:32:32.924669 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-26 03:32:32.925354 | orchestrator | Monday 26 May 2025 03:32:32 +0000 (0:00:00.262) 0:00:00.429 ************
2025-05-26 03:32:36.825209 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:32:36.825632 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:32:36.826497 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:32:36.827771 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:32:36.828891 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:32:36.829882 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:32:36.830867 | orchestrator | ok: [testbed-manager]
2025-05-26 03:32:36.831528 | orchestrator |
2025-05-26 03:32:36.831943 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-05-26 03:32:36.832648 | orchestrator |
2025-05-26 03:32:36.834085 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-26 03:32:36.835147 | orchestrator | Monday 26 May 2025 03:32:36 +0000 (0:00:03.905) 0:00:04.334 ************
2025-05-26 03:32:36.893289 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-05-26 03:32:36.927903 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-05-26 03:32:36.927954 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-05-26 03:32:36.928280 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-05-26 03:32:36.958561 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-26 03:32:36.958708 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-05-26 03:32:36.958854 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-26 03:32:36.982319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-26 03:32:36.982446 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-05-26 03:32:36.983057 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-05-26 03:32:36.983384 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-26 03:32:36.983710 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-05-26 03:32:37.283293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-26 03:32:37.283491 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-26 03:32:37.284396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-26 03:32:37.284933 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-26 03:32:37.285588 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:32:37.289086 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-05-26 03:32:37.289111 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-26 03:32:37.289123 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-05-26 03:32:37.289714 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-26 03:32:37.290394 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-05-26 03:32:37.290925 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-05-26 03:32:37.292160 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:32:37.293557 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-26 03:32:37.294439 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-26 03:32:37.295212 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-26 03:32:37.296007 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-26 03:32:37.296767 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-05-26 03:32:37.297575 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-26 03:32:37.298060 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-26 03:32:37.299255 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-26 03:32:37.299623 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-26 03:32:37.300316 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-26 03:32:37.300902 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-26 03:32:37.301672 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-26 03:32:37.302114 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-26 03:32:37.302615 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-26 03:32:37.303760 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-26 03:32:37.304091 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-26 03:32:37.304983 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-26 03:32:37.305493 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-26 03:32:37.306252 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-26 03:32:37.307666 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-26 03:32:37.308164 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:32:37.309128 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-26 03:32:37.313443 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-26 03:32:37.314361 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-26 03:32:37.314645 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:32:37.315513 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:32:37.316264 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-26 03:32:37.316963 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:32:37.317560 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-26 03:32:37.318265 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-26 03:32:37.319023 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-26 03:32:37.319460 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:32:37.319915 | orchestrator |
2025-05-26 03:32:37.321113 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-05-26 03:32:37.322269 | orchestrator |
2025-05-26 03:32:37.322539 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-05-26 03:32:37.323172 | orchestrator | Monday 26 May 2025 03:32:37 +0000 (0:00:00.460) 0:00:04.794 ************
2025-05-26 03:32:38.498661 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:32:38.498776 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:32:38.498913 | orchestrator | ok: [testbed-manager]
2025-05-26 03:32:38.499007 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:32:38.500836 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:32:38.501599 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:32:38.501896 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:32:38.502662 | orchestrator |
2025-05-26 03:32:38.503295 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-05-26 03:32:38.503876 | orchestrator | Monday 26 May 2025 03:32:38 +0000 (0:00:01.212) 0:00:06.007 ************
2025-05-26 03:32:39.666671 | orchestrator | ok: [testbed-manager]
2025-05-26 03:32:39.667000 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:32:39.668304 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:32:39.669093 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:32:39.669939 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:32:39.670918 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:32:39.671835 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:32:39.672612 | orchestrator |
2025-05-26 03:32:39.673329 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-05-26 03:32:39.673833 | orchestrator | Monday 26 May 2025 03:32:39 +0000 (0:00:01.167) 0:00:07.174 ************
2025-05-26 03:32:39.966159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 03:32:39.967317 | orchestrator |
2025-05-26 03:32:39.968259 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-05-26 03:32:39.969122 | orchestrator | Monday 26 May 2025 03:32:39 +0000 (0:00:00.300) 0:00:07.475 ************
2025-05-26 03:32:41.952866 | orchestrator | changed: [testbed-manager]
2025-05-26 03:32:41.958152 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:32:41.961608 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:32:41.962812 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:32:41.963560 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:32:41.964697 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:32:41.965573 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:32:41.966387 | orchestrator |
2025-05-26 03:32:41.967176 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-05-26 03:32:41.968150 | orchestrator | Monday 26 May 2025 03:32:41 +0000 (0:00:01.984) 0:00:09.460 ************
2025-05-26 03:32:42.022478 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:32:42.203367 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 03:32:42.203973 | orchestrator |
2025-05-26 03:32:42.207324 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-05-26 03:32:42.207352 | orchestrator | Monday 26 May 2025 03:32:42 +0000 (0:00:00.251) 0:00:09.712 ************
2025-05-26 03:32:43.211616 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:32:43.211763 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:32:43.211865 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:32:43.212325 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:32:43.213347 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:32:43.215335 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:32:43.215668 | orchestrator |
2025-05-26 03:32:43.216554 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-05-26 03:32:43.218213 | orchestrator | Monday 26 May 2025 03:32:43 +0000 (0:00:01.003) 0:00:10.716 ************
2025-05-26 03:32:43.303208 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:32:43.764712 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:32:43.765008 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:32:43.765639 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:32:43.766223 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:32:43.766502 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:32:43.766980 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:32:43.767617 | orchestrator |
2025-05-26 03:32:43.768264 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-05-26 03:32:43.768569 | orchestrator | Monday 26 May 2025 03:32:43 +0000 (0:00:00.557) 0:00:11.274 ************
2025-05-26 03:32:43.864090 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:32:43.890736 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:32:43.921211 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:32:44.192126 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:32:44.194279 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:32:44.194310 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:32:44.194628 | orchestrator | ok: [testbed-manager]
2025-05-26 03:32:44.195345 | orchestrator |
2025-05-26 03:32:44.196092 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-05-26 03:32:44.196968 | orchestrator | Monday 26 May 2025 03:32:44 +0000 (0:00:00.425) 0:00:11.700 ************
2025-05-26 03:32:44.293428 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:32:44.323202 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:32:44.343665 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:32:44.425834 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:32:44.427125 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:32:44.427729 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:32:44.428967 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:32:44.429870 | orchestrator |
2025-05-26 03:32:44.430485 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-05-26 03:32:44.431540 | orchestrator | Monday 26 May 2025 03:32:44 +0000 (0:00:00.236) 0:00:11.936 ************
2025-05-26 03:32:44.713863 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 03:32:44.714154 | orchestrator |
2025-05-26 03:32:44.717192 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-05-26 03:32:44.717221 | orchestrator | Monday 26 May 2025 03:32:44 +0000 (0:00:00.286) 0:00:12.223 ************
2025-05-26 03:32:45.028713 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 03:32:45.029579 | orchestrator |
2025-05-26 03:32:45.031140 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-05-26 03:32:45.032083 | orchestrator | Monday 26 May 2025 03:32:45 +0000 (0:00:00.314) 0:00:12.537 ************
2025-05-26 03:32:46.351026 | orchestrator | ok: [testbed-manager]
2025-05-26 03:32:46.351140 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:32:46.351214 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:32:46.351362 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:32:46.352369 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:32:46.353249 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:32:46.353707 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:32:46.354198 | orchestrator |
2025-05-26 03:32:46.355011 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-05-26 03:32:46.355375 | orchestrator | Monday 26 May 2025 03:32:46 +0000 (0:00:01.321) 0:00:13.858 ************
2025-05-26 03:32:46.422477 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:32:46.461233 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:32:46.480718 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:32:46.512143 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:32:46.559241 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:32:46.559958 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:32:46.560378 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:32:46.560927 | orchestrator |
2025-05-26 03:32:46.564554 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-05-26 03:32:46.565059 | orchestrator | Monday 26 May 2025 03:32:46
+0000 (0:00:00.211) 0:00:14.070 ************ 2025-05-26 03:32:47.108677 | orchestrator | ok: [testbed-manager] 2025-05-26 03:32:47.109659 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:32:47.109721 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:32:47.112418 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:32:47.113160 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:32:47.115234 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:32:47.115654 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:32:47.116257 | orchestrator | 2025-05-26 03:32:47.116499 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-26 03:32:47.116969 | orchestrator | Monday 26 May 2025 03:32:47 +0000 (0:00:00.542) 0:00:14.613 ************ 2025-05-26 03:32:47.192176 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:32:47.220457 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:32:47.249568 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:32:47.273136 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:32:47.357539 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:32:47.358467 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:32:47.359116 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:32:47.362472 | orchestrator | 2025-05-26 03:32:47.363443 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-26 03:32:47.363982 | orchestrator | Monday 26 May 2025 03:32:47 +0000 (0:00:00.254) 0:00:14.868 ************ 2025-05-26 03:32:47.973046 | orchestrator | ok: [testbed-manager] 2025-05-26 03:32:47.973206 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:32:47.975373 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:32:47.975488 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:32:47.975514 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:32:47.976312 | orchestrator | changed: 
[testbed-node-1] 2025-05-26 03:32:47.977110 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:32:47.977567 | orchestrator | 2025-05-26 03:32:47.978425 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-26 03:32:47.979047 | orchestrator | Monday 26 May 2025 03:32:47 +0000 (0:00:00.609) 0:00:15.477 ************ 2025-05-26 03:32:49.194454 | orchestrator | ok: [testbed-manager] 2025-05-26 03:32:49.195060 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:32:49.195456 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:32:49.195983 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:32:49.196828 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:32:49.198250 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:32:49.199918 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:32:49.200061 | orchestrator | 2025-05-26 03:32:49.202071 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-26 03:32:49.202563 | orchestrator | Monday 26 May 2025 03:32:49 +0000 (0:00:01.225) 0:00:16.703 ************ 2025-05-26 03:32:50.225658 | orchestrator | ok: [testbed-manager] 2025-05-26 03:32:50.225767 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:32:50.225837 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:32:50.226176 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:32:50.226408 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:32:50.226933 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:32:50.227660 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:32:50.228017 | orchestrator | 2025-05-26 03:32:50.228909 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-26 03:32:50.229313 | orchestrator | Monday 26 May 2025 03:32:50 +0000 (0:00:01.032) 0:00:17.735 ************ 2025-05-26 03:32:50.530777 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 03:32:50.531356 | orchestrator | 2025-05-26 03:32:50.532428 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-26 03:32:50.532985 | orchestrator | Monday 26 May 2025 03:32:50 +0000 (0:00:00.305) 0:00:18.041 ************ 2025-05-26 03:32:50.603243 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:32:51.790874 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:32:51.791620 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:32:51.792995 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:32:51.793882 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:32:51.795540 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:32:51.796126 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:32:51.796961 | orchestrator | 2025-05-26 03:32:51.798161 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-26 03:32:51.798191 | orchestrator | Monday 26 May 2025 03:32:51 +0000 (0:00:01.255) 0:00:19.296 ************ 2025-05-26 03:32:51.865292 | orchestrator | ok: [testbed-manager] 2025-05-26 03:32:51.891362 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:32:51.917668 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:32:51.951646 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:32:52.029353 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:32:52.030413 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:32:52.031219 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:32:52.032113 | orchestrator | 2025-05-26 03:32:52.032912 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-26 03:32:52.033656 | orchestrator | Monday 26 May 2025 03:32:52 
+0000 (0:00:00.242) 0:00:19.539 ************ 2025-05-26 03:32:52.105747 | orchestrator | ok: [testbed-manager] 2025-05-26 03:32:52.133270 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:32:52.164918 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:32:52.188197 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:32:52.272044 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:32:52.273221 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:32:52.275102 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:32:52.275141 | orchestrator | 2025-05-26 03:32:52.275899 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-26 03:32:52.276685 | orchestrator | Monday 26 May 2025 03:32:52 +0000 (0:00:00.243) 0:00:19.782 ************ 2025-05-26 03:32:52.351686 | orchestrator | ok: [testbed-manager] 2025-05-26 03:32:52.374993 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:32:52.407803 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:32:52.431175 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:32:52.497561 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:32:52.498145 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:32:52.498721 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:32:52.499209 | orchestrator | 2025-05-26 03:32:52.500342 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-26 03:32:52.501729 | orchestrator | Monday 26 May 2025 03:32:52 +0000 (0:00:00.225) 0:00:20.008 ************ 2025-05-26 03:32:52.794933 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 03:32:52.795403 | orchestrator | 2025-05-26 03:32:52.796434 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-26 03:32:52.797355 | 
orchestrator | Monday 26 May 2025 03:32:52 +0000 (0:00:00.295) 0:00:20.304 ************ 2025-05-26 03:32:53.324152 | orchestrator | ok: [testbed-manager] 2025-05-26 03:32:53.325057 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:32:53.325813 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:32:53.326614 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:32:53.328563 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:32:53.331284 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:32:53.331993 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:32:53.333235 | orchestrator | 2025-05-26 03:32:53.333916 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-26 03:32:53.334634 | orchestrator | Monday 26 May 2025 03:32:53 +0000 (0:00:00.528) 0:00:20.833 ************ 2025-05-26 03:32:53.421313 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:32:53.445841 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:32:53.474440 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:32:53.540135 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:32:53.542607 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:32:53.548199 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:32:53.548928 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:32:53.550125 | orchestrator | 2025-05-26 03:32:53.550933 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-26 03:32:53.551578 | orchestrator | Monday 26 May 2025 03:32:53 +0000 (0:00:00.215) 0:00:21.049 ************ 2025-05-26 03:32:54.589095 | orchestrator | ok: [testbed-manager] 2025-05-26 03:32:54.589208 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:32:54.589223 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:32:54.589304 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:32:54.590200 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:32:54.591677 | orchestrator | changed: 
[testbed-node-1] 2025-05-26 03:32:54.592311 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:32:54.594678 | orchestrator | 2025-05-26 03:32:54.595556 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-26 03:32:54.597545 | orchestrator | Monday 26 May 2025 03:32:54 +0000 (0:00:01.044) 0:00:22.094 ************ 2025-05-26 03:32:55.192203 | orchestrator | ok: [testbed-manager] 2025-05-26 03:32:55.193316 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:32:55.194489 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:32:55.194974 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:32:55.197990 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:32:55.198202 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:32:55.199182 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:32:55.199904 | orchestrator | 2025-05-26 03:32:55.201576 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-26 03:32:55.202482 | orchestrator | Monday 26 May 2025 03:32:55 +0000 (0:00:00.607) 0:00:22.702 ************ 2025-05-26 03:32:56.323455 | orchestrator | ok: [testbed-manager] 2025-05-26 03:32:56.325221 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:32:56.325253 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:32:56.325439 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:32:56.326262 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:32:56.327075 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:32:56.327891 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:32:56.328538 | orchestrator | 2025-05-26 03:32:56.329361 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-26 03:32:56.330476 | orchestrator | Monday 26 May 2025 03:32:56 +0000 (0:00:01.129) 0:00:23.831 ************ 2025-05-26 03:33:10.040245 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:33:10.040370 | orchestrator | ok: 
[testbed-node-4] 2025-05-26 03:33:10.040451 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:33:10.040468 | orchestrator | changed: [testbed-manager] 2025-05-26 03:33:10.040480 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:33:10.041539 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:33:10.043423 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:33:10.044005 | orchestrator | 2025-05-26 03:33:10.045337 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-26 03:33:10.046154 | orchestrator | Monday 26 May 2025 03:33:10 +0000 (0:00:13.714) 0:00:37.545 ************ 2025-05-26 03:33:10.111718 | orchestrator | ok: [testbed-manager] 2025-05-26 03:33:10.139606 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:33:10.164825 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:33:10.190654 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:33:10.250713 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:33:10.251181 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:33:10.251209 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:33:10.251620 | orchestrator | 2025-05-26 03:33:10.255310 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-26 03:33:10.255464 | orchestrator | Monday 26 May 2025 03:33:10 +0000 (0:00:00.215) 0:00:37.761 ************ 2025-05-26 03:33:10.324851 | orchestrator | ok: [testbed-manager] 2025-05-26 03:33:10.350637 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:33:10.377068 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:33:10.402439 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:33:10.481985 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:33:10.482423 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:33:10.483147 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:33:10.483585 | orchestrator | 2025-05-26 03:33:10.484036 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to 
default value] *** 2025-05-26 03:33:10.485466 | orchestrator | Monday 26 May 2025 03:33:10 +0000 (0:00:00.230) 0:00:37.992 ************ 2025-05-26 03:33:10.560146 | orchestrator | ok: [testbed-manager] 2025-05-26 03:33:10.594648 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:33:10.619987 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:33:10.655687 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:33:10.724084 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:33:10.725716 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:33:10.729455 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:33:10.730547 | orchestrator | 2025-05-26 03:33:10.731741 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-26 03:33:10.732563 | orchestrator | Monday 26 May 2025 03:33:10 +0000 (0:00:00.241) 0:00:38.233 ************ 2025-05-26 03:33:11.024859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 03:33:11.028257 | orchestrator | 2025-05-26 03:33:11.028292 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-26 03:33:11.028306 | orchestrator | Monday 26 May 2025 03:33:11 +0000 (0:00:00.300) 0:00:38.533 ************ 2025-05-26 03:33:12.653749 | orchestrator | ok: [testbed-manager] 2025-05-26 03:33:12.654512 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:33:12.655739 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:33:12.656070 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:33:12.656882 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:33:12.657909 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:33:12.658530 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:33:12.658945 | orchestrator | 2025-05-26 03:33:12.659259 | orchestrator | TASK 
[osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-26 03:33:12.659591 | orchestrator | Monday 26 May 2025 03:33:12 +0000 (0:00:01.624) 0:00:40.158 ************ 2025-05-26 03:33:13.722711 | orchestrator | changed: [testbed-manager] 2025-05-26 03:33:13.723810 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:33:13.724111 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:33:13.725963 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:33:13.727107 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:33:13.727848 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:33:13.728655 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:33:13.729669 | orchestrator | 2025-05-26 03:33:13.730516 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-26 03:33:13.731483 | orchestrator | Monday 26 May 2025 03:33:13 +0000 (0:00:01.072) 0:00:41.231 ************ 2025-05-26 03:33:14.558614 | orchestrator | ok: [testbed-manager] 2025-05-26 03:33:14.558723 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:33:14.558913 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:33:14.559198 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:33:14.559599 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:33:14.560065 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:33:14.561710 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:33:14.562234 | orchestrator | 2025-05-26 03:33:14.563505 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-26 03:33:14.564232 | orchestrator | Monday 26 May 2025 03:33:14 +0000 (0:00:00.830) 0:00:42.061 ************ 2025-05-26 03:33:14.855214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 
03:33:14.856316 | orchestrator | 2025-05-26 03:33:14.857303 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-26 03:33:14.860055 | orchestrator | Monday 26 May 2025 03:33:14 +0000 (0:00:00.302) 0:00:42.364 ************ 2025-05-26 03:33:15.880826 | orchestrator | changed: [testbed-manager] 2025-05-26 03:33:15.881974 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:33:15.883012 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:33:15.884122 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:33:15.884993 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:33:15.886130 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:33:15.887181 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:33:15.889351 | orchestrator | 2025-05-26 03:33:15.889376 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-26 03:33:15.889390 | orchestrator | Monday 26 May 2025 03:33:15 +0000 (0:00:01.023) 0:00:43.387 ************ 2025-05-26 03:33:15.956747 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:33:15.976426 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:33:16.005274 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:33:16.023881 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:33:16.189468 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:33:16.189959 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:33:16.191393 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:33:16.192189 | orchestrator | 2025-05-26 03:33:16.192968 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-26 03:33:16.193515 | orchestrator | Monday 26 May 2025 03:33:16 +0000 (0:00:00.311) 0:00:43.699 ************ 2025-05-26 03:33:28.070920 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:33:28.071103 | orchestrator | changed: [testbed-node-3] 2025-05-26 
03:33:28.071119 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:33:28.071131 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:33:28.072432 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:33:28.073462 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:33:28.074576 | orchestrator | changed: [testbed-manager] 2025-05-26 03:33:28.076218 | orchestrator | 2025-05-26 03:33:28.077217 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-05-26 03:33:28.078242 | orchestrator | Monday 26 May 2025 03:33:28 +0000 (0:00:11.872) 0:00:55.572 ************ 2025-05-26 03:33:29.667372 | orchestrator | ok: [testbed-manager] 2025-05-26 03:33:29.667842 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:33:29.670510 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:33:29.670935 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:33:29.672916 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:33:29.674276 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:33:29.677047 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:33:29.677985 | orchestrator | 2025-05-26 03:33:29.678923 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-05-26 03:33:29.680541 | orchestrator | Monday 26 May 2025 03:33:29 +0000 (0:00:01.601) 0:00:57.174 ************ 2025-05-26 03:33:30.583379 | orchestrator | ok: [testbed-manager] 2025-05-26 03:33:30.583542 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:33:30.585187 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:33:30.586004 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:33:30.586935 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:33:30.588008 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:33:30.588641 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:33:30.589879 | orchestrator | 2025-05-26 03:33:30.590509 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 
2025-05-26 03:33:30.591323 | orchestrator | Monday 26 May 2025 03:33:30 +0000 (0:00:00.918) 0:00:58.092 ************ 2025-05-26 03:33:30.678246 | orchestrator | ok: [testbed-manager] 2025-05-26 03:33:30.716156 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:33:30.742454 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:33:30.775379 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:33:30.840503 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:33:30.841988 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:33:30.843399 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:33:30.844622 | orchestrator | 2025-05-26 03:33:30.845941 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-05-26 03:33:30.847140 | orchestrator | Monday 26 May 2025 03:33:30 +0000 (0:00:00.257) 0:00:58.350 ************ 2025-05-26 03:33:30.931828 | orchestrator | ok: [testbed-manager] 2025-05-26 03:33:30.965253 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:33:30.992185 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:33:31.026836 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:33:31.098208 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:33:31.099120 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:33:31.100217 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:33:31.100238 | orchestrator | 2025-05-26 03:33:31.100252 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-05-26 03:33:31.100338 | orchestrator | Monday 26 May 2025 03:33:31 +0000 (0:00:00.257) 0:00:58.608 ************ 2025-05-26 03:33:31.455627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 03:33:31.455717 | orchestrator | 2025-05-26 03:33:31.455869 | orchestrator | TASK 
[osism.commons.packages : Install needrestart package] ******************** 2025-05-26 03:33:31.456550 | orchestrator | Monday 26 May 2025 03:33:31 +0000 (0:00:00.357) 0:00:58.966 ************ 2025-05-26 03:33:33.009623 | orchestrator | ok: [testbed-manager] 2025-05-26 03:33:33.009910 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:33:33.010609 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:33:33.011990 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:33:33.012932 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:33:33.013447 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:33:33.014099 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:33:33.014880 | orchestrator | 2025-05-26 03:33:33.015312 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-05-26 03:33:33.016371 | orchestrator | Monday 26 May 2025 03:33:33 +0000 (0:00:01.551) 0:01:00.517 ************ 2025-05-26 03:33:33.558344 | orchestrator | changed: [testbed-manager] 2025-05-26 03:33:33.558565 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:33:33.561296 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:33:33.564273 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:33:33.564378 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:33:33.564507 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:33:33.565213 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:33:33.566423 | orchestrator | 2025-05-26 03:33:33.568874 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-05-26 03:33:33.569707 | orchestrator | Monday 26 May 2025 03:33:33 +0000 (0:00:00.548) 0:01:01.067 ************ 2025-05-26 03:33:33.654643 | orchestrator | ok: [testbed-manager] 2025-05-26 03:33:33.681090 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:33:33.707980 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:33:33.732646 | orchestrator | ok: [testbed-node-5] 2025-05-26 
03:33:33.807895 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:33:33.808358 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:33:33.809321 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:33:33.812308 | orchestrator | 2025-05-26 03:33:33.812341 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-05-26 03:33:33.812357 | orchestrator | Monday 26 May 2025 03:33:33 +0000 (0:00:00.251) 0:01:01.318 ************ 2025-05-26 03:33:35.023531 | orchestrator | ok: [testbed-manager] 2025-05-26 03:33:35.026002 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:33:35.026124 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:33:35.026571 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:33:35.027219 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:33:35.027870 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:33:35.029188 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:33:35.031330 | orchestrator | 2025-05-26 03:33:35.031424 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-05-26 03:33:35.032240 | orchestrator | Monday 26 May 2025 03:33:35 +0000 (0:00:01.213) 0:01:02.531 ************ 2025-05-26 03:33:36.771206 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:33:36.771916 | orchestrator | changed: [testbed-manager] 2025-05-26 03:33:36.771951 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:33:36.771965 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:33:36.772300 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:33:36.772629 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:33:36.776294 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:33:36.776452 | orchestrator | 2025-05-26 03:33:36.776667 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-05-26 03:33:36.777043 | orchestrator | Monday 26 May 2025 03:33:36 +0000 (0:00:01.749) 0:01:04.281 ************ 2025-05-26 
03:33:38.882678 | orchestrator | ok: [testbed-manager] 2025-05-26 03:33:38.883460 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:33:38.885841 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:33:38.886914 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:33:38.887824 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:33:38.888280 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:33:38.889301 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:33:38.889903 | orchestrator | 2025-05-26 03:33:38.890683 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-05-26 03:33:38.891336 | orchestrator | Monday 26 May 2025 03:33:38 +0000 (0:00:02.109) 0:01:06.390 ************ 2025-05-26 03:34:14.107495 | orchestrator | ok: [testbed-manager] 2025-05-26 03:34:14.107706 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:34:14.107862 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:34:14.107891 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:34:14.107912 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:34:14.107933 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:34:14.107952 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:34:14.107974 | orchestrator | 2025-05-26 03:34:14.108128 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-05-26 03:34:14.108517 | orchestrator | Monday 26 May 2025 03:34:14 +0000 (0:00:35.218) 0:01:41.609 ************ 2025-05-26 03:35:28.845790 | orchestrator | changed: [testbed-manager] 2025-05-26 03:35:28.845951 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:35:28.846262 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:35:28.847022 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:35:28.848703 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:35:28.849586 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:35:28.850353 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:35:28.851264 | 
orchestrator | 2025-05-26 03:35:28.852116 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-05-26 03:35:28.852771 | orchestrator | Monday 26 May 2025 03:35:28 +0000 (0:01:14.742) 0:02:56.351 ************ 2025-05-26 03:35:30.452470 | orchestrator | ok: [testbed-manager] 2025-05-26 03:35:30.452648 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:35:30.454255 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:35:30.455438 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:35:30.457046 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:35:30.457295 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:35:30.458185 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:35:30.459199 | orchestrator | 2025-05-26 03:35:30.459885 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-05-26 03:35:30.460475 | orchestrator | Monday 26 May 2025 03:35:30 +0000 (0:00:01.607) 0:02:57.959 ************ 2025-05-26 03:35:42.005851 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:35:42.005970 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:35:42.005985 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:35:42.005997 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:35:42.006568 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:35:42.007462 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:35:42.009150 | orchestrator | changed: [testbed-manager] 2025-05-26 03:35:42.012949 | orchestrator | 2025-05-26 03:35:42.012980 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-05-26 03:35:42.013036 | orchestrator | Monday 26 May 2025 03:35:41 +0000 (0:00:11.549) 0:03:09.508 ************ 2025-05-26 03:35:42.379327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-05-26 03:35:42.379460 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-05-26 03:35:42.380652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-05-26 03:35:42.381521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-05-26 03:35:42.382626 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 
1024}]}) 2025-05-26 03:35:42.383874 | orchestrator | 2025-05-26 03:35:42.387219 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-05-26 03:35:42.387425 | orchestrator | Monday 26 May 2025 03:35:42 +0000 (0:00:00.380) 0:03:09.888 ************ 2025-05-26 03:35:42.435613 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-26 03:35:42.463394 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:35:42.463546 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-26 03:35:42.497352 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:35:42.497401 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-26 03:35:42.497415 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-26 03:35:42.521159 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:35:42.552188 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:35:43.066922 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-26 03:35:43.067644 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-26 03:35:43.068533 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-26 03:35:43.069457 | orchestrator | 2025-05-26 03:35:43.070104 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-05-26 03:35:43.070830 | orchestrator | Monday 26 May 2025 03:35:43 +0000 (0:00:00.686) 0:03:10.575 ************ 2025-05-26 03:35:43.127372 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-26 03:35:43.128204 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-26 03:35:43.129520 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-26 03:35:43.130792 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-26 03:35:43.134116 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-26 03:35:43.174477 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-26 03:35:43.174533 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-26 03:35:43.174624 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-26 03:35:43.174947 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-26 03:35:43.175200 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-26 03:35:43.175380 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-26 03:35:43.175821 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-26 03:35:43.176001 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-26 03:35:43.176372 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-26 03:35:43.176630 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-26 03:35:43.177025 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-26 03:35:43.177305 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-26 03:35:43.177557 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-26 03:35:43.177851 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-26 03:35:43.178057 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-26 03:35:43.178339 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-26 03:35:43.210290 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:35:43.210387 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-26 03:35:43.210677 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-26 03:35:43.211321 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-26 03:35:43.211626 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-26 03:35:43.212383 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-26 03:35:43.212671 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-26 03:35:43.213220 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-26 03:35:43.213542 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-26 03:35:43.214170 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-26 03:35:43.214445 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-26 03:35:43.215082 | 
orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-26 03:35:43.248183 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:35:43.248811 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-26 03:35:43.249819 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-26 03:35:43.250279 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-26 03:35:43.251609 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-26 03:35:43.253468 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-26 03:35:43.256215 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-26 03:35:43.257287 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-26 03:35:43.258203 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-26 03:35:43.269987 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:35:46.862482 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:35:46.863525 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-26 03:35:46.864306 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-26 03:35:46.866906 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-26 03:35:46.867884 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-26 03:35:46.868929 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-26 03:35:46.869812 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-26 03:35:46.871283 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-26 03:35:46.872541 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-26 03:35:46.873133 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-26 03:35:46.875020 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-26 03:35:46.875745 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-26 03:35:46.877652 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-26 03:35:46.878388 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-26 03:35:46.879271 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-26 03:35:46.880106 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-26 03:35:46.880585 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-26 03:35:46.881745 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-26 03:35:46.881920 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-26 03:35:46.883009 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-26 03:35:46.883309 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 
2025-05-26 03:35:46.883928 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-26 03:35:46.885210 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-26 03:35:46.886313 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-26 03:35:46.887179 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-26 03:35:46.887899 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-26 03:35:46.888751 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-26 03:35:46.889279 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-26 03:35:46.893132 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-26 03:35:46.893867 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-26 03:35:46.895056 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-26 03:35:46.895833 | orchestrator | 2025-05-26 03:35:46.896690 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-05-26 03:35:46.897810 | orchestrator | Monday 26 May 2025 03:35:46 +0000 (0:00:03.795) 0:03:14.370 ************ 2025-05-26 03:35:47.464621 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-26 03:35:47.464929 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-26 03:35:47.466911 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-26 03:35:47.466941 | orchestrator | changed: 
[testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-26 03:35:47.467909 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-26 03:35:47.468454 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-26 03:35:47.468961 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-26 03:35:47.469478 | orchestrator | 2025-05-26 03:35:47.470152 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-05-26 03:35:47.470776 | orchestrator | Monday 26 May 2025 03:35:47 +0000 (0:00:00.603) 0:03:14.974 ************ 2025-05-26 03:35:47.529558 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-26 03:35:47.555051 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:35:47.632516 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-26 03:35:47.633164 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-26 03:35:47.966116 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:35:47.966322 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:35:47.967114 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-26 03:35:47.968042 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:35:47.969553 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-26 03:35:47.970373 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-26 03:35:47.970990 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-26 
03:35:47.971487 | orchestrator | 2025-05-26 03:35:47.972398 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-05-26 03:35:47.973201 | orchestrator | Monday 26 May 2025 03:35:47 +0000 (0:00:00.500) 0:03:15.475 ************ 2025-05-26 03:35:48.023281 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-26 03:35:48.046511 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:35:48.108418 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-26 03:35:48.137273 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:35:48.521511 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-26 03:35:48.521662 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:35:48.522920 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-26 03:35:48.524250 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:35:48.524906 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-26 03:35:48.525891 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-26 03:35:48.526966 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-26 03:35:48.527958 | orchestrator | 2025-05-26 03:35:48.528153 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-05-26 03:35:48.529004 | orchestrator | Monday 26 May 2025 03:35:48 +0000 (0:00:00.556) 0:03:16.031 ************ 2025-05-26 03:35:48.606192 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:35:48.629667 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:35:48.655223 | orchestrator 
| skipping: [testbed-node-4] 2025-05-26 03:35:48.677953 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:35:48.797864 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:35:48.798002 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:35:48.799111 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:35:48.800578 | orchestrator | 2025-05-26 03:35:48.801086 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-05-26 03:35:48.802184 | orchestrator | Monday 26 May 2025 03:35:48 +0000 (0:00:00.276) 0:03:16.307 ************ 2025-05-26 03:35:54.467400 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:35:54.467520 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:35:54.468244 | orchestrator | ok: [testbed-manager] 2025-05-26 03:35:54.469959 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:35:54.470245 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:35:54.471103 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:35:54.471832 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:35:54.472476 | orchestrator | 2025-05-26 03:35:54.474196 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-05-26 03:35:54.474587 | orchestrator | Monday 26 May 2025 03:35:54 +0000 (0:00:05.668) 0:03:21.975 ************ 2025-05-26 03:35:54.538758 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-05-26 03:35:54.570459 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-05-26 03:35:54.570625 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:35:54.571123 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-05-26 03:35:54.605203 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:35:54.605652 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-05-26 03:35:54.648194 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:35:54.648928 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  
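The osism.commons.sysctl tasks above apply per-group kernel settings: vm.max_map_count for the nodes running Elasticsearch, TCP keepalive and buffer tuning for the RabbitMQ nodes, vm.swappiness everywhere, nf_conntrack_max on compute nodes, and an inotify limit on k3s nodes. With the RabbitMQ values, a dead peer is detected after roughly tcp_keepalive_time + tcp_keepalive_intvl × tcp_keepalive_probes = 6 + 3 × 3 = 15 seconds of silence. As a minimal sketch (not part of the role, which applies each group only to the hosts that need it — hence the "skipping" lines above), the logged values could be rendered into a single sysctl drop-in; the function name `render_sysctl` and the one-file layout are assumptions for illustration:

```shell
# Illustrative only: collapse the per-group sysctl values from the log above
# into one drop-in rendered to stdout. On real hosts the role applies each
# group conditionally; collapsing everything into one file is a simplification.
render_sysctl() {
    cat <<'EOF'
# elasticsearch
vm.max_map_count = 262144
# rabbitmq
net.ipv4.tcp_keepalive_time = 6
net.ipv4.tcp_keepalive_intvl = 3
net.ipv4.tcp_keepalive_probes = 3
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_tw_reuse = 1
net.core.somaxconn = 4096
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 8192
# generic
vm.swappiness = 1
# compute
net.netfilter.nf_conntrack_max = 1048576
# k3s_node
fs.inotify.max_user_instances = 1024
EOF
}

render_sysctl
```

On a real host this output would be written to a file such as `/etc/sysctl.d/99-tuning.conf` (a hypothetical name) and activated with `sysctl --system`, which requires root; the playbook itself sets the parameters through its Ansible tasks rather than a hand-written file.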
2025-05-26 03:35:54.686545 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:35:54.686953 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-05-26 03:35:54.748855 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:35:54.750204 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:35:54.753412 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-05-26 03:35:54.753434 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:35:54.753444 | orchestrator | 2025-05-26 03:35:54.753454 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-05-26 03:35:54.753465 | orchestrator | Monday 26 May 2025 03:35:54 +0000 (0:00:00.283) 0:03:22.259 ************ 2025-05-26 03:35:55.738521 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-05-26 03:35:55.738693 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-05-26 03:35:55.740572 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-05-26 03:35:55.742263 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-05-26 03:35:55.743059 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-05-26 03:35:55.744501 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-05-26 03:35:55.746750 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-05-26 03:35:55.748137 | orchestrator | 2025-05-26 03:35:55.748773 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-05-26 03:35:55.750082 | orchestrator | Monday 26 May 2025 03:35:55 +0000 (0:00:00.984) 0:03:23.243 ************ 2025-05-26 03:35:56.231760 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 03:35:56.234103 | orchestrator | 2025-05-26 03:35:56.238523 | orchestrator | TASK [osism.commons.motd : Remove 
update-motd package] ************************* 2025-05-26 03:35:56.242158 | orchestrator | Monday 26 May 2025 03:35:56 +0000 (0:00:00.496) 0:03:23.740 ************ 2025-05-26 03:35:57.443906 | orchestrator | ok: [testbed-manager] 2025-05-26 03:35:57.444019 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:35:57.445143 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:35:57.445808 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:35:57.446727 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:35:57.447404 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:35:57.448582 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:35:57.448871 | orchestrator | 2025-05-26 03:35:57.450392 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-05-26 03:35:57.451196 | orchestrator | Monday 26 May 2025 03:35:57 +0000 (0:00:01.210) 0:03:24.950 ************ 2025-05-26 03:35:58.056812 | orchestrator | ok: [testbed-manager] 2025-05-26 03:35:58.058201 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:35:58.060477 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:35:58.062550 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:35:58.063566 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:35:58.065196 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:35:58.066185 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:35:58.067284 | orchestrator | 2025-05-26 03:35:58.068135 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-05-26 03:35:58.068921 | orchestrator | Monday 26 May 2025 03:35:58 +0000 (0:00:00.616) 0:03:25.567 ************ 2025-05-26 03:35:58.771820 | orchestrator | changed: [testbed-manager] 2025-05-26 03:35:58.772144 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:35:58.772421 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:35:58.772777 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:35:58.773239 | orchestrator | changed: [testbed-node-0] 
2025-05-26 03:35:58.773618 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:35:58.774164 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:35:58.774334 | orchestrator | 2025-05-26 03:35:58.774688 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-05-26 03:35:58.774975 | orchestrator | Monday 26 May 2025 03:35:58 +0000 (0:00:00.710) 0:03:26.278 ************ 2025-05-26 03:35:59.346986 | orchestrator | ok: [testbed-manager] 2025-05-26 03:35:59.347154 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:35:59.347917 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:35:59.348928 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:35:59.350266 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:35:59.350903 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:35:59.351272 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:35:59.352238 | orchestrator | 2025-05-26 03:35:59.353116 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-05-26 03:35:59.355418 | orchestrator | Monday 26 May 2025 03:35:59 +0000 (0:00:00.576) 0:03:26.854 ************ 2025-05-26 03:36:00.254261 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748228659.2307508, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-26 03:36:00.254378 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748228717.6955686, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-26 03:36:00.255087 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748228704.7614477, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-26 03:36:00.255316 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748228709.7957134, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-26 03:36:00.256997 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748228715.3998442, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-26 03:36:00.258438 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748228708.1590962, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-26 03:36:00.258470 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748228717.5478823, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-26 03:36:00.258499 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748228713.874807, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-26 03:36:00.258786 | orchestrator | changed: [testbed-node-4] => 
(item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748228639.76062, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-26 03:36:00.259122 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748228624.8969955, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-26 03:36:00.260023 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748228626.683487, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-26 03:36:00.260203 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 
2049, 'nlink': 1, 'atime': 1748228637.1514347, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-26 03:36:00.260338 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748228625.293636, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-26 03:36:00.260905 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748228634.8919344, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-26 03:36:00.261098 | orchestrator | 2025-05-26 03:36:00.261475 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-05-26 03:36:00.261939 | orchestrator | Monday 26 May 2025 03:36:00 +0000 (0:00:00.910) 0:03:27.764 ************ 2025-05-26 03:36:01.371997 | orchestrator | changed: [testbed-manager] 2025-05-26 03:36:01.372526 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:36:01.373374 | orchestrator | changed: [testbed-node-4] 2025-05-26 
03:36:01.374240 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:36:01.375123 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:36:01.375462 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:36:01.376398 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:36:01.380594 | orchestrator | 2025-05-26 03:36:01.380625 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-05-26 03:36:01.380639 | orchestrator | Monday 26 May 2025 03:36:01 +0000 (0:00:01.115) 0:03:28.880 ************ 2025-05-26 03:36:02.557839 | orchestrator | changed: [testbed-manager] 2025-05-26 03:36:02.558926 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:36:02.560145 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:36:02.561926 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:36:02.563071 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:36:02.564449 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:36:02.565438 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:36:02.566455 | orchestrator | 2025-05-26 03:36:02.567080 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-05-26 03:36:02.568224 | orchestrator | Monday 26 May 2025 03:36:02 +0000 (0:00:01.185) 0:03:30.066 ************ 2025-05-26 03:36:03.805441 | orchestrator | changed: [testbed-manager] 2025-05-26 03:36:03.805603 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:36:03.808931 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:36:03.810806 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:36:03.811855 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:36:03.813003 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:36:03.815149 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:36:03.816105 | orchestrator | 2025-05-26 03:36:03.817103 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 
2025-05-26 03:36:03.817894 | orchestrator | Monday 26 May 2025 03:36:03 +0000 (0:00:01.247) 0:03:31.314 ************ 2025-05-26 03:36:03.923692 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:36:03.956860 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:36:03.989465 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:36:04.023512 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:36:04.099621 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:36:04.099868 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:36:04.100679 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:36:04.102426 | orchestrator | 2025-05-26 03:36:04.103595 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-05-26 03:36:04.104594 | orchestrator | Monday 26 May 2025 03:36:04 +0000 (0:00:00.295) 0:03:31.610 ************ 2025-05-26 03:36:04.877620 | orchestrator | ok: [testbed-manager] 2025-05-26 03:36:04.877828 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:36:04.877846 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:36:04.877954 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:36:04.878010 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:36:04.878892 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:36:04.879621 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:36:04.880394 | orchestrator | 2025-05-26 03:36:04.881208 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-05-26 03:36:04.882008 | orchestrator | Monday 26 May 2025 03:36:04 +0000 (0:00:00.773) 0:03:32.383 ************ 2025-05-26 03:36:05.246361 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 03:36:05.246633 | orchestrator | 2025-05-26 03:36:05.247683 | 
orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-05-26 03:36:05.248502 | orchestrator | Monday 26 May 2025 03:36:05 +0000 (0:00:00.372) 0:03:32.755 ************ 2025-05-26 03:36:12.720355 | orchestrator | ok: [testbed-manager] 2025-05-26 03:36:12.720445 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:36:12.720760 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:36:12.722215 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:36:12.723917 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:36:12.725542 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:36:12.726891 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:36:12.728219 | orchestrator | 2025-05-26 03:36:12.729336 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-05-26 03:36:12.730202 | orchestrator | Monday 26 May 2025 03:36:12 +0000 (0:00:07.473) 0:03:40.228 ************ 2025-05-26 03:36:13.946813 | orchestrator | ok: [testbed-manager] 2025-05-26 03:36:13.947935 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:36:13.947981 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:36:13.948504 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:36:13.948741 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:36:13.949579 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:36:13.950525 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:36:13.951271 | orchestrator | 2025-05-26 03:36:13.951720 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-05-26 03:36:13.952421 | orchestrator | Monday 26 May 2025 03:36:13 +0000 (0:00:01.225) 0:03:41.454 ************ 2025-05-26 03:36:14.932505 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:36:14.933888 | orchestrator | ok: [testbed-manager] 2025-05-26 03:36:14.935413 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:36:14.936569 | orchestrator | ok: [testbed-node-5] 2025-05-26 
03:36:14.938752 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:36:14.939050 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:36:14.941017 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:36:14.941325 | orchestrator | 2025-05-26 03:36:14.943493 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-05-26 03:36:14.943569 | orchestrator | Monday 26 May 2025 03:36:14 +0000 (0:00:00.985) 0:03:42.440 ************ 2025-05-26 03:36:15.419449 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 03:36:15.420518 | orchestrator | 2025-05-26 03:36:15.422353 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-05-26 03:36:15.423291 | orchestrator | Monday 26 May 2025 03:36:15 +0000 (0:00:00.487) 0:03:42.927 ************ 2025-05-26 03:36:23.883217 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:36:23.885247 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:36:23.885300 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:36:23.887109 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:36:23.888268 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:36:23.889436 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:36:23.890322 | orchestrator | changed: [testbed-manager] 2025-05-26 03:36:23.890969 | orchestrator | 2025-05-26 03:36:23.892231 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-05-26 03:36:23.892394 | orchestrator | Monday 26 May 2025 03:36:23 +0000 (0:00:08.461) 0:03:51.389 ************ 2025-05-26 03:36:24.493793 | orchestrator | changed: [testbed-manager] 2025-05-26 03:36:24.494548 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:36:24.497896 | orchestrator | 
changed: [testbed-node-4] 2025-05-26 03:36:24.497976 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:36:24.498404 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:36:24.499207 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:36:24.499477 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:36:24.499848 | orchestrator | 2025-05-26 03:36:24.500353 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-05-26 03:36:24.501119 | orchestrator | Monday 26 May 2025 03:36:24 +0000 (0:00:00.612) 0:03:52.001 ************ 2025-05-26 03:36:25.628630 | orchestrator | changed: [testbed-manager] 2025-05-26 03:36:25.631571 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:36:25.633085 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:36:25.633908 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:36:25.635042 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:36:25.636256 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:36:25.637096 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:36:25.638118 | orchestrator | 2025-05-26 03:36:25.639069 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-05-26 03:36:25.639456 | orchestrator | Monday 26 May 2025 03:36:25 +0000 (0:00:01.135) 0:03:53.137 ************ 2025-05-26 03:36:26.664423 | orchestrator | changed: [testbed-manager] 2025-05-26 03:36:26.665298 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:36:26.666359 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:36:26.667897 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:36:26.668419 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:36:26.669621 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:36:26.671036 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:36:26.672098 | orchestrator | 2025-05-26 03:36:26.673127 | orchestrator | TASK [osism.commons.cleanup : Gather variables for 
each operating system] ****** 2025-05-26 03:36:26.673936 | orchestrator | Monday 26 May 2025 03:36:26 +0000 (0:00:01.034) 0:03:54.171 ************ 2025-05-26 03:36:26.753403 | orchestrator | ok: [testbed-manager] 2025-05-26 03:36:26.824222 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:36:26.864812 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:36:26.908302 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:36:26.981554 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:36:26.982178 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:36:26.983136 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:36:26.983166 | orchestrator | 2025-05-26 03:36:26.984808 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-05-26 03:36:26.984832 | orchestrator | Monday 26 May 2025 03:36:26 +0000 (0:00:00.320) 0:03:54.492 ************ 2025-05-26 03:36:27.084819 | orchestrator | ok: [testbed-manager] 2025-05-26 03:36:27.118076 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:36:27.150951 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:36:27.201462 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:36:27.284135 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:36:27.284224 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:36:27.284237 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:36:27.284995 | orchestrator | 2025-05-26 03:36:27.285019 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-05-26 03:36:27.285034 | orchestrator | Monday 26 May 2025 03:36:27 +0000 (0:00:00.298) 0:03:54.791 ************ 2025-05-26 03:36:27.385649 | orchestrator | ok: [testbed-manager] 2025-05-26 03:36:27.418862 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:36:27.454488 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:36:27.489043 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:36:27.564617 | orchestrator | ok: [testbed-node-0] 2025-05-26 
03:36:27.566001 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:36:27.566938 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:36:27.567924 | orchestrator | 2025-05-26 03:36:27.568731 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-26 03:36:27.569228 | orchestrator | Monday 26 May 2025 03:36:27 +0000 (0:00:00.284) 0:03:55.075 ************ 2025-05-26 03:36:33.444519 | orchestrator | ok: [testbed-manager] 2025-05-26 03:36:33.445221 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:36:33.446666 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:36:33.448075 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:36:33.448844 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:36:33.449808 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:36:33.450808 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:36:33.451670 | orchestrator | 2025-05-26 03:36:33.452402 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-26 03:36:33.453261 | orchestrator | Monday 26 May 2025 03:36:33 +0000 (0:00:05.878) 0:04:00.954 ************ 2025-05-26 03:36:33.907099 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 03:36:33.907557 | orchestrator | 2025-05-26 03:36:33.908465 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-26 03:36:33.909463 | orchestrator | Monday 26 May 2025 03:36:33 +0000 (0:00:00.460) 0:04:01.415 ************ 2025-05-26 03:36:33.989588 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-26 03:36:33.990512 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-26 03:36:33.990545 | orchestrator | skipping: [testbed-node-3] => 
(item=apt-daily-upgrade)  2025-05-26 03:36:34.024522 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:36:34.024906 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-26 03:36:34.067229 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-26 03:36:34.068357 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:36:34.068389 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-26 03:36:34.068402 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-26 03:36:34.101063 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-26 03:36:34.103830 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:36:34.103865 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-05-26 03:36:34.137108 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:36:34.138355 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-26 03:36:34.139058 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-26 03:36:34.222922 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:36:34.223790 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-26 03:36:34.224327 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:36:34.225179 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-26 03:36:34.225956 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-26 03:36:34.226563 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:36:34.227260 | orchestrator | 2025-05-26 03:36:34.227995 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-26 03:36:34.228871 | orchestrator | Monday 26 May 2025 03:36:34 +0000 (0:00:00.319) 0:04:01.734 ************ 2025-05-26 03:36:34.646293 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 03:36:34.646428 | orchestrator | 2025-05-26 03:36:34.646618 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-26 03:36:34.647758 | orchestrator | Monday 26 May 2025 03:36:34 +0000 (0:00:00.420) 0:04:02.154 ************ 2025-05-26 03:36:34.720796 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-05-26 03:36:34.762891 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-26 03:36:34.766582 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:36:34.767288 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-05-26 03:36:34.800074 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:36:34.800816 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-26 03:36:34.844468 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:36:34.844821 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-26 03:36:34.879372 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:36:34.952288 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:36:34.953821 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-26 03:36:34.954409 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:36:34.958833 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-26 03:36:34.959411 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:36:34.960656 | orchestrator | 2025-05-26 03:36:34.961665 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-26 03:36:34.961964 | orchestrator | Monday 26 May 2025 03:36:34 +0000 (0:00:00.309) 
0:04:02.464 ************ 2025-05-26 03:36:35.472465 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 03:36:35.472882 | orchestrator | 2025-05-26 03:36:35.473745 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-26 03:36:35.478495 | orchestrator | Monday 26 May 2025 03:36:35 +0000 (0:00:00.518) 0:04:02.982 ************ 2025-05-26 03:37:09.054315 | orchestrator | changed: [testbed-manager] 2025-05-26 03:37:09.054574 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:37:09.054596 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:37:09.059033 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:37:09.059168 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:37:09.059186 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:37:09.059197 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:37:09.059278 | orchestrator | 2025-05-26 03:37:09.062225 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-26 03:37:09.062938 | orchestrator | Monday 26 May 2025 03:37:09 +0000 (0:00:33.576) 0:04:36.558 ************ 2025-05-26 03:37:16.885116 | orchestrator | changed: [testbed-manager] 2025-05-26 03:37:16.886468 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:37:16.889448 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:37:16.889800 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:37:16.893916 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:37:16.893940 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:37:16.893953 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:37:16.893965 | orchestrator | 2025-05-26 03:37:16.893978 | orchestrator | TASK [osism.commons.cleanup : Uninstall 
unattended-upgrades package] *********** 2025-05-26 03:37:16.893991 | orchestrator | Monday 26 May 2025 03:37:16 +0000 (0:00:07.834) 0:04:44.393 ************ 2025-05-26 03:37:24.091697 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:37:24.092234 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:37:24.093556 | orchestrator | changed: [testbed-manager] 2025-05-26 03:37:24.096117 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:37:24.097268 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:37:24.098095 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:37:24.099534 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:37:24.100318 | orchestrator | 2025-05-26 03:37:24.100712 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-26 03:37:24.101764 | orchestrator | Monday 26 May 2025 03:37:24 +0000 (0:00:07.208) 0:04:51.601 ************ 2025-05-26 03:37:25.816993 | orchestrator | ok: [testbed-manager] 2025-05-26 03:37:25.817102 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:37:25.819833 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:37:25.821125 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:37:25.823115 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:37:25.823140 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:37:25.823153 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:37:25.823452 | orchestrator | 2025-05-26 03:37:25.824810 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-26 03:37:25.825071 | orchestrator | Monday 26 May 2025 03:37:25 +0000 (0:00:01.723) 0:04:53.324 ************ 2025-05-26 03:37:31.256390 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:37:31.256536 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:37:31.256643 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:37:31.257605 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:37:31.258224 | 
orchestrator | changed: [testbed-manager] 2025-05-26 03:37:31.258788 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:37:31.259315 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:37:31.259987 | orchestrator | 2025-05-26 03:37:31.261762 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-05-26 03:37:31.262382 | orchestrator | Monday 26 May 2025 03:37:31 +0000 (0:00:05.439) 0:04:58.764 ************ 2025-05-26 03:37:31.669805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 03:37:31.670419 | orchestrator | 2025-05-26 03:37:31.671530 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-05-26 03:37:31.676581 | orchestrator | Monday 26 May 2025 03:37:31 +0000 (0:00:00.415) 0:04:59.179 ************ 2025-05-26 03:37:32.390480 | orchestrator | changed: [testbed-manager] 2025-05-26 03:37:32.390931 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:37:32.392028 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:37:32.393288 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:37:32.393834 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:37:32.394486 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:37:32.394981 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:37:32.395632 | orchestrator | 2025-05-26 03:37:32.396135 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-26 03:37:32.396602 | orchestrator | Monday 26 May 2025 03:37:32 +0000 (0:00:00.718) 0:04:59.898 ************ 2025-05-26 03:37:34.014095 | orchestrator | ok: [testbed-manager] 2025-05-26 03:37:34.016953 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:37:34.018726 | orchestrator | ok: [testbed-node-4] 
2025-05-26 03:37:34.018789 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:37:34.019324 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:37:34.019988 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:37:34.020549 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:37:34.021037 | orchestrator | 2025-05-26 03:37:34.021852 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-26 03:37:34.022178 | orchestrator | Monday 26 May 2025 03:37:34 +0000 (0:00:01.624) 0:05:01.523 ************ 2025-05-26 03:37:34.836969 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:37:34.837080 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:37:34.837095 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:37:34.837703 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:37:34.838922 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:37:34.839874 | orchestrator | changed: [testbed-manager] 2025-05-26 03:37:34.841409 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:37:34.842530 | orchestrator | 2025-05-26 03:37:34.842558 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-26 03:37:34.843513 | orchestrator | Monday 26 May 2025 03:37:34 +0000 (0:00:00.818) 0:05:02.341 ************ 2025-05-26 03:37:34.896257 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:37:34.929175 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:37:34.961141 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:37:34.992191 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:37:35.022966 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:37:35.110518 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:37:35.110610 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:37:35.110625 | orchestrator | 2025-05-26 03:37:35.110637 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 
2025-05-26 03:37:35.110650 | orchestrator | Monday 26 May 2025 03:37:35 +0000 (0:00:00.271) 0:05:02.613 ************ 2025-05-26 03:37:35.209146 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:37:35.244313 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:37:35.280724 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:37:35.310139 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:37:35.483413 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:37:35.485327 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:37:35.487035 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:37:35.488440 | orchestrator | 2025-05-26 03:37:35.489646 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-26 03:37:35.490410 | orchestrator | Monday 26 May 2025 03:37:35 +0000 (0:00:00.378) 0:05:02.991 ************ 2025-05-26 03:37:35.583570 | orchestrator | ok: [testbed-manager] 2025-05-26 03:37:35.616640 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:37:35.650654 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:37:35.683112 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:37:35.745156 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:37:35.745858 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:37:35.746884 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:37:35.749504 | orchestrator | 2025-05-26 03:37:35.750375 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-26 03:37:35.751012 | orchestrator | Monday 26 May 2025 03:37:35 +0000 (0:00:00.263) 0:05:03.256 ************ 2025-05-26 03:37:35.854963 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:37:35.898569 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:37:35.932739 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:37:35.963835 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:37:36.034331 | orchestrator | skipping: [testbed-node-0] 
2025-05-26 03:37:36.034418 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:37:36.035108 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:37:36.035734 | orchestrator |
2025-05-26 03:37:36.036568 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-05-26 03:37:36.037101 | orchestrator | Monday 26 May 2025 03:37:36 +0000 (0:00:00.288) 0:05:03.544 ************
2025-05-26 03:37:36.144542 | orchestrator | ok: [testbed-manager]
2025-05-26 03:37:36.183432 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:37:36.213881 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:37:36.253144 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:37:36.326138 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:37:36.327025 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:37:36.327521 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:37:36.328156 | orchestrator |
2025-05-26 03:37:36.328621 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-05-26 03:37:36.329221 | orchestrator | Monday 26 May 2025 03:37:36 +0000 (0:00:00.293) 0:05:03.838 ************
2025-05-26 03:37:36.512158 | orchestrator | ok: [testbed-manager] =>
2025-05-26 03:37:36.512488 | orchestrator |  docker_version: 5:27.5.1
2025-05-26 03:37:36.545927 | orchestrator | ok: [testbed-node-3] =>
2025-05-26 03:37:36.546413 | orchestrator |  docker_version: 5:27.5.1
2025-05-26 03:37:36.584733 | orchestrator | ok: [testbed-node-4] =>
2025-05-26 03:37:36.584810 | orchestrator |  docker_version: 5:27.5.1
2025-05-26 03:37:36.620642 | orchestrator | ok: [testbed-node-5] =>
2025-05-26 03:37:36.620721 | orchestrator |  docker_version: 5:27.5.1
2025-05-26 03:37:36.659327 | orchestrator | ok: [testbed-node-0] =>
2025-05-26 03:37:36.660092 | orchestrator |  docker_version: 5:27.5.1
2025-05-26 03:37:36.747221 | orchestrator | ok: [testbed-node-1] =>
2025-05-26 03:37:36.748997 | orchestrator |  docker_version: 5:27.5.1
2025-05-26 03:37:36.749261 | orchestrator | ok: [testbed-node-2] =>
2025-05-26 03:37:36.751642 | orchestrator |  docker_version: 5:27.5.1
2025-05-26 03:37:36.755036 | orchestrator |
2025-05-26 03:37:36.761187 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-05-26 03:37:36.761212 | orchestrator | Monday 26 May 2025 03:37:36 +0000 (0:00:00.414) 0:05:04.252 ************
2025-05-26 03:37:36.824707 | orchestrator | ok: [testbed-manager] =>
2025-05-26 03:37:36.824832 | orchestrator |  docker_cli_version: 5:27.5.1
2025-05-26 03:37:36.858231 | orchestrator | ok: [testbed-node-3] =>
2025-05-26 03:37:36.858268 | orchestrator |  docker_cli_version: 5:27.5.1
2025-05-26 03:37:36.893752 | orchestrator | ok: [testbed-node-4] =>
2025-05-26 03:37:36.893831 | orchestrator |  docker_cli_version: 5:27.5.1
2025-05-26 03:37:36.924278 | orchestrator | ok: [testbed-node-5] =>
2025-05-26 03:37:36.924455 | orchestrator |  docker_cli_version: 5:27.5.1
2025-05-26 03:37:36.960927 | orchestrator | ok: [testbed-node-0] =>
2025-05-26 03:37:36.961017 | orchestrator |  docker_cli_version: 5:27.5.1
2025-05-26 03:37:37.031152 | orchestrator | ok: [testbed-node-1] =>
2025-05-26 03:37:37.031615 | orchestrator |  docker_cli_version: 5:27.5.1
2025-05-26 03:37:37.032151 | orchestrator | ok: [testbed-node-2] =>
2025-05-26 03:37:37.032535 | orchestrator |  docker_cli_version: 5:27.5.1
2025-05-26 03:37:37.033213 | orchestrator |
2025-05-26 03:37:37.033647 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-05-26 03:37:37.036995 | orchestrator | Monday 26 May 2025 03:37:37 +0000 (0:00:00.289) 0:05:04.542 ************
2025-05-26 03:37:37.138271 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:37:37.186868 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:37:37.221425 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:37:37.256018 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:37:37.314308 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:37:37.317872 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:37:37.317918 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:37:37.317932 | orchestrator |
2025-05-26 03:37:37.318352 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-05-26 03:37:37.319431 | orchestrator | Monday 26 May 2025 03:37:37 +0000 (0:00:00.282) 0:05:04.824 ************
2025-05-26 03:37:37.382392 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:37:37.421543 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:37:37.466735 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:37:37.519870 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:37:37.552418 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:37:37.608918 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:37:37.609016 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:37:37.609263 | orchestrator |
2025-05-26 03:37:37.609473 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-05-26 03:37:37.610014 | orchestrator | Monday 26 May 2025 03:37:37 +0000 (0:00:00.295) 0:05:05.119 ************
2025-05-26 03:37:38.000620 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 03:37:38.000765 | orchestrator |
2025-05-26 03:37:38.000783 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-05-26 03:37:38.004497 | orchestrator | Monday 26 May 2025 03:37:37 +0000 (0:00:00.390) 0:05:05.510 ************
2025-05-26 03:37:38.821903 | orchestrator | ok: [testbed-manager]
2025-05-26 03:37:38.822011 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:37:38.822268 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:37:38.823181 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:37:38.824047 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:37:38.824828 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:37:38.825468 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:37:38.826506 | orchestrator |
2025-05-26 03:37:38.827137 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-05-26 03:37:38.827772 | orchestrator | Monday 26 May 2025 03:37:38 +0000 (0:00:00.819) 0:05:06.329 ************
2025-05-26 03:37:41.514285 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:37:41.515317 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:37:41.515353 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:37:41.516201 | orchestrator | ok: [testbed-manager]
2025-05-26 03:37:41.516620 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:37:41.517234 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:37:41.517898 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:37:41.518377 | orchestrator |
2025-05-26 03:37:41.518976 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-05-26 03:37:41.519495 | orchestrator | Monday 26 May 2025 03:37:41 +0000 (0:00:02.694) 0:05:09.024 ************
2025-05-26 03:37:41.588151 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-05-26 03:37:41.588226 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-05-26 03:37:41.823600 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-05-26 03:37:41.823792 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-05-26 03:37:41.823810 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-05-26 03:37:41.823822 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-05-26 03:37:41.895488 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:37:41.897801 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-05-26 03:37:41.897848 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-05-26 03:37:41.897861 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-05-26 03:37:41.972006 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:37:41.972496 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-05-26 03:37:41.974206 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-05-26 03:37:41.975131 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-05-26 03:37:42.047622 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:37:42.048360 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-05-26 03:37:42.049934 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-05-26 03:37:42.054608 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-05-26 03:37:42.117417 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:37:42.117900 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-05-26 03:37:42.120980 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-05-26 03:37:42.249782 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:37:42.252816 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-05-26 03:37:42.253682 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:37:42.254727 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-05-26 03:37:42.256138 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-05-26 03:37:42.256781 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-05-26 03:37:42.257675 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:37:42.258523 | orchestrator |
2025-05-26 03:37:42.260395 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-05-26 03:37:42.261113 | orchestrator | Monday 26 May 2025 03:37:42 +0000 (0:00:00.733) 0:05:09.757 ************
2025-05-26 03:37:48.346322 | orchestrator | ok: [testbed-manager]
2025-05-26 03:37:48.350341 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:37:48.350424 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:37:48.351035 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:37:48.352035 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:37:48.352354 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:37:48.353994 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:37:48.354851 | orchestrator |
2025-05-26 03:37:48.355468 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-05-26 03:37:48.356297 | orchestrator | Monday 26 May 2025 03:37:48 +0000 (0:00:06.097) 0:05:15.855 ************
2025-05-26 03:37:49.435494 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:37:49.439104 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:37:49.439197 | orchestrator | ok: [testbed-manager]
2025-05-26 03:37:49.439737 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:37:49.441315 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:37:49.442327 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:37:49.444171 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:37:49.445783 | orchestrator |
2025-05-26 03:37:49.446855 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-05-26 03:37:49.447924 | orchestrator | Monday 26 May 2025 03:37:49 +0000 (0:00:01.087) 0:05:16.942 ************
2025-05-26 03:37:57.024077 | orchestrator | ok: [testbed-manager]
2025-05-26 03:37:57.026919 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:37:57.030178 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:37:57.030220 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:37:57.030232 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:37:57.030245 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:37:57.030298 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:37:57.031476 | orchestrator |
2025-05-26 03:37:57.032123 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-05-26 03:37:57.032330 | orchestrator | Monday 26 May 2025 03:37:57 +0000 (0:00:07.589) 0:05:24.531 ************
2025-05-26 03:38:00.404234 | orchestrator | changed: [testbed-manager]
2025-05-26 03:38:00.407910 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:38:00.407961 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:38:00.408167 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:38:00.408960 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:38:00.409886 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:38:00.410757 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:38:00.411068 | orchestrator |
2025-05-26 03:38:00.411923 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-05-26 03:38:00.412836 | orchestrator | Monday 26 May 2025 03:38:00 +0000 (0:00:03.379) 0:05:27.911 ************
2025-05-26 03:38:01.929110 | orchestrator | ok: [testbed-manager]
2025-05-26 03:38:01.929299 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:38:01.931200 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:38:01.933854 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:38:01.934561 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:38:01.935770 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:38:01.936535 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:38:01.937552 | orchestrator |
2025-05-26 03:38:01.939026 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-05-26 03:38:01.939583 | orchestrator | Monday 26 May 2025 03:38:01 +0000 (0:00:01.525) 0:05:29.437 ************
2025-05-26 03:38:03.319619 | orchestrator | ok: [testbed-manager]
2025-05-26 03:38:03.320785 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:38:03.324537 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:38:03.324629 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:38:03.324669 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:38:03.326444 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:38:03.327301 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:38:03.328138 | orchestrator |
2025-05-26 03:38:03.328870 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-05-26 03:38:03.330372 | orchestrator | Monday 26 May 2025 03:38:03 +0000 (0:00:01.387) 0:05:30.825 ************
2025-05-26 03:38:03.552600 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:38:03.610528 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:38:03.680539 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:38:03.743925 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:38:03.935730 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:38:03.936694 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:38:03.937599 | orchestrator | changed: [testbed-manager]
2025-05-26 03:38:03.938890 | orchestrator |
2025-05-26 03:38:03.940336 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-05-26 03:38:03.941492 | orchestrator | Monday 26 May 2025 03:38:03 +0000 (0:00:00.618) 0:05:31.443 ************
2025-05-26 03:38:13.222206 | orchestrator | ok: [testbed-manager]
2025-05-26 03:38:13.222332 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:38:13.222986 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:38:13.226229 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:38:13.227451 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:38:13.228461 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:38:13.229530 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:38:13.230310 | orchestrator |
2025-05-26 03:38:13.230712 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-05-26 03:38:13.231073 | orchestrator | Monday 26 May 2025 03:38:13 +0000 (0:00:09.286) 0:05:40.729 ************
2025-05-26 03:38:14.261571 | orchestrator | changed: [testbed-manager]
2025-05-26 03:38:14.261703 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:38:14.261719 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:38:14.264128 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:38:14.265583 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:38:14.266088 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:38:14.267100 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:38:14.267463 | orchestrator |
2025-05-26 03:38:14.268271 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-05-26 03:38:14.268725 | orchestrator | Monday 26 May 2025 03:38:14 +0000 (0:00:01.035) 0:05:41.765 ************
2025-05-26 03:38:22.472279 | orchestrator | ok: [testbed-manager]
2025-05-26 03:38:22.473027 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:38:22.473535 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:38:22.475990 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:38:22.476254 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:38:22.476683 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:38:22.477482 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:38:22.478257 | orchestrator |
2025-05-26 03:38:22.479052 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-05-26 03:38:22.479438 | orchestrator | Monday 26 May 2025 03:38:22 +0000 (0:00:08.213) 0:05:49.979 ************
2025-05-26 03:38:33.306263 | orchestrator | ok: [testbed-manager]
2025-05-26 03:38:33.306385 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:38:33.306402 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:38:33.306414 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:38:33.306483 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:38:33.306666 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:38:33.307200 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:38:33.307819 | orchestrator |
2025-05-26 03:38:33.308586 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-05-26 03:38:33.309440 | orchestrator | Monday 26 May 2025 03:38:33 +0000 (0:00:10.831) 0:06:00.810 ************
2025-05-26 03:38:33.731330 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-05-26 03:38:33.731520 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-05-26 03:38:34.534103 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-05-26 03:38:34.534424 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-05-26 03:38:34.534888 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-05-26 03:38:34.535610 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-05-26 03:38:34.536919 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-05-26 03:38:34.537162 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-05-26 03:38:34.538111 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-05-26 03:38:34.538496 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-05-26 03:38:34.539453 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-05-26 03:38:34.539979 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-05-26 03:38:34.540680 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-05-26 03:38:34.541135 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-05-26 03:38:34.543355 | orchestrator |
2025-05-26 03:38:34.543379 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-05-26 03:38:34.543393 | orchestrator | Monday 26 May 2025 03:38:34 +0000 (0:00:01.230) 0:06:02.041 ************
2025-05-26 03:38:34.672185 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:38:34.735391 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:38:34.800510 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:38:34.865950 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:38:34.926500 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:38:35.039106 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:38:35.039250 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:38:35.039268 | orchestrator |
2025-05-26 03:38:35.040228 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-05-26 03:38:35.040436 | orchestrator | Monday 26 May 2025 03:38:35 +0000 (0:00:00.507) 0:06:02.549 ************
2025-05-26 03:38:39.008591 | orchestrator | ok: [testbed-manager]
2025-05-26 03:38:39.008833 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:38:39.008900 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:38:39.010982 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:38:39.013109 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:38:39.014006 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:38:39.015194 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:38:39.016454 | orchestrator |
2025-05-26 03:38:39.017723 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-05-26 03:38:39.018863 | orchestrator | Monday 26 May 2025 03:38:39 +0000 (0:00:03.966) 0:06:06.516 ************
2025-05-26 03:38:39.136795 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:38:39.208359 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:38:39.274677 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:38:39.337953 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:38:39.406995 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:38:39.496581 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:38:39.498184 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:38:39.499215 | orchestrator |
2025-05-26 03:38:39.500034 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-05-26 03:38:39.501215 | orchestrator | Monday 26 May 2025 03:38:39 +0000 (0:00:00.489) 0:06:07.005 ************
2025-05-26 03:38:39.564460 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-05-26 03:38:39.645976 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-05-26 03:38:39.648448 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-05-26 03:38:39.649642 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-05-26 03:38:39.713244 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:38:39.714351 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-05-26 03:38:39.715938 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-05-26 03:38:39.778204 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:38:39.779742 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-05-26 03:38:39.781304 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-05-26 03:38:39.854481 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:38:39.854686 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-05-26 03:38:39.855683 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-05-26 03:38:39.918196 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:38:39.919157 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-05-26 03:38:39.920074 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-05-26 03:38:40.019305 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:38:40.019486 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:38:40.021173 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-05-26 03:38:40.024720 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-05-26 03:38:40.026595 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:38:40.028013 | orchestrator |
2025-05-26 03:38:40.029242 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-05-26 03:38:40.029854 | orchestrator | Monday 26 May 2025 03:38:40 +0000 (0:00:00.521) 0:06:07.526 ************
2025-05-26 03:38:40.154923 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:38:40.218402 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:38:40.294548 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:38:40.357215 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:38:40.420983 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:38:40.517204 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:38:40.517664 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:38:40.518830 | orchestrator |
2025-05-26 03:38:40.519815 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-05-26 03:38:40.520931 | orchestrator | Monday 26 May 2025 03:38:40 +0000 (0:00:00.500) 0:06:08.027 ************
2025-05-26 03:38:40.642394 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:38:40.712834 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:38:40.774765 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:38:40.838494 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:38:40.906388 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:38:41.004175 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:38:41.004470 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:38:41.006383 | orchestrator |
2025-05-26 03:38:41.007526 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-05-26 03:38:41.008659 | orchestrator | Monday 26 May 2025 03:38:40 +0000 (0:00:00.486) 0:06:08.513 ************
2025-05-26 03:38:41.305020 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:38:41.369034 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:38:41.430449 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:38:41.510752 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:38:41.574469 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:38:41.685998 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:38:41.686557 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:38:41.688031 | orchestrator |
2025-05-26 03:38:41.689192 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-05-26 03:38:41.690168 | orchestrator | Monday 26 May 2025 03:38:41 +0000 (0:00:00.682) 0:06:09.195 ************
2025-05-26 03:38:43.399238 | orchestrator | ok: [testbed-manager]
2025-05-26 03:38:43.400070 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:38:43.402367 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:38:43.404477 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:38:43.405262 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:38:43.406500 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:38:43.407414 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:38:43.408762 | orchestrator |
2025-05-26 03:38:43.409508 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-05-26 03:38:43.410754 | orchestrator | Monday 26 May 2025 03:38:43 +0000 (0:00:01.711) 0:06:10.906 ************
2025-05-26 03:38:44.235014 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 03:38:44.235136 | orchestrator |
2025-05-26 03:38:44.236309 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-05-26 03:38:44.237904 | orchestrator | Monday 26 May 2025 03:38:44 +0000 (0:00:00.836) 0:06:11.742 ************
2025-05-26 03:38:44.636587 | orchestrator | ok: [testbed-manager]
2025-05-26 03:38:45.216445 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:38:45.217863 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:38:45.218258 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:38:45.218281 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:38:45.218293 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:38:45.219310 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:38:45.221088 | orchestrator |
2025-05-26 03:38:45.224516 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-05-26 03:38:45.224555 | orchestrator | Monday 26 May 2025 03:38:45 +0000 (0:00:00.983) 0:06:12.726 ************
2025-05-26 03:38:45.645486 | orchestrator | ok: [testbed-manager]
2025-05-26 03:38:46.068093 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:38:46.068201 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:38:46.069464 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:38:46.070575 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:38:46.071294 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:38:46.072407 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:38:46.072920 | orchestrator |
2025-05-26 03:38:46.074099 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-05-26 03:38:46.075570 | orchestrator | Monday 26 May 2025 03:38:46 +0000 (0:00:00.849) 0:06:13.576 ************
2025-05-26 03:38:47.448231 | orchestrator | ok: [testbed-manager]
2025-05-26 03:38:47.448969 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:38:47.450210 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:38:47.451574 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:38:47.453029 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:38:47.454065 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:38:47.455092 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:38:47.456106 | orchestrator |
2025-05-26 03:38:47.457010 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-05-26 03:38:47.458165 | orchestrator | Monday 26 May 2025 03:38:47 +0000 (0:00:01.378) 0:06:14.954 ************
2025-05-26 03:38:47.579039 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:38:48.817801 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:38:48.817956 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:38:48.818278 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:38:48.818816 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:38:48.819650 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:38:48.819672 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:38:48.820044 | orchestrator |
2025-05-26 03:38:48.820884 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-05-26 03:38:48.822353 | orchestrator | Monday 26 May 2025 03:38:48 +0000 (0:00:01.371) 0:06:16.326 ************
2025-05-26 03:38:50.121950 | orchestrator | ok: [testbed-manager]
2025-05-26 03:38:50.122130 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:38:50.122250 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:38:50.122944 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:38:50.123974 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:38:50.124386 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:38:50.125242 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:38:50.125793 | orchestrator |
2025-05-26 03:38:50.126911 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-05-26 03:38:50.127846 | orchestrator | Monday 26 May 2025 03:38:50 +0000 (0:00:01.301) 0:06:17.628 ************
2025-05-26 03:38:51.696868 | orchestrator | changed: [testbed-manager]
2025-05-26 03:38:51.697582 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:38:51.698684 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:38:51.699908 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:38:51.700821 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:38:51.702129 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:38:51.702674 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:38:51.703176 | orchestrator |
2025-05-26 03:38:51.703775 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-05-26 03:38:51.704107 | orchestrator | Monday 26 May 2025 03:38:51 +0000 (0:00:01.576) 0:06:19.204 ************
2025-05-26 03:38:52.563815 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 03:38:52.564432 | orchestrator |
2025-05-26 03:38:52.565261 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-05-26 03:38:52.566284 | orchestrator | Monday 26 May 2025 03:38:52 +0000 (0:00:00.869) 0:06:20.074 ************
2025-05-26 03:38:53.956395 | orchestrator | ok: [testbed-manager]
2025-05-26 03:38:53.956957 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:38:53.958236 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:38:53.960207 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:38:53.961174 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:38:53.962109 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:38:53.963261 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:38:53.964584 | orchestrator |
2025-05-26 03:38:53.965669 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-05-26 03:38:53.966430 | orchestrator | Monday 26 May 2025 03:38:53 +0000 (0:00:01.389) 0:06:21.463 ************
2025-05-26 03:38:55.099586 | orchestrator | ok: [testbed-manager]
2025-05-26 03:38:55.099753 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:38:55.100191 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:38:55.100931 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:38:55.101293 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:38:55.103430 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:38:55.103977 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:38:55.104386 | orchestrator |
2025-05-26 03:38:55.105164 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-05-26 03:38:55.105251 | orchestrator | Monday 26 May 2025 03:38:55 +0000 (0:00:01.145) 0:06:22.609 ************
2025-05-26 03:38:56.444857 | orchestrator | ok: [testbed-manager]
2025-05-26 03:38:56.452320 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:38:56.452356 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:38:56.452369 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:38:56.452403 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:38:56.452414 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:38:56.452469 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:38:56.453088 | orchestrator |
2025-05-26 03:38:56.453169 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-05-26 03:38:56.453441 | orchestrator | Monday 26 May 2025 03:38:56 +0000 (0:00:01.341) 0:06:23.951 ************
2025-05-26 03:38:57.593954 | orchestrator | ok: [testbed-manager]
2025-05-26 03:38:57.594197 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:38:57.594315 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:38:57.594660 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:38:57.594681 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:38:57.594953 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:38:57.595187 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:38:57.595486 | orchestrator |
2025-05-26 03:38:57.595506 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-05-26 03:38:57.595920 | orchestrator | Monday 26 May 2025 03:38:57 +0000 (0:00:01.149) 0:06:25.100 ************
2025-05-26 03:38:58.920986 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 03:38:58.921099 | orchestrator |
2025-05-26 03:38:58.921117 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-26 03:38:58.922325 | orchestrator | Monday 26 May 2025 03:38:58 +0000 (0:00:00.875) 0:06:25.976 ************
2025-05-26 03:38:58.924539 | orchestrator |
2025-05-26 03:38:58.924595 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-26 03:38:58.924637 | orchestrator | Monday 26 May 2025 03:38:58 +0000 (0:00:00.037) 0:06:26.013 ************
2025-05-26 03:38:58.925326 | orchestrator |
2025-05-26 03:38:58.925965 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-26 03:38:58.927034 | orchestrator | Monday 26 May 2025 03:38:58 +0000 (0:00:00.036) 0:06:26.050 ************
2025-05-26 03:38:58.927408 | orchestrator |
2025-05-26 03:38:58.928083 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-26 03:38:58.928842 | orchestrator | Monday 26 May 2025 03:38:58 +0000 (0:00:00.044) 0:06:26.094 ************
2025-05-26 03:38:58.929519 | orchestrator |
2025-05-26 03:38:58.929703 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-26 03:38:58.930319 | orchestrator | Monday 26 May 2025 03:38:58 +0000 (0:00:00.037) 0:06:26.132 ************
2025-05-26 03:38:58.931545 | orchestrator |
2025-05-26 03:38:58.932091 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-26 03:38:58.932790 | orchestrator | Monday 26 May 2025 03:38:58 +0000 (0:00:00.041) 0:06:26.174 ************
2025-05-26 03:38:58.933131 | orchestrator |
2025-05-26 03:38:58.933750 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-26 03:38:58.934093 | orchestrator | Monday 26 May 2025 03:38:58 +0000 (0:00:00.213) 0:06:26.387 ************
2025-05-26 03:38:58.934585 | orchestrator |
2025-05-26 03:38:58.935848 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-26 03:38:58.935985 | orchestrator | Monday 26 May 2025 03:38:58 +0000 (0:00:00.039) 0:06:26.427 ************
2025-05-26 03:39:00.094216 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:39:00.094771 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:39:00.095649 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:39:00.097018 | orchestrator |
2025-05-26 03:39:00.098320 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-05-26 03:39:00.099350 | orchestrator | Monday 26 May 2025 03:39:00 +0000 (0:00:01.173) 0:06:27.600 ************
2025-05-26 03:39:01.422251 | orchestrator | changed: [testbed-manager]
2025-05-26 03:39:01.423112 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:39:01.424428 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:39:01.425693 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:39:01.426100 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:39:01.426642 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:39:01.427273 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:39:01.427898 | orchestrator | 2025-05-26 03:39:01.428583 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-05-26 03:39:01.429085 | orchestrator | Monday 26 May 2025 03:39:01 +0000 (0:00:01.328) 0:06:28.928 ************ 2025-05-26 03:39:02.608813 | orchestrator | changed: [testbed-manager] 2025-05-26 03:39:02.609309 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:39:02.611005 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:39:02.611749 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:39:02.612958 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:39:02.613544 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:39:02.614431 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:39:02.615083 | orchestrator | 2025-05-26 03:39:02.615843 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-05-26 03:39:02.617324 | orchestrator | Monday 26 May 2025 03:39:02 +0000 (0:00:01.189) 0:06:30.118 ************ 2025-05-26 03:39:02.741849 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:39:04.841852 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:39:04.842422 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:39:04.843421 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:39:04.844718 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:39:04.845531 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:39:04.846638 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:39:04.847403 | orchestrator | 2025-05-26 03:39:04.848562 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-05-26 03:39:04.850129 | orchestrator | Monday 26 May 2025 
03:39:04 +0000 (0:00:02.229) 0:06:32.348 ************ 2025-05-26 03:39:04.951415 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:39:04.951655 | orchestrator | 2025-05-26 03:39:04.952765 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-05-26 03:39:04.954003 | orchestrator | Monday 26 May 2025 03:39:04 +0000 (0:00:00.112) 0:06:32.460 ************ 2025-05-26 03:39:06.208346 | orchestrator | ok: [testbed-manager] 2025-05-26 03:39:06.208948 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:39:06.209970 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:39:06.211982 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:39:06.212332 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:39:06.212734 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:39:06.213456 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:39:06.213759 | orchestrator | 2025-05-26 03:39:06.214461 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-05-26 03:39:06.215506 | orchestrator | Monday 26 May 2025 03:39:06 +0000 (0:00:01.254) 0:06:33.715 ************ 2025-05-26 03:39:06.360242 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:39:06.427746 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:39:06.492170 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:39:06.563338 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:39:06.632177 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:39:06.753718 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:39:06.754493 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:39:06.755761 | orchestrator | 2025-05-26 03:39:06.756836 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-05-26 03:39:06.757939 | orchestrator | Monday 26 May 2025 03:39:06 +0000 (0:00:00.546) 0:06:34.262 ************ 2025-05-26 
03:39:07.714175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 03:39:07.715861 | orchestrator | 2025-05-26 03:39:07.716458 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-05-26 03:39:07.716544 | orchestrator | Monday 26 May 2025 03:39:07 +0000 (0:00:00.959) 0:06:35.222 ************ 2025-05-26 03:39:08.128343 | orchestrator | ok: [testbed-manager] 2025-05-26 03:39:08.550500 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:39:08.551003 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:39:08.551797 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:39:08.552650 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:39:08.554115 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:39:08.554140 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:39:08.554197 | orchestrator | 2025-05-26 03:39:08.554904 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-05-26 03:39:08.556569 | orchestrator | Monday 26 May 2025 03:39:08 +0000 (0:00:00.838) 0:06:36.060 ************ 2025-05-26 03:39:11.199201 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-05-26 03:39:11.199414 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-05-26 03:39:11.200711 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-05-26 03:39:11.201738 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-05-26 03:39:11.202717 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-05-26 03:39:11.204307 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-05-26 03:39:11.205145 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-05-26 03:39:11.205766 | orchestrator | 
changed: [testbed-node-2] => (item=docker_containers) 2025-05-26 03:39:11.206633 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-05-26 03:39:11.207457 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-05-26 03:39:11.208147 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-05-26 03:39:11.209051 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-05-26 03:39:11.209287 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-05-26 03:39:11.210373 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-05-26 03:39:11.211324 | orchestrator | 2025-05-26 03:39:11.212319 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-05-26 03:39:11.213184 | orchestrator | Monday 26 May 2025 03:39:11 +0000 (0:00:02.646) 0:06:38.706 ************ 2025-05-26 03:39:11.332335 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:39:11.395235 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:39:11.458967 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:39:11.528174 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:39:11.588130 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:39:11.692201 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:39:11.693307 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:39:11.694879 | orchestrator | 2025-05-26 03:39:11.696114 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-05-26 03:39:11.697340 | orchestrator | Monday 26 May 2025 03:39:11 +0000 (0:00:00.493) 0:06:39.200 ************ 2025-05-26 03:39:12.486520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 
03:39:12.487065 | orchestrator | 2025-05-26 03:39:12.488087 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-05-26 03:39:12.489005 | orchestrator | Monday 26 May 2025 03:39:12 +0000 (0:00:00.795) 0:06:39.995 ************ 2025-05-26 03:39:12.911673 | orchestrator | ok: [testbed-manager] 2025-05-26 03:39:12.978344 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:39:13.552410 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:39:13.552813 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:39:13.553648 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:39:13.554459 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:39:13.558332 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:39:13.558764 | orchestrator | 2025-05-26 03:39:13.559567 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-05-26 03:39:13.560057 | orchestrator | Monday 26 May 2025 03:39:13 +0000 (0:00:01.064) 0:06:41.060 ************ 2025-05-26 03:39:13.964481 | orchestrator | ok: [testbed-manager] 2025-05-26 03:39:14.365645 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:39:14.365755 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:39:14.366142 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:39:14.367628 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:39:14.368409 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:39:14.369477 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:39:14.370089 | orchestrator | 2025-05-26 03:39:14.370896 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-05-26 03:39:14.371692 | orchestrator | Monday 26 May 2025 03:39:14 +0000 (0:00:00.813) 0:06:41.874 ************ 2025-05-26 03:39:14.496683 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:39:14.566764 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:39:14.631726 | orchestrator | skipping: [testbed-node-4] 2025-05-26 
03:39:14.710651 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:39:14.779204 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:39:14.872582 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:39:14.873345 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:39:14.877027 | orchestrator | 2025-05-26 03:39:14.877080 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-05-26 03:39:14.877095 | orchestrator | Monday 26 May 2025 03:39:14 +0000 (0:00:00.506) 0:06:42.380 ************ 2025-05-26 03:39:16.339244 | orchestrator | ok: [testbed-manager] 2025-05-26 03:39:16.340134 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:39:16.340734 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:39:16.341924 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:39:16.343903 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:39:16.344853 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:39:16.345980 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:39:16.346955 | orchestrator | 2025-05-26 03:39:16.348146 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-05-26 03:39:16.348407 | orchestrator | Monday 26 May 2025 03:39:16 +0000 (0:00:01.465) 0:06:43.846 ************ 2025-05-26 03:39:16.482845 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:39:16.547754 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:39:16.615227 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:39:16.680089 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:39:16.741251 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:39:17.006318 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:39:17.007334 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:39:17.008136 | orchestrator | 2025-05-26 03:39:17.008586 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-05-26 03:39:17.009376 | 
orchestrator | Monday 26 May 2025 03:39:17 +0000 (0:00:00.669) 0:06:44.516 ************ 2025-05-26 03:39:24.591850 | orchestrator | ok: [testbed-manager] 2025-05-26 03:39:24.591994 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:39:24.592012 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:39:24.593239 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:39:24.594309 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:39:24.595171 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:39:24.596016 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:39:24.597031 | orchestrator | 2025-05-26 03:39:24.598014 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-05-26 03:39:24.599087 | orchestrator | Monday 26 May 2025 03:39:24 +0000 (0:00:07.580) 0:06:52.096 ************ 2025-05-26 03:39:25.914126 | orchestrator | ok: [testbed-manager] 2025-05-26 03:39:25.916096 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:39:25.917302 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:39:25.918383 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:39:25.919484 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:39:25.920337 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:39:25.921047 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:39:25.921963 | orchestrator | 2025-05-26 03:39:25.922642 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-05-26 03:39:25.923523 | orchestrator | Monday 26 May 2025 03:39:25 +0000 (0:00:01.324) 0:06:53.420 ************ 2025-05-26 03:39:27.638956 | orchestrator | ok: [testbed-manager] 2025-05-26 03:39:27.639885 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:39:27.642149 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:39:27.643223 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:39:27.644190 | orchestrator | changed: [testbed-node-0] 2025-05-26 
03:39:27.645221 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:39:27.646085 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:39:27.646749 | orchestrator | 2025-05-26 03:39:27.647516 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-05-26 03:39:27.648276 | orchestrator | Monday 26 May 2025 03:39:27 +0000 (0:00:01.726) 0:06:55.146 ************ 2025-05-26 03:39:29.428041 | orchestrator | ok: [testbed-manager] 2025-05-26 03:39:29.428149 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:39:29.428164 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:39:29.428176 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:39:29.428187 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:39:29.428942 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:39:29.428969 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:39:29.428982 | orchestrator | 2025-05-26 03:39:29.428995 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-26 03:39:29.433189 | orchestrator | Monday 26 May 2025 03:39:29 +0000 (0:00:01.787) 0:06:56.934 ************ 2025-05-26 03:39:29.847632 | orchestrator | ok: [testbed-manager] 2025-05-26 03:39:30.322789 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:39:30.323547 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:39:30.324113 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:39:30.324903 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:39:30.326405 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:39:30.327340 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:39:30.328573 | orchestrator | 2025-05-26 03:39:30.329124 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-26 03:39:30.329915 | orchestrator | Monday 26 May 2025 03:39:30 +0000 (0:00:00.898) 0:06:57.833 ************ 2025-05-26 03:39:30.455232 | orchestrator | skipping: 
[testbed-manager] 2025-05-26 03:39:30.526967 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:39:30.590717 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:39:30.653251 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:39:30.727524 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:39:31.108579 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:39:31.109191 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:39:31.109709 | orchestrator | 2025-05-26 03:39:31.110353 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-05-26 03:39:31.111056 | orchestrator | Monday 26 May 2025 03:39:31 +0000 (0:00:00.785) 0:06:58.618 ************ 2025-05-26 03:39:31.256664 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:39:31.320069 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:39:31.395659 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:39:31.457363 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:39:31.518283 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:39:31.620691 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:39:31.621787 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:39:31.622835 | orchestrator | 2025-05-26 03:39:31.623927 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-05-26 03:39:31.624680 | orchestrator | Monday 26 May 2025 03:39:31 +0000 (0:00:00.513) 0:06:59.131 ************ 2025-05-26 03:39:31.750123 | orchestrator | ok: [testbed-manager] 2025-05-26 03:39:32.000249 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:39:32.063267 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:39:32.128223 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:39:32.198404 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:39:32.316845 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:39:32.317339 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:39:32.318270 | 
orchestrator | 2025-05-26 03:39:32.319240 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-05-26 03:39:32.320019 | orchestrator | Monday 26 May 2025 03:39:32 +0000 (0:00:00.693) 0:06:59.825 ************ 2025-05-26 03:39:32.451089 | orchestrator | ok: [testbed-manager] 2025-05-26 03:39:32.522909 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:39:32.578857 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:39:32.648623 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:39:32.713188 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:39:32.820988 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:39:32.821183 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:39:32.822122 | orchestrator | 2025-05-26 03:39:32.822907 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-05-26 03:39:32.823651 | orchestrator | Monday 26 May 2025 03:39:32 +0000 (0:00:00.504) 0:07:00.330 ************ 2025-05-26 03:39:32.958329 | orchestrator | ok: [testbed-manager] 2025-05-26 03:39:33.026204 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:39:33.099164 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:39:33.172423 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:39:33.238131 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:39:33.366867 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:39:33.368196 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:39:33.369310 | orchestrator | 2025-05-26 03:39:33.370882 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-05-26 03:39:33.372241 | orchestrator | Monday 26 May 2025 03:39:33 +0000 (0:00:00.546) 0:07:00.876 ************ 2025-05-26 03:39:39.075298 | orchestrator | ok: [testbed-manager] 2025-05-26 03:39:39.076060 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:39:39.077519 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:39:39.078909 | orchestrator | ok: 
[testbed-node-5] 2025-05-26 03:39:39.079180 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:39:39.080007 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:39:39.080710 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:39:39.081368 | orchestrator | 2025-05-26 03:39:39.081897 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-05-26 03:39:39.082519 | orchestrator | Monday 26 May 2025 03:39:39 +0000 (0:00:05.707) 0:07:06.584 ************ 2025-05-26 03:39:39.205253 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:39:39.274113 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:39:39.337022 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:39:39.397105 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:39:39.660989 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:39:39.793703 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:39:39.793902 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:39:39.794693 | orchestrator | 2025-05-26 03:39:39.795565 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-05-26 03:39:39.796127 | orchestrator | Monday 26 May 2025 03:39:39 +0000 (0:00:00.718) 0:07:07.303 ************ 2025-05-26 03:39:40.583373 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 03:39:40.584189 | orchestrator | 2025-05-26 03:39:40.584560 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-05-26 03:39:40.585086 | orchestrator | Monday 26 May 2025 03:39:40 +0000 (0:00:00.788) 0:07:08.092 ************ 2025-05-26 03:39:42.332437 | orchestrator | ok: [testbed-manager] 2025-05-26 03:39:42.333430 | orchestrator | ok: [testbed-node-3] 2025-05-26 
03:39:42.334315 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:39:42.335913 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:39:42.337216 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:39:42.338226 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:39:42.338728 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:39:42.339776 | orchestrator | 2025-05-26 03:39:42.340706 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-05-26 03:39:42.341072 | orchestrator | Monday 26 May 2025 03:39:42 +0000 (0:00:01.745) 0:07:09.837 ************ 2025-05-26 03:39:43.492967 | orchestrator | ok: [testbed-manager] 2025-05-26 03:39:43.493070 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:39:43.493939 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:39:43.493981 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:39:43.493994 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:39:43.494006 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:39:43.494102 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:39:43.495030 | orchestrator | 2025-05-26 03:39:43.495721 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-05-26 03:39:43.497636 | orchestrator | Monday 26 May 2025 03:39:43 +0000 (0:00:01.160) 0:07:10.997 ************ 2025-05-26 03:39:43.977987 | orchestrator | ok: [testbed-manager] 2025-05-26 03:39:44.044332 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:39:44.118868 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:39:44.564652 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:39:44.565499 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:39:44.566084 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:39:44.568007 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:39:44.568861 | orchestrator | 2025-05-26 03:39:44.569686 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-05-26 03:39:44.570359 | 
orchestrator | Monday 26 May 2025 03:39:44 +0000 (0:00:01.074) 0:07:12.072 ************ 2025-05-26 03:39:46.284988 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-26 03:39:46.285165 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-26 03:39:46.285793 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-26 03:39:46.286795 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-26 03:39:46.287641 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-26 03:39:46.288858 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-26 03:39:46.289383 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-26 03:39:46.289703 | orchestrator | 2025-05-26 03:39:46.290318 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-05-26 03:39:46.290839 | orchestrator | Monday 26 May 2025 03:39:46 +0000 (0:00:01.719) 0:07:13.792 ************ 2025-05-26 03:39:47.047201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 03:39:47.047364 | orchestrator | 2025-05-26 
03:39:47.050924 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-05-26 03:39:47.050958 | orchestrator | Monday 26 May 2025 03:39:47 +0000 (0:00:00.762) 0:07:14.554 ************ 2025-05-26 03:39:56.043819 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:39:56.044861 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:39:56.048924 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:39:56.049033 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:39:56.049149 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:39:56.049881 | orchestrator | changed: [testbed-manager] 2025-05-26 03:39:56.050907 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:39:56.051686 | orchestrator | 2025-05-26 03:39:56.052727 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-05-26 03:39:56.053219 | orchestrator | Monday 26 May 2025 03:39:56 +0000 (0:00:08.990) 0:07:23.545 ************ 2025-05-26 03:39:57.813520 | orchestrator | ok: [testbed-manager] 2025-05-26 03:39:57.814846 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:39:57.817196 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:39:57.818141 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:39:57.819485 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:39:57.820436 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:39:57.821040 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:39:57.821683 | orchestrator | 2025-05-26 03:39:57.822281 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-05-26 03:39:57.823116 | orchestrator | Monday 26 May 2025 03:39:57 +0000 (0:00:01.777) 0:07:25.322 ************ 2025-05-26 03:39:59.207239 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:39:59.208146 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:39:59.208191 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:39:59.208203 | orchestrator | ok: 
[testbed-node-0] 2025-05-26 03:39:59.208215 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:39:59.210753 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:39:59.210874 | orchestrator | 2025-05-26 03:39:59.210892 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-05-26 03:39:59.210905 | orchestrator | Monday 26 May 2025 03:39:59 +0000 (0:00:01.391) 0:07:26.714 ************ 2025-05-26 03:40:00.544353 | orchestrator | changed: [testbed-manager] 2025-05-26 03:40:00.545065 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:40:00.546493 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:40:00.549956 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:40:00.549991 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:40:00.550003 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:40:00.550058 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:40:00.550268 | orchestrator | 2025-05-26 03:40:00.551238 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-05-26 03:40:00.551980 | orchestrator | 2025-05-26 03:40:00.552318 | orchestrator | TASK [Include hardening role] ************************************************** 2025-05-26 03:40:00.552830 | orchestrator | Monday 26 May 2025 03:40:00 +0000 (0:00:01.338) 0:07:28.052 ************ 2025-05-26 03:40:00.695938 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:40:00.763516 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:40:00.831613 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:40:00.909009 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:40:00.990111 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:40:01.119227 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:40:01.119921 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:40:01.120766 | orchestrator | 2025-05-26 03:40:01.122718 | orchestrator | PLAY [Apply bootstrap roles part 
3] ******************************************** 2025-05-26 03:40:01.122822 | orchestrator | 2025-05-26 03:40:01.123256 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-05-26 03:40:01.123665 | orchestrator | Monday 26 May 2025 03:40:01 +0000 (0:00:00.576) 0:07:28.629 ************ 2025-05-26 03:40:02.480147 | orchestrator | changed: [testbed-manager] 2025-05-26 03:40:02.480941 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:40:02.482994 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:40:02.483064 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:40:02.485009 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:40:02.486089 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:40:02.487072 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:40:02.487821 | orchestrator | 2025-05-26 03:40:02.489293 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-05-26 03:40:02.489316 | orchestrator | Monday 26 May 2025 03:40:02 +0000 (0:00:01.359) 0:07:29.989 ************ 2025-05-26 03:40:04.060072 | orchestrator | ok: [testbed-manager] 2025-05-26 03:40:04.060483 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:40:04.061450 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:40:04.062551 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:40:04.063338 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:40:04.064211 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:40:04.064990 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:40:04.065621 | orchestrator | 2025-05-26 03:40:04.066646 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-05-26 03:40:04.067987 | orchestrator | Monday 26 May 2025 03:40:04 +0000 (0:00:01.577) 0:07:31.566 ************ 2025-05-26 03:40:04.201618 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:40:04.265646 | orchestrator | skipping: [testbed-node-3] 
2025-05-26 03:40:04.334532 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:40:04.416013 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:40:04.484964 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:40:04.879432 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:40:04.879877 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:40:04.881287 | orchestrator | 2025-05-26 03:40:04.884666 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-05-26 03:40:04.884718 | orchestrator | Monday 26 May 2025 03:40:04 +0000 (0:00:00.823) 0:07:32.389 ************ 2025-05-26 03:40:06.192798 | orchestrator | changed: [testbed-manager] 2025-05-26 03:40:06.194788 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:40:06.195788 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:40:06.197777 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:40:06.198807 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:40:06.200056 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:40:06.200588 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:40:06.201728 | orchestrator | 2025-05-26 03:40:06.202286 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-05-26 03:40:06.202784 | orchestrator | 2025-05-26 03:40:06.203800 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-05-26 03:40:06.204184 | orchestrator | Monday 26 May 2025 03:40:06 +0000 (0:00:01.312) 0:07:33.701 ************ 2025-05-26 03:40:07.134485 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 03:40:07.137238 | orchestrator | 2025-05-26 03:40:07.139942 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-26 03:40:07.139970 | orchestrator | Monday 26 
May 2025 03:40:07 +0000 (0:00:00.936) 0:07:34.638 ************ 2025-05-26 03:40:07.576423 | orchestrator | ok: [testbed-manager] 2025-05-26 03:40:08.077807 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:40:08.077916 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:40:08.077992 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:40:08.079188 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:40:08.079215 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:40:08.080040 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:40:08.080425 | orchestrator | 2025-05-26 03:40:08.082544 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-26 03:40:08.083148 | orchestrator | Monday 26 May 2025 03:40:08 +0000 (0:00:00.947) 0:07:35.586 ************ 2025-05-26 03:40:09.200032 | orchestrator | changed: [testbed-manager] 2025-05-26 03:40:09.200162 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:40:09.201045 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:40:09.201557 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:40:09.202206 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:40:09.203134 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:40:09.203722 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:40:09.204390 | orchestrator | 2025-05-26 03:40:09.205107 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-05-26 03:40:09.205334 | orchestrator | Monday 26 May 2025 03:40:09 +0000 (0:00:01.120) 0:07:36.706 ************ 2025-05-26 03:40:10.205788 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 03:40:10.206183 | orchestrator | 2025-05-26 03:40:10.207365 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-26 03:40:10.208676 | orchestrator | Monday 26 May 
2025 03:40:10 +0000 (0:00:01.007) 0:07:37.713 ************ 2025-05-26 03:40:10.638680 | orchestrator | ok: [testbed-manager] 2025-05-26 03:40:11.107320 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:40:11.107925 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:40:11.108859 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:40:11.109808 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:40:11.110173 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:40:11.110829 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:40:11.111553 | orchestrator | 2025-05-26 03:40:11.112094 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-26 03:40:11.112731 | orchestrator | Monday 26 May 2025 03:40:11 +0000 (0:00:00.901) 0:07:38.614 ************ 2025-05-26 03:40:11.522652 | orchestrator | changed: [testbed-manager] 2025-05-26 03:40:12.210949 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:40:12.211693 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:40:12.212290 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:40:12.213168 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:40:12.214106 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:40:12.215393 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:40:12.216145 | orchestrator | 2025-05-26 03:40:12.217105 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-26 03:40:12.217293 | orchestrator | 2025-05-26 03:40:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-26 03:40:12.217325 | orchestrator | 2025-05-26 03:40:12 | INFO  | Please wait and do not abort execution. 
2025-05-26 03:40:12.218415 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-05-26 03:40:12.218842 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-26 03:40:12.219230 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-26 03:40:12.219956 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-26 03:40:12.220458 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-05-26 03:40:12.221027 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-26 03:40:12.221639 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-26 03:40:12.222361 | orchestrator |
2025-05-26 03:40:12.222614 | orchestrator |
2025-05-26 03:40:12.223260 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 03:40:12.223725 | orchestrator | Monday 26 May 2025 03:40:12 +0000 (0:00:01.104) 0:07:39.719 ************
2025-05-26 03:40:12.224080 | orchestrator | ===============================================================================
2025-05-26 03:40:12.226618 | orchestrator | osism.commons.packages : Install required packages --------------------- 74.74s
2025-05-26 03:40:12.226642 | orchestrator | osism.commons.packages : Download required packages -------------------- 35.22s
2025-05-26 03:40:12.226653 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.58s
2025-05-26 03:40:12.226679 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.71s
2025-05-26 03:40:12.226741 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.87s
2025-05-26 03:40:12.227063 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.55s
2025-05-26 03:40:12.227432 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.83s
2025-05-26 03:40:12.227946 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.29s
2025-05-26 03:40:12.228643 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.99s
2025-05-26 03:40:12.229158 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.46s
2025-05-26 03:40:12.229364 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.21s
2025-05-26 03:40:12.229862 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.83s
2025-05-26 03:40:12.230426 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.59s
2025-05-26 03:40:12.230594 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.58s
2025-05-26 03:40:12.230926 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.47s
2025-05-26 03:40:12.231265 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.21s
2025-05-26 03:40:12.232014 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.10s
2025-05-26 03:40:12.232511 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.88s
2025-05-26 03:40:12.233458 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.71s
2025-05-26 03:40:12.233761 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.67s
2025-05-26 03:40:12.893035 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-05-26 03:40:12.893134 |
orchestrator | + osism apply network 2025-05-26 03:40:14.986381 | orchestrator | Registering Redlock._acquired_script 2025-05-26 03:40:14.987326 | orchestrator | Registering Redlock._extend_script 2025-05-26 03:40:14.987369 | orchestrator | Registering Redlock._release_script 2025-05-26 03:40:15.048950 | orchestrator | 2025-05-26 03:40:15 | INFO  | Task 390ae5c4-e55e-450b-b35a-025275b24eb2 (network) was prepared for execution. 2025-05-26 03:40:15.049051 | orchestrator | 2025-05-26 03:40:15 | INFO  | It takes a moment until task 390ae5c4-e55e-450b-b35a-025275b24eb2 (network) has been started and output is visible here. 2025-05-26 03:40:19.276195 | orchestrator | 2025-05-26 03:40:19.278072 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-05-26 03:40:19.282707 | orchestrator | 2025-05-26 03:40:19.282903 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-05-26 03:40:19.282926 | orchestrator | Monday 26 May 2025 03:40:19 +0000 (0:00:00.265) 0:00:00.266 ************ 2025-05-26 03:40:19.422519 | orchestrator | ok: [testbed-manager] 2025-05-26 03:40:19.499977 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:40:19.576621 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:40:19.652107 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:40:19.826280 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:40:19.959922 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:40:19.960132 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:40:19.961105 | orchestrator | 2025-05-26 03:40:19.961870 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-05-26 03:40:19.962754 | orchestrator | Monday 26 May 2025 03:40:19 +0000 (0:00:00.683) 0:00:00.949 ************ 2025-05-26 03:40:21.139494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-26 03:40:21.139920 | orchestrator | 2025-05-26 03:40:21.141066 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-05-26 03:40:21.144739 | orchestrator | Monday 26 May 2025 03:40:21 +0000 (0:00:01.178) 0:00:02.128 ************ 2025-05-26 03:40:23.031550 | orchestrator | ok: [testbed-manager] 2025-05-26 03:40:23.031769 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:40:23.031859 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:40:23.031876 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:40:23.033532 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:40:23.033932 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:40:23.035363 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:40:23.036157 | orchestrator | 2025-05-26 03:40:23.037253 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-05-26 03:40:23.039928 | orchestrator | Monday 26 May 2025 03:40:23 +0000 (0:00:01.895) 0:00:04.023 ************ 2025-05-26 03:40:24.711986 | orchestrator | ok: [testbed-manager] 2025-05-26 03:40:24.712105 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:40:24.712263 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:40:24.712281 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:40:24.712710 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:40:24.713091 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:40:24.715756 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:40:24.715778 | orchestrator | 2025-05-26 03:40:24.716448 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-05-26 03:40:24.717309 | orchestrator | Monday 26 May 2025 03:40:24 +0000 (0:00:01.675) 0:00:05.699 ************ 2025-05-26 03:40:25.212794 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-26 03:40:25.213211 | 
orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-26 03:40:25.687401 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-26 03:40:25.689054 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-26 03:40:25.690459 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-26 03:40:25.691529 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-26 03:40:25.692270 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-26 03:40:25.693235 | orchestrator | 2025-05-26 03:40:25.694149 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-26 03:40:25.695100 | orchestrator | Monday 26 May 2025 03:40:25 +0000 (0:00:00.980) 0:00:06.679 ************ 2025-05-26 03:40:29.190374 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-26 03:40:29.190731 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-26 03:40:29.192387 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-26 03:40:29.193767 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-26 03:40:29.194688 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-26 03:40:29.195971 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-26 03:40:29.197010 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-26 03:40:29.197749 | orchestrator | 2025-05-26 03:40:29.198771 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-26 03:40:29.199401 | orchestrator | Monday 26 May 2025 03:40:29 +0000 (0:00:03.499) 0:00:10.179 ************ 2025-05-26 03:40:30.817601 | orchestrator | changed: [testbed-manager] 2025-05-26 03:40:30.817879 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:40:30.819168 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:40:30.819588 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:40:30.823025 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:40:30.823059 | 
orchestrator | changed: [testbed-node-4] 2025-05-26 03:40:30.823071 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:40:30.823084 | orchestrator | 2025-05-26 03:40:30.823097 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-05-26 03:40:30.823111 | orchestrator | Monday 26 May 2025 03:40:30 +0000 (0:00:01.626) 0:00:11.805 ************ 2025-05-26 03:40:32.461437 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-26 03:40:32.462844 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-26 03:40:32.466614 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-26 03:40:32.466647 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-26 03:40:32.467796 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-26 03:40:32.469180 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-26 03:40:32.470081 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-26 03:40:32.470874 | orchestrator | 2025-05-26 03:40:32.471888 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-05-26 03:40:32.472654 | orchestrator | Monday 26 May 2025 03:40:32 +0000 (0:00:01.646) 0:00:13.452 ************ 2025-05-26 03:40:32.889161 | orchestrator | ok: [testbed-manager] 2025-05-26 03:40:33.173679 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:40:33.592937 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:40:33.595176 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:40:33.596962 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:40:33.596988 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:40:33.597885 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:40:33.599053 | orchestrator | 2025-05-26 03:40:33.600602 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-05-26 03:40:33.601099 | orchestrator | Monday 26 May 2025 03:40:33 +0000 (0:00:01.128) 0:00:14.580 ************ 2025-05-26 03:40:33.755957 
| orchestrator | skipping: [testbed-manager] 2025-05-26 03:40:33.837613 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:40:33.920953 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:40:33.997836 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:40:34.076077 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:40:34.221017 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:40:34.221244 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:40:34.221979 | orchestrator | 2025-05-26 03:40:34.222696 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-26 03:40:34.223069 | orchestrator | Monday 26 May 2025 03:40:34 +0000 (0:00:00.632) 0:00:15.213 ************ 2025-05-26 03:40:36.291906 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:40:36.292424 | orchestrator | ok: [testbed-manager] 2025-05-26 03:40:36.293349 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:40:36.296043 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:40:36.297051 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:40:36.299214 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:40:36.301449 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:40:36.301936 | orchestrator | 2025-05-26 03:40:36.304403 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-26 03:40:36.305032 | orchestrator | Monday 26 May 2025 03:40:36 +0000 (0:00:02.064) 0:00:17.277 ************ 2025-05-26 03:40:36.552698 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:40:36.644086 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:40:36.721618 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:40:36.813485 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:40:37.209453 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:40:37.210637 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:40:37.211936 | orchestrator | changed: [testbed-manager] => 
(item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-26 03:40:37.213301 | orchestrator | 2025-05-26 03:40:37.214119 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-05-26 03:40:37.218179 | orchestrator | Monday 26 May 2025 03:40:37 +0000 (0:00:00.924) 0:00:18.202 ************ 2025-05-26 03:40:39.015209 | orchestrator | ok: [testbed-manager] 2025-05-26 03:40:39.016153 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:40:39.017977 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:40:39.018815 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:40:39.019530 | orchestrator | changed: [testbed-node-4] 2025-05-26 03:40:39.024762 | orchestrator | changed: [testbed-node-3] 2025-05-26 03:40:39.025523 | orchestrator | changed: [testbed-node-5] 2025-05-26 03:40:39.026226 | orchestrator | 2025-05-26 03:40:39.026924 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-26 03:40:39.027537 | orchestrator | Monday 26 May 2025 03:40:39 +0000 (0:00:01.797) 0:00:20.000 ************ 2025-05-26 03:40:40.265479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-26 03:40:40.269859 | orchestrator | 2025-05-26 03:40:40.269896 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-26 03:40:40.269910 | orchestrator | Monday 26 May 2025 03:40:40 +0000 (0:00:01.252) 0:00:21.253 ************ 2025-05-26 03:40:40.789057 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:40:41.375659 | orchestrator | ok: [testbed-manager] 2025-05-26 03:40:41.379125 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:40:41.379160 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:40:41.379172 | 
orchestrator | ok: [testbed-node-3] 2025-05-26 03:40:41.379184 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:40:41.379240 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:40:41.380128 | orchestrator | 2025-05-26 03:40:41.380809 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-26 03:40:41.381429 | orchestrator | Monday 26 May 2025 03:40:41 +0000 (0:00:01.109) 0:00:22.362 ************ 2025-05-26 03:40:41.546168 | orchestrator | ok: [testbed-manager] 2025-05-26 03:40:41.637025 | orchestrator | ok: [testbed-node-0] 2025-05-26 03:40:41.725393 | orchestrator | ok: [testbed-node-1] 2025-05-26 03:40:41.812181 | orchestrator | ok: [testbed-node-2] 2025-05-26 03:40:41.898070 | orchestrator | ok: [testbed-node-3] 2025-05-26 03:40:42.028824 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:40:42.028914 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:40:42.029216 | orchestrator | 2025-05-26 03:40:42.030114 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-26 03:40:42.030361 | orchestrator | Monday 26 May 2025 03:40:42 +0000 (0:00:00.654) 0:00:23.016 ************ 2025-05-26 03:40:42.682689 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-26 03:40:42.682824 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-26 03:40:42.682840 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-26 03:40:42.682851 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-26 03:40:42.682863 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-26 03:40:42.682948 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-26 03:40:42.682963 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-26 03:40:42.682974 | 
orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-26 03:40:42.770763 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-26 03:40:42.770939 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-26 03:40:43.249632 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-26 03:40:43.254457 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-26 03:40:43.254503 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-26 03:40:43.254517 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-26 03:40:43.254529 | orchestrator | 2025-05-26 03:40:43.254542 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-26 03:40:43.254579 | orchestrator | Monday 26 May 2025 03:40:43 +0000 (0:00:01.220) 0:00:24.236 ************ 2025-05-26 03:40:43.419769 | orchestrator | skipping: [testbed-manager] 2025-05-26 03:40:43.501512 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:40:43.582975 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:40:43.665062 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:40:43.744486 | orchestrator | skipping: [testbed-node-3] 2025-05-26 03:40:43.879779 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:40:43.880745 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:40:43.882224 | orchestrator | 2025-05-26 03:40:43.883425 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-05-26 03:40:43.883781 | orchestrator | Monday 26 May 2025 03:40:43 +0000 (0:00:00.635) 0:00:24.872 ************ 2025-05-26 03:40:47.596124 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, 
testbed-node-2, testbed-node-0, testbed-node-4, testbed-node-1, testbed-node-3, testbed-node-5 2025-05-26 03:40:47.597914 | orchestrator | 2025-05-26 03:40:47.599673 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-05-26 03:40:47.601037 | orchestrator | Monday 26 May 2025 03:40:47 +0000 (0:00:03.710) 0:00:28.583 ************ 2025-05-26 03:40:52.386268 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-05-26 03:40:52.387144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-05-26 03:40:52.388033 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-05-26 03:40:52.390699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-05-26 03:40:52.391094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-05-26 03:40:52.392732 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-05-26 03:40:52.393134 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-05-26 03:40:52.394807 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-05-26 03:40:52.395784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-05-26 03:40:52.396606 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-05-26 03:40:52.397076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-05-26 03:40:52.398170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-05-26 
03:40:52.398643 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-05-26 03:40:52.399253 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-05-26 03:40:52.400178 | orchestrator |
2025-05-26 03:40:52.400735 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-05-26 03:40:52.401147 | orchestrator | Monday 26 May 2025 03:40:52 +0000 (0:00:04.789) 0:00:33.372 ************
2025-05-26 03:40:57.735746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-05-26 03:40:57.736345 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-05-26 03:40:57.738099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-05-26 03:40:57.739775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-05-26 03:40:57.740911 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-05-26 03:40:57.742281 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-05-26 03:40:57.743042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-05-26 03:40:57.743885 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-05-26 03:40:57.744502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-05-26 03:40:57.745410 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-05-26 03:40:57.746349 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-05-26 03:40:57.746851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-05-26 03:40:57.747786 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-05-26 03:40:57.748067 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-05-26 03:40:57.748529 | orchestrator |
2025-05-26 03:40:57.749200 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2025-05-26 03:40:57.749735 | orchestrator | Monday 26 May 2025 03:40:57 +0000 (0:00:05.350) 0:00:38.722 ************
2025-05-26 03:40:59.120303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-26 03:40:59.120487 | orchestrator |
2025-05-26 03:40:59.121391 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-05-26 03:40:59.121906 | orchestrator | Monday 26 May 2025 03:40:59 +0000 (0:00:01.384) 0:00:40.107 ************
2025-05-26 03:40:59.610487 | orchestrator | ok: [testbed-manager]
2025-05-26 03:40:59.700394 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:41:00.131804 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:41:00.132021 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:41:00.136406 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:41:00.136738 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:41:00.137785 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:41:00.138839 | orchestrator |
2025-05-26 03:41:00.139798 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-05-26 03:41:00.140902 | orchestrator | Monday 26 May 2025 03:41:00 +0000 (0:00:01.013) 0:00:41.121 ************
2025-05-26 03:41:00.213284 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2025-05-26 03:41:00.213467 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-05-26 03:41:00.331393 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2025-05-26 03:41:00.332016 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-05-26 03:41:00.332932 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2025-05-26 03:41:00.336741 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-05-26 03:41:00.337568 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2025-05-26 03:41:00.338375 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-05-26 03:41:00.456116 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:41:00.456299 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2025-05-26 03:41:00.456956 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-05-26 03:41:00.457235 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2025-05-26 03:41:00.458006 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-05-26 03:41:00.768895 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:41:00.769299 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2025-05-26 03:41:00.770202 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-05-26 03:41:00.771201 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2025-05-26 03:41:00.775090 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-05-26 03:41:00.865230 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:41:00.866201 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2025-05-26 03:41:00.866641 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-05-26 03:41:00.869011 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2025-05-26 03:41:00.978864 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:41:00.979502 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-05-26 03:41:00.980700 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2025-05-26 03:41:00.981458 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-05-26 03:41:00.982287 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2025-05-26 03:41:00.982987 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-05-26 03:41:02.205231 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:41:02.208141 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:41:02.208172 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2025-05-26 03:41:02.208216 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-05-26 03:41:02.208274 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2025-05-26 03:41:02.209003 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-05-26 03:41:02.209845 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:41:02.210363 | orchestrator |
2025-05-26 03:41:02.211279 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2025-05-26 03:41:02.211699 | orchestrator | Monday 26 May 2025 03:41:02 +0000 (0:00:02.070) 0:00:43.191 ************
2025-05-26 03:41:02.361948 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:41:02.438201 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:41:02.515622 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:41:02.599135 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:41:02.678800 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:41:02.795483 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:41:02.800062 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:41:02.800111 | orchestrator |
2025-05-26 03:41:02.800848 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-05-26 03:41:02.801270 | orchestrator | Monday 26 May 2025 03:41:02 +0000 (0:00:00.595) 0:00:43.787 ************
2025-05-26 03:41:03.121910 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:41:03.215878 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:41:03.299881 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:41:03.380433 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:41:03.476421 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:41:03.515479 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:41:03.516614 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:41:03.517487 | orchestrator |
2025-05-26 03:41:03.518721 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 03:41:03.519037 | orchestrator | 2025-05-26 03:41:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-26 03:41:03.519939 | orchestrator | 2025-05-26 03:41:03 | INFO  | Please wait and do not abort execution.
2025-05-26 03:41:03.520157 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-26 03:41:03.521126 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-26 03:41:03.521690 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-26 03:41:03.522321 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-26 03:41:03.523238 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-26 03:41:03.524878 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-26 03:41:03.525203 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-26 03:41:03.525901 | orchestrator |
2025-05-26 03:41:03.526222 | orchestrator |
2025-05-26 03:41:03.527011 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 03:41:03.527217 | orchestrator | Monday 26 May 2025 03:41:03 +0000 (0:00:00.720) 0:00:44.507 ************
2025-05-26 03:41:03.527698 | orchestrator | ===============================================================================
2025-05-26 03:41:03.528121 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.35s
2025-05-26 03:41:03.528566 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.79s
2025-05-26 03:41:03.529029 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.71s
2025-05-26 03:41:03.529415 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.50s
2025-05-26 03:41:03.530344 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.07s
2025-05-26 03:41:03.530518 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.06s
2025-05-26 03:41:03.531062 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.90s
2025-05-26 03:41:03.531638 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.80s
2025-05-26 03:41:03.532285 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.68s
2025-05-26 03:41:03.532771 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.65s
2025-05-26 03:41:03.533106 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.63s
2025-05-26 03:41:03.533483 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.38s
2025-05-26 03:41:03.533917 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.25s
2025-05-26 03:41:03.534319 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.22s
2025-05-26 03:41:03.534781 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.18s
2025-05-26 03:41:03.535067 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.13s
2025-05-26 03:41:03.535379 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.11s
2025-05-26 03:41:03.535737 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.01s
2025-05-26 03:41:03.536060 | orchestrator | osism.commons.network : Create required directories --------------------- 0.98s
2025-05-26 03:41:03.536500 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.92s
2025-05-26 03:41:04.193935 | orchestrator | + osism apply wireguard
2025-05-26 03:41:05.867839 | orchestrator | Registering Redlock._acquired_script
2025-05-26 03:41:05.867940 | orchestrator | Registering Redlock._extend_script
2025-05-26 03:41:05.867954 | orchestrator | Registering Redlock._release_script
2025-05-26 03:41:05.928025 | orchestrator | 2025-05-26 03:41:05 | INFO  | Task e3962274-059d-45f1-a7f0-87660ac12e4a (wireguard) was prepared for execution.
2025-05-26 03:41:05.928119 | orchestrator | 2025-05-26 03:41:05 | INFO  | It takes a moment until task e3962274-059d-45f1-a7f0-87660ac12e4a (wireguard) has been started and output is visible here.
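For orientation, the "Create systemd networkd netdev/network files" tasks above render file pairs like the following. This is an illustrative sketch only, filled in with the logged item values for testbed-manager's vxlan0 (vni 42, mtu 1350, local_ip 192.168.16.5, address 192.168.112.5/20); the actual templates belong to the osism.commons.network role, and the `dests` list is presumably realized separately (e.g. as forwarding entries toward the other nodes), which is not shown here. The file names follow the `30-vxlan0.*` pattern visible in the cleanup task.

```ini
# /etc/systemd/network/30-vxlan0.netdev -- illustrative sketch, not the role's template
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
```

```ini
# /etc/systemd/network/30-vxlan0.network -- illustrative sketch
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20
```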
2025-05-26 03:41:10.238698 | orchestrator |
2025-05-26 03:41:10.238815 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-05-26 03:41:10.239708 | orchestrator |
2025-05-26 03:41:10.241660 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-05-26 03:41:10.242250 | orchestrator | Monday 26 May 2025 03:41:10 +0000 (0:00:00.235) 0:00:00.235 ************
2025-05-26 03:41:11.662521 | orchestrator | ok: [testbed-manager]
2025-05-26 03:41:11.663726 | orchestrator |
2025-05-26 03:41:11.665296 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-05-26 03:41:11.665726 | orchestrator | Monday 26 May 2025 03:41:11 +0000 (0:00:01.425) 0:00:01.661 ************
2025-05-26 03:41:18.790529 | orchestrator | changed: [testbed-manager]
2025-05-26 03:41:18.791827 | orchestrator |
2025-05-26 03:41:18.793441 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-05-26 03:41:18.793467 | orchestrator | Monday 26 May 2025 03:41:18 +0000 (0:00:07.128) 0:00:08.789 ************
2025-05-26 03:41:19.359933 | orchestrator | changed: [testbed-manager]
2025-05-26 03:41:19.360097 | orchestrator |
2025-05-26 03:41:19.361696 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-05-26 03:41:19.362319 | orchestrator | Monday 26 May 2025 03:41:19 +0000 (0:00:00.569) 0:00:09.359 ************
2025-05-26 03:41:19.771343 | orchestrator | changed: [testbed-manager]
2025-05-26 03:41:19.772177 | orchestrator |
2025-05-26 03:41:19.772824 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-05-26 03:41:19.773620 | orchestrator | Monday 26 May 2025 03:41:19 +0000 (0:00:00.410) 0:00:09.770 ************
2025-05-26 03:41:20.498095 | orchestrator | ok: [testbed-manager]
2025-05-26 03:41:20.499003 | orchestrator |
2025-05-26 03:41:20.499846 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-05-26 03:41:20.501142 | orchestrator | Monday 26 May 2025 03:41:20 +0000 (0:00:00.726) 0:00:10.496 ************
2025-05-26 03:41:20.918656 | orchestrator | ok: [testbed-manager]
2025-05-26 03:41:20.919909 | orchestrator |
2025-05-26 03:41:20.920202 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-05-26 03:41:20.920225 | orchestrator | Monday 26 May 2025 03:41:20 +0000 (0:00:00.421) 0:00:10.917 ************
2025-05-26 03:41:21.378212 | orchestrator | ok: [testbed-manager]
2025-05-26 03:41:21.378325 | orchestrator |
2025-05-26 03:41:21.378354 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-05-26 03:41:21.378367 | orchestrator | Monday 26 May 2025 03:41:21 +0000 (0:00:00.457) 0:00:11.374 ************
2025-05-26 03:41:22.588182 | orchestrator | changed: [testbed-manager]
2025-05-26 03:41:22.589090 | orchestrator |
2025-05-26 03:41:22.589862 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-05-26 03:41:22.590872 | orchestrator | Monday 26 May 2025 03:41:22 +0000 (0:00:01.212) 0:00:12.587 ************
2025-05-26 03:41:23.448115 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-26 03:41:23.448288 | orchestrator | changed: [testbed-manager]
2025-05-26 03:41:23.448887 | orchestrator |
2025-05-26 03:41:23.449294 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-05-26 03:41:23.449822 | orchestrator | Monday 26 May 2025 03:41:23 +0000 (0:00:00.860) 0:00:13.447 ************
2025-05-26 03:41:25.235326 | orchestrator | changed: [testbed-manager]
2025-05-26 03:41:25.235535 | orchestrator |
2025-05-26 03:41:25.236387 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-05-26 03:41:25.237294 | orchestrator | Monday 26 May 2025 03:41:25 +0000 (0:00:01.786) 0:00:15.234 ************
2025-05-26 03:41:26.214273 | orchestrator | changed: [testbed-manager]
2025-05-26 03:41:26.214533 | orchestrator |
2025-05-26 03:41:26.216034 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 03:41:26.216692 | orchestrator | 2025-05-26 03:41:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-26 03:41:26.216948 | orchestrator | 2025-05-26 03:41:26 | INFO  | Please wait and do not abort execution.
2025-05-26 03:41:26.218100 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 03:41:26.218736 | orchestrator |
2025-05-26 03:41:26.219937 | orchestrator |
2025-05-26 03:41:26.220971 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 03:41:26.221750 | orchestrator | Monday 26 May 2025 03:41:26 +0000 (0:00:00.979) 0:00:16.213 ************
2025-05-26 03:41:26.222318 | orchestrator | ===============================================================================
2025-05-26 03:41:26.223038 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.13s
2025-05-26 03:41:26.223540 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.79s
2025-05-26 03:41:26.224231 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.43s
2025-05-26 03:41:26.224947 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.21s
2025-05-26 03:41:26.225344 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.98s
2025-05-26 03:41:26.226123 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.86s
2025-05-26 03:41:26.226386 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.73s
2025-05-26 03:41:26.226874 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s
2025-05-26 03:41:26.227368 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.46s
2025-05-26 03:41:26.227854 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.42s
2025-05-26 03:41:26.228310 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s
2025-05-26 03:41:27.035306 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-05-26 03:41:27.072659 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-05-26 03:41:27.072740 | orchestrator | Dload Upload Total Spent Left Speed
2025-05-26 03:41:27.158644 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 162 0 --:--:-- --:--:-- --:--:-- 164
2025-05-26 03:41:27.172101 | orchestrator | + osism apply --environment custom workarounds
2025-05-26 03:41:28.918354 | orchestrator | 2025-05-26 03:41:28 | INFO  | Trying to run play workarounds in environment custom
2025-05-26 03:41:28.923772 | orchestrator | Registering Redlock._acquired_script
2025-05-26 03:41:28.923846 | orchestrator | Registering Redlock._extend_script
2025-05-26 03:41:28.923869 | orchestrator | Registering Redlock._release_script
2025-05-26 03:41:28.987325 | orchestrator | 2025-05-26 03:41:28 | INFO  | Task 06c30c90-923e-48f3-8e08-39c7a8ec9a63 (workarounds) was prepared for execution.
2025-05-26 03:41:28.987386 | orchestrator | 2025-05-26 03:41:28 | INFO  | It takes a moment until task 06c30c90-923e-48f3-8e08-39c7a8ec9a63 (workarounds) has been started and output is visible here.
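For context on the "Copy wg0.conf configuration file" and "Copy client configuration files" tasks in the wireguard play above, a wg-quick server configuration generally has the following shape. Everything below is a placeholder sketch: the keys generated on testbed-manager and the addresses and port actually used by the testbed do not appear in this log.

```ini
# /etc/wireguard/wg0.conf -- placeholder sketch, not the deployed file
[Interface]
Address = <server VPN address>
ListenPort = 51820
PrivateKey = <generated server private key>

[Peer]
PublicKey = <client public key>
PresharedKey = <generated preshared key>
AllowedIPs = <client VPN address>/32
```

The play's sequence maps directly onto this: key generation ("Create public and private key - server", "Create preshared key"), reading the keys back ("Get ..."), templating wg0.conf and the matching client files, then enabling `wg-quick@wg0.service` and restarting wg0 via the handler.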
2025-05-26 03:41:32.891436 | orchestrator |
2025-05-26 03:41:32.893326 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-26 03:41:32.893410 | orchestrator |
2025-05-26 03:41:32.895250 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-05-26 03:41:32.896284 | orchestrator | Monday 26 May 2025 03:41:32 +0000 (0:00:00.146) 0:00:00.147 ************
2025-05-26 03:41:33.056804 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-05-26 03:41:33.138521 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-05-26 03:41:33.220733 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-05-26 03:41:33.301654 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-05-26 03:41:33.477216 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-05-26 03:41:33.644303 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-05-26 03:41:33.644504 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-05-26 03:41:33.645419 | orchestrator |
2025-05-26 03:41:33.646404 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-05-26 03:41:33.648674 | orchestrator |
2025-05-26 03:41:33.649068 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-05-26 03:41:33.649780 | orchestrator | Monday 26 May 2025 03:41:33 +0000 (0:00:00.756) 0:00:00.903 ************
2025-05-26 03:41:36.218522 | orchestrator | ok: [testbed-manager]
2025-05-26 03:41:36.219809 | orchestrator |
2025-05-26 03:41:36.219851 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-05-26 03:41:36.221856 | orchestrator |
2025-05-26 03:41:36.221880 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-05-26 03:41:36.222652 | orchestrator | Monday 26 May 2025 03:41:36 +0000 (0:00:02.568) 0:00:03.472 ************
2025-05-26 03:41:38.176441 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:41:38.177838 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:41:38.180093 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:41:38.181905 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:41:38.182805 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:41:38.183260 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:41:38.184414 | orchestrator |
2025-05-26 03:41:38.184752 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-05-26 03:41:38.185685 | orchestrator |
2025-05-26 03:41:38.186288 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-05-26 03:41:38.187028 | orchestrator | Monday 26 May 2025 03:41:38 +0000 (0:00:01.957) 0:00:05.429 ************
2025-05-26 03:41:39.640143 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-26 03:41:39.640252 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-26 03:41:39.640677 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-26 03:41:39.641748 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-26 03:41:39.642578 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-26 03:41:39.642996 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-26 03:41:39.643566 | orchestrator |
2025-05-26 03:41:39.643988 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-05-26 03:41:39.644360 | orchestrator | Monday 26 May 2025 03:41:39 +0000 (0:00:01.464) 0:00:06.893 ************
2025-05-26 03:41:43.503874 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:41:43.505250 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:41:43.505442 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:41:43.507071 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:41:43.507770 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:41:43.508332 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:41:43.509068 | orchestrator |
2025-05-26 03:41:43.510274 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-05-26 03:41:43.511253 | orchestrator | Monday 26 May 2025 03:41:43 +0000 (0:00:03.866) 0:00:10.760 ************
2025-05-26 03:41:43.663499 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:41:43.735225 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:41:43.807970 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:41:43.886639 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:41:44.169707 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:41:44.169920 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:41:44.171017 | orchestrator |
2025-05-26 03:41:44.172094 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-05-26 03:41:44.172548 | orchestrator |
2025-05-26 03:41:44.174228 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-05-26 03:41:44.175967 | orchestrator | Monday 26 May 2025 03:41:44 +0000 (0:00:00.665) 0:00:11.426 ************
2025-05-26 03:41:45.817864 | orchestrator | changed: [testbed-manager]
2025-05-26 03:41:45.817971 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:41:45.818138 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:41:45.818201 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:41:45.819430 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:41:45.824637 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:41:45.825582 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:41:45.828085 | orchestrator |
2025-05-26 03:41:45.829930 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-05-26 03:41:45.832425 | orchestrator | Monday 26 May 2025 03:41:45 +0000 (0:00:01.646) 0:00:13.073 ************
2025-05-26 03:41:47.452374 | orchestrator | changed: [testbed-manager]
2025-05-26 03:41:47.452487 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:41:47.452509 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:41:47.452654 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:41:47.453166 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:41:47.454151 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:41:47.456105 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:41:47.456815 | orchestrator |
2025-05-26 03:41:47.457491 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-05-26 03:41:47.458267 | orchestrator | Monday 26 May 2025 03:41:47 +0000 (0:00:01.621) 0:00:14.694 ************
2025-05-26 03:41:48.951176 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:41:48.951566 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:41:48.952639 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:41:48.953231 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:41:48.953896 | orchestrator | ok: [testbed-manager]
2025-05-26 03:41:48.954482 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:41:48.957460 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:41:48.957483 | orchestrator |
2025-05-26 03:41:48.957496 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-05-26 03:41:48.957511 | orchestrator | Monday 26 May 2025 03:41:48 +0000 (0:00:01.509) 0:00:16.204 ************
2025-05-26 03:41:50.784107 | orchestrator | changed: [testbed-manager]
2025-05-26 03:41:50.784345 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:41:50.785333 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:41:50.792722 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:41:50.793697 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:41:50.794600 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:41:50.795744 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:41:50.795968 | orchestrator |
2025-05-26 03:41:50.797130 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-05-26 03:41:50.797952 | orchestrator | Monday 26 May 2025 03:41:50 +0000 (0:00:01.833) 0:00:18.038 ************
2025-05-26 03:41:50.941747 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:41:51.014326 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:41:51.110226 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:41:51.187664 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:41:51.264882 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:41:51.383745 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:41:51.385571 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:41:51.386900 | orchestrator |
2025-05-26 03:41:51.388099 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-05-26 03:41:51.389300 | orchestrator |
2025-05-26 03:41:51.390883 | orchestrator | TASK [Install python3-docker] **************************************************
2025-05-26 03:41:51.392697 | orchestrator | Monday 26 May 2025 03:41:51 +0000 (0:00:00.600) 0:00:18.639 ************
2025-05-26 03:41:54.141048 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:41:54.141909 | orchestrator | ok: [testbed-manager]
2025-05-26 03:41:54.142070 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:41:54.142870 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:41:54.143605 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:41:54.145521 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:41:54.147210 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:41:54.147878 | orchestrator |
2025-05-26 03:41:54.149109 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 03:41:54.149165 | orchestrator | 2025-05-26 03:41:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-26 03:41:54.149217 | orchestrator | 2025-05-26 03:41:54 | INFO  | Please wait and do not abort execution.
2025-05-26 03:41:54.149678 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-26 03:41:54.150724 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:41:54.151130 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:41:54.151784 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:41:54.152285 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:41:54.153351 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:41:54.153379 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:41:54.153859 | orchestrator |
2025-05-26 03:41:54.154341 | orchestrator |
2025-05-26 03:41:54.154737 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 03:41:54.154897 | orchestrator | Monday 26 May 2025 03:41:54 +0000 (0:00:02.757) 0:00:21.397 ************
2025-05-26 03:41:54.155331 | orchestrator | ===============================================================================
2025-05-26 03:41:54.155756 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.87s
2025-05-26 03:41:54.156058 | orchestrator | Install python3-docker -------------------------------------------------- 2.76s
2025-05-26 03:41:54.156589 | orchestrator | Apply netplan configuration --------------------------------------------- 2.57s
2025-05-26 03:41:54.156928 | orchestrator | Apply netplan configuration --------------------------------------------- 1.96s
2025-05-26 03:41:54.157293 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.83s
2025-05-26 03:41:54.157759 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.65s
2025-05-26 03:41:54.158010 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.62s
2025-05-26 03:41:54.158453 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.51s
2025-05-26 03:41:54.158844 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.46s
2025-05-26 03:41:54.159320 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.76s
2025-05-26 03:41:54.159718 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.67s
2025-05-26 03:41:54.160498 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.60s
2025-05-26 03:41:54.743101 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-05-26 03:41:56.413366 | orchestrator | Registering Redlock._acquired_script
2025-05-26 03:41:56.413469 | orchestrator | Registering Redlock._extend_script
2025-05-26 03:41:56.413486 | orchestrator | Registering Redlock._release_script
2025-05-26 03:41:56.470408 | orchestrator | 2025-05-26 03:41:56 | INFO
| Task 9ae34b55-441a-4b18-b59f-054a79f405c9 (reboot) was prepared for execution. 2025-05-26 03:41:56.470499 | orchestrator | 2025-05-26 03:41:56 | INFO  | It takes a moment until task 9ae34b55-441a-4b18-b59f-054a79f405c9 (reboot) has been started and output is visible here. 2025-05-26 03:42:00.495571 | orchestrator | 2025-05-26 03:42:00.495678 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-26 03:42:00.496233 | orchestrator | 2025-05-26 03:42:00.497658 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-26 03:42:00.498670 | orchestrator | Monday 26 May 2025 03:42:00 +0000 (0:00:00.227) 0:00:00.227 ************ 2025-05-26 03:42:00.597179 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:42:00.597623 | orchestrator | 2025-05-26 03:42:00.598303 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-26 03:42:00.599087 | orchestrator | Monday 26 May 2025 03:42:00 +0000 (0:00:00.108) 0:00:00.335 ************ 2025-05-26 03:42:01.498067 | orchestrator | changed: [testbed-node-0] 2025-05-26 03:42:01.498204 | orchestrator | 2025-05-26 03:42:01.498318 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-26 03:42:01.498337 | orchestrator | Monday 26 May 2025 03:42:01 +0000 (0:00:00.900) 0:00:01.236 ************ 2025-05-26 03:42:01.615919 | orchestrator | skipping: [testbed-node-0] 2025-05-26 03:42:01.617330 | orchestrator | 2025-05-26 03:42:01.618759 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-26 03:42:01.619451 | orchestrator | 2025-05-26 03:42:01.620324 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-26 03:42:01.621174 | orchestrator | Monday 26 May 2025 03:42:01 +0000 (0:00:00.114) 0:00:01.350 ************ 2025-05-26 03:42:01.708329 | 
orchestrator | skipping: [testbed-node-1] 2025-05-26 03:42:01.712558 | orchestrator | 2025-05-26 03:42:01.712697 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-26 03:42:01.714012 | orchestrator | Monday 26 May 2025 03:42:01 +0000 (0:00:00.095) 0:00:01.446 ************ 2025-05-26 03:42:02.340047 | orchestrator | changed: [testbed-node-1] 2025-05-26 03:42:02.340487 | orchestrator | 2025-05-26 03:42:02.341186 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-26 03:42:02.342065 | orchestrator | Monday 26 May 2025 03:42:02 +0000 (0:00:00.632) 0:00:02.078 ************ 2025-05-26 03:42:02.446358 | orchestrator | skipping: [testbed-node-1] 2025-05-26 03:42:02.446845 | orchestrator | 2025-05-26 03:42:02.448520 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-26 03:42:02.449024 | orchestrator | 2025-05-26 03:42:02.450277 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-26 03:42:02.451351 | orchestrator | Monday 26 May 2025 03:42:02 +0000 (0:00:00.104) 0:00:02.182 ************ 2025-05-26 03:42:02.651108 | orchestrator | skipping: [testbed-node-2] 2025-05-26 03:42:02.652857 | orchestrator | 2025-05-26 03:42:02.652891 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-26 03:42:02.653700 | orchestrator | Monday 26 May 2025 03:42:02 +0000 (0:00:00.206) 0:00:02.389 ************ 2025-05-26 03:42:03.292991 | orchestrator | changed: [testbed-node-2] 2025-05-26 03:42:03.294122 | orchestrator | 2025-05-26 03:42:03.295050 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-26 03:42:03.295534 | orchestrator | Monday 26 May 2025 03:42:03 +0000 (0:00:00.642) 0:00:03.031 ************ 2025-05-26 03:42:03.417856 | orchestrator | skipping: [testbed-node-2] 
2025-05-26 03:42:03.418136 | orchestrator |
2025-05-26 03:42:03.419103 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-26 03:42:03.420011 | orchestrator |
2025-05-26 03:42:03.420866 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-26 03:42:03.421678 | orchestrator | Monday 26 May 2025 03:42:03 +0000 (0:00:00.124) 0:00:03.156 ************
2025-05-26 03:42:03.510505 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:42:03.510974 | orchestrator |
2025-05-26 03:42:03.511877 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-26 03:42:03.512531 | orchestrator | Monday 26 May 2025 03:42:03 +0000 (0:00:00.092) 0:00:03.248 ************
2025-05-26 03:42:04.180257 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:42:04.181081 | orchestrator |
2025-05-26 03:42:04.181323 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-26 03:42:04.182346 | orchestrator | Monday 26 May 2025 03:42:04 +0000 (0:00:00.669) 0:00:03.917 ************
2025-05-26 03:42:04.300408 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:42:04.300956 | orchestrator |
2025-05-26 03:42:04.302012 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-26 03:42:04.303121 | orchestrator |
2025-05-26 03:42:04.304165 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-26 03:42:04.305180 | orchestrator | Monday 26 May 2025 03:42:04 +0000 (0:00:00.119) 0:00:04.036 ************
2025-05-26 03:42:04.416783 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:42:04.418180 | orchestrator |
2025-05-26 03:42:04.419584 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-26 03:42:04.420904 | orchestrator | Monday 26 May 2025 03:42:04 +0000 (0:00:00.118) 0:00:04.155 ************
2025-05-26 03:42:05.145239 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:42:05.146950 | orchestrator |
2025-05-26 03:42:05.147503 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-26 03:42:05.148589 | orchestrator | Monday 26 May 2025 03:42:05 +0000 (0:00:00.726) 0:00:04.881 ************
2025-05-26 03:42:05.254772 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:42:05.255674 | orchestrator |
2025-05-26 03:42:05.256013 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-26 03:42:05.257823 | orchestrator |
2025-05-26 03:42:05.257856 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-26 03:42:05.258561 | orchestrator | Monday 26 May 2025 03:42:05 +0000 (0:00:00.108) 0:00:04.990 ************
2025-05-26 03:42:05.351921 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:42:05.352164 | orchestrator |
2025-05-26 03:42:05.353275 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-26 03:42:05.354173 | orchestrator | Monday 26 May 2025 03:42:05 +0000 (0:00:00.099) 0:00:05.090 ************
2025-05-26 03:42:06.026479 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:42:06.026596 | orchestrator |
2025-05-26 03:42:06.027265 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-26 03:42:06.028197 | orchestrator | Monday 26 May 2025 03:42:06 +0000 (0:00:00.674) 0:00:05.764 ************
2025-05-26 03:42:06.059421 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:42:06.060046 | orchestrator |
2025-05-26 03:42:06.062685 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 03:42:06.062770 | orchestrator | 2025-05-26 03:42:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-26 03:42:06.062788 | orchestrator | 2025-05-26 03:42:06 | INFO  | Please wait and do not abort execution.
2025-05-26 03:42:06.064049 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:42:06.065031 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:42:06.066104 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:42:06.067302 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:42:06.067979 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:42:06.069130 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:42:06.069544 | orchestrator |
2025-05-26 03:42:06.070403 | orchestrator |
2025-05-26 03:42:06.071185 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 03:42:06.071834 | orchestrator | Monday 26 May 2025 03:42:06 +0000 (0:00:00.034) 0:00:05.798 ************
2025-05-26 03:42:06.072548 | orchestrator | ===============================================================================
2025-05-26 03:42:06.073009 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.25s
2025-05-26 03:42:06.073825 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.72s
2025-05-26 03:42:06.074499 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.61s
2025-05-26 03:42:06.586422 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-05-26 03:42:08.286956 | orchestrator | Registering Redlock._acquired_script
2025-05-26 03:42:08.287078 | orchestrator | Registering Redlock._extend_script
2025-05-26 03:42:08.287092 | orchestrator | Registering Redlock._release_script
2025-05-26 03:42:08.348826 | orchestrator | 2025-05-26 03:42:08 | INFO  | Task cd2cde5b-ebb9-4793-a30d-ebb22034f648 (wait-for-connection) was prepared for execution.
2025-05-26 03:42:08.348939 | orchestrator | 2025-05-26 03:42:08 | INFO  | It takes a moment until task cd2cde5b-ebb9-4793-a30d-ebb22034f648 (wait-for-connection) has been started and output is visible here.
2025-05-26 03:42:12.292165 | orchestrator |
2025-05-26 03:42:12.293111 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-05-26 03:42:12.296190 | orchestrator |
2025-05-26 03:42:12.297984 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-05-26 03:42:12.299386 | orchestrator | Monday 26 May 2025 03:42:12 +0000 (0:00:00.231) 0:00:00.231 ************
2025-05-26 03:42:24.227349 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:42:24.227524 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:42:24.227567 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:42:24.227691 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:42:24.227710 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:42:24.227721 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:42:24.227772 | orchestrator |
2025-05-26 03:42:24.227833 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 03:42:24.229042 | orchestrator | 2025-05-26 03:42:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-26 03:42:24.229093 | orchestrator | 2025-05-26 03:42:24 | INFO  | Please wait and do not abort execution.
2025-05-26 03:42:24.229181 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 03:42:24.231446 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 03:42:24.231522 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 03:42:24.231537 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 03:42:24.231550 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 03:42:24.231562 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 03:42:24.231782 | orchestrator |
2025-05-26 03:42:24.231996 | orchestrator |
2025-05-26 03:42:24.232333 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 03:42:24.232768 | orchestrator | Monday 26 May 2025 03:42:24 +0000 (0:00:11.931) 0:00:12.162 ************
2025-05-26 03:42:24.233155 | orchestrator | ===============================================================================
2025-05-26 03:42:24.233439 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.93s
2025-05-26 03:42:24.775327 | orchestrator | + osism apply hddtemp
2025-05-26 03:42:26.442351 | orchestrator | Registering Redlock._acquired_script
2025-05-26 03:42:26.442454 | orchestrator | Registering Redlock._extend_script
2025-05-26 03:42:26.442468 | orchestrator | Registering Redlock._release_script
2025-05-26 03:42:26.500154 | orchestrator | 2025-05-26 03:42:26 | INFO  | Task 2a556d89-f6e6-460e-acc4-b2ddbc2e6c47 (hddtemp) was prepared for execution.
2025-05-26 03:42:26.500282 | orchestrator | 2025-05-26 03:42:26 | INFO  | It takes a moment until task 2a556d89-f6e6-460e-acc4-b2ddbc2e6c47 (hddtemp) has been started and output is visible here.
2025-05-26 03:42:30.511897 | orchestrator |
2025-05-26 03:42:30.512076 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-05-26 03:42:30.512298 | orchestrator |
2025-05-26 03:42:30.515817 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-05-26 03:42:30.517385 | orchestrator | Monday 26 May 2025 03:42:30 +0000 (0:00:00.261) 0:00:00.261 ************
2025-05-26 03:42:30.670081 | orchestrator | ok: [testbed-manager]
2025-05-26 03:42:30.765395 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:42:30.842071 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:42:30.918600 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:42:31.107033 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:42:31.295458 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:42:31.296993 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:42:31.299280 | orchestrator |
2025-05-26 03:42:31.300391 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-05-26 03:42:31.301835 | orchestrator | Monday 26 May 2025 03:42:31 +0000 (0:00:00.780) 0:00:01.042 ************
2025-05-26 03:42:32.470397 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-26 03:42:32.473812 | orchestrator |
2025-05-26 03:42:32.473849 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-05-26 03:42:32.475191 | orchestrator | Monday 26 May 2025 03:42:32 +0000 (0:00:01.176) 0:00:02.218 ************
2025-05-26 03:42:34.353875 | orchestrator | ok: [testbed-manager]
2025-05-26 03:42:34.358739 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:42:34.360507 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:42:34.364888 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:42:34.366063 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:42:34.366875 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:42:34.367971 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:42:34.368808 | orchestrator |
2025-05-26 03:42:34.369562 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-05-26 03:42:34.370276 | orchestrator | Monday 26 May 2025 03:42:34 +0000 (0:00:01.885) 0:00:04.103 ************
2025-05-26 03:42:34.955500 | orchestrator | changed: [testbed-manager]
2025-05-26 03:42:35.037408 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:42:35.475233 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:42:35.478453 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:42:35.479656 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:42:35.480268 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:42:35.481039 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:42:35.482676 | orchestrator |
2025-05-26 03:42:35.485751 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-05-26 03:42:35.486139 | orchestrator | Monday 26 May 2025 03:42:35 +0000 (0:00:01.118) 0:00:05.222 ************
2025-05-26 03:42:36.562333 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:42:36.565270 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:42:36.565308 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:42:36.565321 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:42:36.566141 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:42:36.567513 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:42:36.568771 | orchestrator | ok: [testbed-manager]
2025-05-26 03:42:36.569521 | orchestrator |
2025-05-26 03:42:36.570341 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-05-26 03:42:36.571291 | orchestrator | Monday 26 May 2025 03:42:36 +0000 (0:00:01.085) 0:00:06.307 ************
2025-05-26 03:42:36.993358 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:42:37.077214 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:42:37.158241 | orchestrator | changed: [testbed-manager]
2025-05-26 03:42:37.233653 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:42:37.371437 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:42:37.372808 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:42:37.375159 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:42:37.376560 | orchestrator |
2025-05-26 03:42:37.378361 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-05-26 03:42:37.379611 | orchestrator | Monday 26 May 2025 03:42:37 +0000 (0:00:00.815) 0:00:07.123 ************
2025-05-26 03:42:49.557002 | orchestrator | changed: [testbed-manager]
2025-05-26 03:42:49.557122 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:42:49.557147 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:42:49.557165 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:42:49.557184 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:42:49.557558 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:42:49.558170 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:42:49.559042 | orchestrator |
2025-05-26 03:42:49.559775 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-05-26 03:42:49.560382 | orchestrator | Monday 26 May 2025 03:42:49 +0000 (0:00:12.174) 0:00:19.298 ************
2025-05-26 03:42:50.735508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-26 03:42:50.735819 | orchestrator |
2025-05-26 03:42:50.736673 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-05-26 03:42:50.737076 | orchestrator | Monday 26 May 2025 03:42:50 +0000 (0:00:01.185) 0:00:20.483 ************
2025-05-26 03:42:52.542539 | orchestrator | changed: [testbed-manager]
2025-05-26 03:42:52.542648 | orchestrator | changed: [testbed-node-1]
2025-05-26 03:42:52.542662 | orchestrator | changed: [testbed-node-0]
2025-05-26 03:42:52.542885 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:42:52.544777 | orchestrator | changed: [testbed-node-2]
2025-05-26 03:42:52.546971 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:42:52.547283 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:42:52.548884 | orchestrator |
2025-05-26 03:42:52.549845 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 03:42:52.550128 | orchestrator | 2025-05-26 03:42:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-26 03:42:52.550372 | orchestrator | 2025-05-26 03:42:52 | INFO  | Please wait and do not abort execution.
2025-05-26 03:42:52.551409 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 03:42:52.552372 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-26 03:42:52.552890 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-26 03:42:52.553497 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-26 03:42:52.554345 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-26 03:42:52.555467 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-26 03:42:52.556412 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-26 03:42:52.557396 | orchestrator |
2025-05-26 03:42:52.558105 | orchestrator |
2025-05-26 03:42:52.559011 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 03:42:52.559236 | orchestrator | Monday 26 May 2025 03:42:52 +0000 (0:00:01.807) 0:00:22.291 ************
2025-05-26 03:42:52.560083 | orchestrator | ===============================================================================
2025-05-26 03:42:52.560864 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.17s
2025-05-26 03:42:52.561689 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.89s
2025-05-26 03:42:52.562343 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.81s
2025-05-26 03:42:52.562903 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.19s
2025-05-26 03:42:52.563787 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.18s
2025-05-26 03:42:52.564554 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.12s
2025-05-26 03:42:52.565250 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.09s
2025-05-26 03:42:52.565768 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.82s
2025-05-26 03:42:52.567532 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.78s
2025-05-26 03:42:53.156518 | orchestrator | + sudo systemctl restart docker-compose@manager
2025-05-26 03:42:54.620670 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-26 03:42:54.621577 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-26 03:42:54.621618 | orchestrator | + local max_attempts=60
2025-05-26 03:42:54.621641 | orchestrator | + local name=ceph-ansible
2025-05-26 03:42:54.621661 | orchestrator | + local attempt_num=1
2025-05-26 03:42:54.621682 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-26 03:42:54.655336 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-26 03:42:54.655434 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-26 03:42:54.655448 | orchestrator | + local max_attempts=60
2025-05-26 03:42:54.655460 | orchestrator | + local name=kolla-ansible
2025-05-26 03:42:54.655472 | orchestrator | + local attempt_num=1
2025-05-26 03:42:54.655483 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-26 03:42:54.688107 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-26 03:42:54.688182 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-26 03:42:54.688196 | orchestrator | + local max_attempts=60
2025-05-26 03:42:54.688207 | orchestrator | + local name=osism-ansible
2025-05-26 03:42:54.688218 | orchestrator | + local attempt_num=1
2025-05-26 03:42:54.688230 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-26 03:42:54.714380 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-26 03:42:54.714448 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-26 03:42:54.714462 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-26 03:42:54.867168 | orchestrator | ARA in ceph-ansible already disabled.
2025-05-26 03:42:55.010175 | orchestrator | ARA in kolla-ansible already disabled.
2025-05-26 03:42:55.171564 | orchestrator | ARA in osism-ansible already disabled.
2025-05-26 03:42:55.367952 | orchestrator | ARA in osism-kubernetes already disabled.
2025-05-26 03:42:55.368489 | orchestrator | + osism apply gather-facts
2025-05-26 03:42:57.026931 | orchestrator | Registering Redlock._acquired_script
2025-05-26 03:42:57.027032 | orchestrator | Registering Redlock._extend_script
2025-05-26 03:42:57.027047 | orchestrator | Registering Redlock._release_script
2025-05-26 03:42:57.091420 | orchestrator | 2025-05-26 03:42:57 | INFO  | Task b6ec0e15-9425-430e-8a2c-d63e5d48d596 (gather-facts) was prepared for execution.
2025-05-26 03:42:57.091513 | orchestrator | 2025-05-26 03:42:57 | INFO  | It takes a moment until task b6ec0e15-9425-430e-8a2c-d63e5d48d596 (gather-facts) has been started and output is visible here.
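The `set -x` trace above shows a shell helper, `wait_for_container_healthy`, being called with a maximum attempt count and a container name, and checking `docker inspect -f '{{.State.Health.Status}}'` against `healthy`. The actual script lives in the testbed configuration repository, so the following is only a minimal reconstruction from the traced variable names (`max_attempts`, `name`, `attempt_num`); the loop body, error message, and sleep interval are assumptions:

```shell
#!/usr/bin/env bash
# Query a container's health state; mirrors the
# `docker inspect -f '{{.State.Health.Status}}' <name>` call seen in the trace.
container_health_status() {
    docker inspect -f '{{.State.Health.Status}}' "$1"
}

# Poll until the named container reports "healthy", giving up after
# max_attempts tries (the job above calls this with 60).
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    while [[ "$(container_health_status "$name")" != healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the log all three containers (ceph-ansible, kolla-ansible, osism-ansible) report `healthy` on the first probe, so the loop body never runs and the trace shows only one `docker inspect` per call.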
2025-05-26 03:43:00.828157 | orchestrator |
2025-05-26 03:43:00.829914 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-26 03:43:00.830632 | orchestrator |
2025-05-26 03:43:00.831416 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-26 03:43:00.833599 | orchestrator | Monday 26 May 2025 03:43:00 +0000 (0:00:00.165) 0:00:00.165 ************
2025-05-26 03:43:05.609141 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:43:05.609249 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:43:05.610065 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:43:05.610704 | orchestrator | ok: [testbed-manager]
2025-05-26 03:43:05.611140 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:43:05.611559 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:43:05.612561 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:43:05.612582 | orchestrator |
2025-05-26 03:43:05.613216 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-26 03:43:05.613576 | orchestrator |
2025-05-26 03:43:05.614069 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-26 03:43:05.614586 | orchestrator | Monday 26 May 2025 03:43:05 +0000 (0:00:04.784) 0:00:04.949 ************
2025-05-26 03:43:05.754611 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:43:05.835074 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:43:05.915166 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:43:05.990100 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:43:06.067043 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:43:06.102815 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:43:06.103053 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:43:06.103810 | orchestrator |
2025-05-26 03:43:06.105128 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 03:43:06.105364 | orchestrator | 2025-05-26 03:43:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-26 03:43:06.105873 | orchestrator | 2025-05-26 03:43:06 | INFO  | Please wait and do not abort execution.
2025-05-26 03:43:06.106525 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-26 03:43:06.107137 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-26 03:43:06.107810 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-26 03:43:06.108810 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-26 03:43:06.109552 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-26 03:43:06.110265 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-26 03:43:06.111611 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-26 03:43:06.112357 | orchestrator |
2025-05-26 03:43:06.113255 | orchestrator |
2025-05-26 03:43:06.113604 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 03:43:06.114444 | orchestrator | Monday 26 May 2025 03:43:06 +0000 (0:00:00.494) 0:00:05.444 ************
2025-05-26 03:43:06.115152 | orchestrator | ===============================================================================
2025-05-26 03:43:06.117309 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.78s
2025-05-26 03:43:06.118013 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s
2025-05-26 03:43:06.736023 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-05-26 03:43:06.749511 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-05-26 03:43:06.761427 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-05-26 03:43:06.771976 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-05-26 03:43:06.783081 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-05-26 03:43:06.794819 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-05-26 03:43:06.810321 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-05-26 03:43:06.827352 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-05-26 03:43:06.845763 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-05-26 03:43:06.862295 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-05-26 03:43:06.876069 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-05-26 03:43:06.888641 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-05-26 03:43:06.905878 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-05-26 03:43:06.922990 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-05-26 03:43:06.938138 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-05-26 03:43:06.955357 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-05-26 03:43:06.969493 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-05-26 03:43:06.990624 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-05-26 03:43:07.006569 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-05-26 03:43:07.022363 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-05-26 03:43:07.039789 | orchestrator | + [[ false == \t\r\u\e ]]
2025-05-26 03:43:07.311420 | orchestrator | ok: Runtime: 0:24:51.177179
2025-05-26 03:43:07.424508 |
2025-05-26 03:43:07.424686 | TASK [Deploy services]
2025-05-26 03:43:07.958931 | orchestrator | skipping: Conditional result was False
2025-05-26 03:43:07.977000 |
2025-05-26 03:43:07.977178 | TASK [Deploy in a nutshell]
2025-05-26 03:43:08.646590 | orchestrator | + set -e
2025-05-26 03:43:08.646768 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-26 03:43:08.646788 | orchestrator | ++ export INTERACTIVE=false
2025-05-26 03:43:08.646804 | orchestrator | ++ INTERACTIVE=false
2025-05-26 03:43:08.646813 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-26 03:43:08.646821 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-26 03:43:08.646831 | orchestrator | + source /opt/manager-vars.sh
2025-05-26 03:43:08.646862 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-26 03:43:08.646881 | orchestrator | ++
NUMBER_OF_NODES=6 2025-05-26 03:43:08.646891 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-26 03:43:08.646901 | orchestrator | ++ CEPH_VERSION=reef 2025-05-26 03:43:08.646910 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-26 03:43:08.646922 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-26 03:43:08.646930 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-26 03:43:08.646944 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-26 03:43:08.646952 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-26 03:43:08.646961 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-26 03:43:08.646969 | orchestrator | ++ export ARA=false 2025-05-26 03:43:08.646976 | orchestrator | ++ ARA=false 2025-05-26 03:43:08.646984 | orchestrator | ++ export TEMPEST=true 2025-05-26 03:43:08.646991 | orchestrator | ++ TEMPEST=true 2025-05-26 03:43:08.646998 | orchestrator | ++ export IS_ZUUL=true 2025-05-26 03:43:08.647006 | orchestrator | ++ IS_ZUUL=true 2025-05-26 03:43:08.647013 | orchestrator | 2025-05-26 03:43:08.647021 | orchestrator | # PULL IMAGES 2025-05-26 03:43:08.647028 | orchestrator | 2025-05-26 03:43:08.647036 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.90 2025-05-26 03:43:08.647043 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.90 2025-05-26 03:43:08.647051 | orchestrator | ++ export EXTERNAL_API=false 2025-05-26 03:43:08.647058 | orchestrator | ++ EXTERNAL_API=false 2025-05-26 03:43:08.647065 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-26 03:43:08.647072 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-26 03:43:08.647079 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-26 03:43:08.647087 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-26 03:43:08.647094 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-26 03:43:08.647102 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-26 03:43:08.647109 | orchestrator | + echo 2025-05-26 03:43:08.647116 | orchestrator | + echo '# PULL IMAGES' 
2025-05-26 03:43:08.647124 | orchestrator | + echo 2025-05-26 03:43:08.647284 | orchestrator | ++ semver latest 7.0.0 2025-05-26 03:43:08.705923 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-26 03:43:08.706012 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-26 03:43:08.706065 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-05-26 03:43:10.368542 | orchestrator | 2025-05-26 03:43:10 | INFO  | Trying to run play pull-images in environment custom 2025-05-26 03:43:10.372985 | orchestrator | Registering Redlock._acquired_script 2025-05-26 03:43:10.373017 | orchestrator | Registering Redlock._extend_script 2025-05-26 03:43:10.373028 | orchestrator | Registering Redlock._release_script 2025-05-26 03:43:10.432148 | orchestrator | 2025-05-26 03:43:10 | INFO  | Task de0f2d62-56b8-47a0-a0ee-860cefc44250 (pull-images) was prepared for execution. 2025-05-26 03:43:10.432241 | orchestrator | 2025-05-26 03:43:10 | INFO  | It takes a moment until task de0f2d62-56b8-47a0-a0ee-860cefc44250 (pull-images) has been started and output is visible here. 
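The `ln -sf` block earlier in this trace maps each numbered script under /opt/configuration/scripts to a short command in /usr/local/bin (e.g. deploy/300-openstack.sh becomes deploy-openstack). A hedged sketch of that mapping, where the helper `link_scripts` and its loop are illustrative only — the actual testbed script lists every link explicitly (some names, like deploy-helper, do not follow the mechanical pattern):

```shell
# Illustrative only: the testbed script spells out each symlink explicitly.
# link_scripts is a hypothetical helper that derives the command name from
# the script name, e.g. deploy/300-openstack.sh -> deploy-openstack.
link_scripts() {
    local prefix=$1 dir=$2
    shift 2
    for script in "$@"; do
        local name
        name=$(basename "$script" .sh)   # 300-openstack.sh -> 300-openstack
        name=${name#[0-9][0-9][0-9]-}    # strip the numeric ordering prefix
        # echo instead of executing, so the sketch is safe to run anywhere
        echo sudo ln -sf "/opt/configuration/scripts/$dir/$script" "/usr/local/bin/$prefix-$name"
    done
}

link_scripts deploy deploy 300-openstack.sh 400-monitoring.sh
link_scripts upgrade upgrade 300-openstack.sh
```

The numeric prefixes (100-, 200-, 300-, ...) encode the intended deployment order; stripping them for the command names keeps the CLI readable while the files stay sorted.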
2025-05-26 03:43:14.313235 | orchestrator |
2025-05-26 03:43:14.313361 | orchestrator | PLAY [Pull images] *************************************************************
2025-05-26 03:43:14.315279 | orchestrator |
2025-05-26 03:43:14.315691 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-05-26 03:43:14.316591 | orchestrator | Monday 26 May 2025 03:43:14 +0000 (0:00:00.148) 0:00:00.148 ************
2025-05-26 03:44:23.932691 | orchestrator | changed: [testbed-manager]
2025-05-26 03:44:23.932905 | orchestrator |
2025-05-26 03:44:23.932931 | orchestrator | TASK [Pull other images] *******************************************************
2025-05-26 03:44:23.932944 | orchestrator | Monday 26 May 2025 03:44:23 +0000 (0:01:09.617) 0:01:09.765 ************
2025-05-26 03:45:17.835776 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-05-26 03:45:17.835960 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-05-26 03:45:17.835980 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-05-26 03:45:17.835995 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-05-26 03:45:17.836624 | orchestrator | changed: [testbed-manager] => (item=common)
2025-05-26 03:45:17.836983 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-05-26 03:45:17.837818 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-05-26 03:45:17.838428 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-05-26 03:45:17.838627 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-05-26 03:45:17.839139 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-05-26 03:45:17.839701 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-05-26 03:45:17.840099 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-05-26 03:45:17.840709 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-05-26 03:45:17.841171 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-05-26 03:45:17.841943 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-05-26 03:45:17.842355 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-05-26 03:45:17.843091 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-05-26 03:45:17.843370 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-05-26 03:45:17.843900 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-05-26 03:45:17.844403 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-05-26 03:45:17.844703 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-05-26 03:45:17.845195 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-05-26 03:45:17.845652 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-05-26 03:45:17.846250 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-05-26 03:45:17.846541 | orchestrator |
2025-05-26 03:45:17.847058 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 03:45:17.848032 | orchestrator | 2025-05-26 03:45:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-26 03:45:17.848060 | orchestrator | 2025-05-26 03:45:17 | INFO  | Please wait and do not abort execution.
2025-05-26 03:45:17.848074 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 03:45:17.848553 | orchestrator |
2025-05-26 03:45:17.849008 | orchestrator |
2025-05-26 03:45:17.849413 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 03:45:17.849784 | orchestrator | Monday 26 May 2025 03:45:17 +0000 (0:00:53.903) 0:02:03.669 ************
2025-05-26 03:45:17.850153 | orchestrator | ===============================================================================
2025-05-26 03:45:17.850494 | orchestrator | Pull keystone image ---------------------------------------------------- 69.62s
2025-05-26 03:45:17.850927 | orchestrator | Pull other images ------------------------------------------------------ 53.90s
2025-05-26 03:45:20.123467 | orchestrator | 2025-05-26 03:45:20 | INFO  | Trying to run play wipe-partitions in environment custom
2025-05-26 03:45:20.128291 | orchestrator | Registering Redlock._acquired_script
2025-05-26 03:45:20.128353 | orchestrator | Registering Redlock._extend_script
2025-05-26 03:45:20.128366 | orchestrator | Registering Redlock._release_script
2025-05-26 03:45:20.186393 | orchestrator | 2025-05-26 03:45:20 | INFO  | Task f32d3775-559e-4848-80e8-f630d302fb3d (wipe-partitions) was prepared for execution.
2025-05-26 03:45:20.186458 | orchestrator | 2025-05-26 03:45:20 | INFO  | It takes a moment until task f32d3775-559e-4848-80e8-f630d302fb3d (wipe-partitions) has been started and output is visible here.
2025-05-26 03:45:24.130395 | orchestrator |
2025-05-26 03:45:24.131714 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-05-26 03:45:24.134107 | orchestrator |
2025-05-26 03:45:24.134715 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-05-26 03:45:24.135110 | orchestrator | Monday 26 May 2025 03:45:24 +0000 (0:00:00.131) 0:00:00.131 ************
2025-05-26 03:45:24.728828 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:45:24.729022 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:45:24.729042 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:45:24.729114 | orchestrator |
2025-05-26 03:45:24.729450 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-05-26 03:45:24.730463 | orchestrator | Monday 26 May 2025 03:45:24 +0000 (0:00:00.604) 0:00:00.735 ************
2025-05-26 03:45:24.904101 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:25.008752 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:45:25.010594 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:45:25.011299 | orchestrator |
2025-05-26 03:45:25.015829 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-05-26 03:45:25.016564 | orchestrator | Monday 26 May 2025 03:45:25 +0000 (0:00:00.278) 0:00:01.013 ************
2025-05-26 03:45:25.787822 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:45:25.788126 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:45:25.790462 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:45:25.792600 | orchestrator |
2025-05-26 03:45:25.792635 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-05-26 03:45:25.792648 | orchestrator | Monday 26 May 2025 03:45:25 +0000 (0:00:00.781) 0:00:01.794 ************
2025-05-26 03:45:25.948568 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:26.065035 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:45:26.065150 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:45:26.065651 | orchestrator |
2025-05-26 03:45:26.066213 | orchestrator | TASK [Check device availability] ***********************************************
2025-05-26 03:45:26.067970 | orchestrator | Monday 26 May 2025 03:45:26 +0000 (0:00:00.277) 0:00:02.072 ************
2025-05-26 03:45:27.211152 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-26 03:45:27.211315 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-26 03:45:27.211342 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-26 03:45:27.211634 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-26 03:45:27.211752 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-26 03:45:27.211933 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-26 03:45:27.212297 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-26 03:45:27.212600 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-26 03:45:27.213400 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-26 03:45:27.213617 | orchestrator |
2025-05-26 03:45:27.214143 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-05-26 03:45:27.214420 | orchestrator | Monday 26 May 2025 03:45:27 +0000 (0:00:01.146) 0:00:03.218 ************
2025-05-26 03:45:28.462462 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-05-26 03:45:28.462716 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-05-26 03:45:28.463025 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-05-26 03:45:28.463212 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-05-26 03:45:28.463624 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-05-26 03:45:28.463985 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-05-26 03:45:28.464226 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-05-26 03:45:28.464629 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-05-26 03:45:28.464995 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-05-26 03:45:28.465308 | orchestrator |
2025-05-26 03:45:28.465624 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-05-26 03:45:28.468542 | orchestrator | Monday 26 May 2025 03:45:28 +0000 (0:00:01.248) 0:00:04.467 ************
2025-05-26 03:45:30.648731 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-26 03:45:30.649733 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-26 03:45:30.651550 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-26 03:45:30.653850 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-26 03:45:30.655185 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-26 03:45:30.659497 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-26 03:45:30.659533 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-26 03:45:30.659545 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-26 03:45:30.659557 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-26 03:45:30.659569 | orchestrator |
2025-05-26 03:45:30.659583 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-05-26 03:45:30.660564 | orchestrator | Monday 26 May 2025 03:45:30 +0000 (0:00:02.186) 0:00:06.654 ************
2025-05-26 03:45:31.219820 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:45:31.219973 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:45:31.219989 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:45:31.220000 | orchestrator |
2025-05-26 03:45:31.220013 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-05-26 03:45:31.220663 | orchestrator | Monday 26 May 2025 03:45:31 +0000 (0:00:00.563) 0:00:07.217 ************
2025-05-26 03:45:31.819175 | orchestrator | changed: [testbed-node-3]
2025-05-26 03:45:31.819326 | orchestrator | changed: [testbed-node-4]
2025-05-26 03:45:31.821360 | orchestrator | changed: [testbed-node-5]
2025-05-26 03:45:31.821450 | orchestrator |
2025-05-26 03:45:31.821527 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 03:45:31.824815 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:45:31.824856 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:45:31.824867 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:45:31.824930 | orchestrator | 2025-05-26 03:45:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-26 03:45:31.824944 | orchestrator | 2025-05-26 03:45:31 | INFO  | Please wait and do not abort execution.
2025-05-26 03:45:31.824956 | orchestrator |
2025-05-26 03:45:31.825101 | orchestrator |
2025-05-26 03:45:31.825790 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 03:45:31.825816 | orchestrator | Monday 26 May 2025 03:45:31 +0000 (0:00:00.601) 0:00:07.818 ************
2025-05-26 03:45:31.828285 | orchestrator | ===============================================================================
2025-05-26 03:45:31.828425 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.19s
2025-05-26 03:45:31.828702 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.25s
2025-05-26 03:45:31.829023 | orchestrator | Check device availability ----------------------------------------------- 1.15s
2025-05-26 03:45:31.829246 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.78s
2025-05-26 03:45:31.829533 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.60s
2025-05-26 03:45:31.829724 | orchestrator | Request device events from the kernel ----------------------------------- 0.60s
2025-05-26 03:45:31.831592 | orchestrator | Reload udev rules ------------------------------------------------------- 0.56s
2025-05-26 03:45:31.831909 | orchestrator | Remove all rook related logical devices --------------------------------- 0.28s
2025-05-26 03:45:31.832191 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s
2025-05-26 03:45:34.446072 | orchestrator | Registering Redlock._acquired_script
2025-05-26 03:45:34.446188 | orchestrator | Registering Redlock._extend_script
2025-05-26 03:45:34.446205 | orchestrator | Registering Redlock._release_script
2025-05-26 03:45:34.507103 | orchestrator | 2025-05-26 03:45:34 | INFO  | Task 041b8699-4455-43bc-a1a6-bff62075fda3 (facts) was prepared for execution.
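The wipe-partitions play above boils down to four shell steps per data disk: drop on-disk signatures with wipefs, zero the first 32M (where old partition tables and LVM/Ceph labels live), then refresh udev. A minimal sketch, assuming the same /dev/sdb-/dev/sdd layout as the testbed nodes; the DRY_RUN guard is an addition here so the destructive commands are only printed, not executed:

```shell
# Sketch of the wipe sequence from the play above; device names assume the
# testbed layout. DRY_RUN=1 (the default here) prints instead of executing.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

for dev in /dev/sdb /dev/sdc /dev/sdd; do
    run wipefs --all "$dev"                       # wipe filesystem/LVM/RAID signatures
    run dd if=/dev/zero of="$dev" bs=1M count=32  # overwrite first 32M with zeros
done
run udevadm control --reload-rules                # reload udev rules
run udevadm trigger                               # request device events from the kernel
```

The udevadm steps at the end mirror the last two tasks of the play: after the signatures are gone, udev must re-probe the disks so stale device links do not confuse the subsequent Ceph LVM configuration.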
2025-05-26 03:45:34.507203 | orchestrator | 2025-05-26 03:45:34 | INFO  | It takes a moment until task 041b8699-4455-43bc-a1a6-bff62075fda3 (facts) has been started and output is visible here.
2025-05-26 03:45:38.313373 | orchestrator |
2025-05-26 03:45:38.313514 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-26 03:45:38.315419 | orchestrator |
2025-05-26 03:45:38.315448 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-26 03:45:38.317497 | orchestrator | Monday 26 May 2025 03:45:38 +0000 (0:00:00.251) 0:00:00.251 ************
2025-05-26 03:45:39.325846 | orchestrator | ok: [testbed-manager]
2025-05-26 03:45:39.326090 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:45:39.326609 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:45:39.327329 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:45:39.327806 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:45:39.328616 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:45:39.328871 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:45:39.329326 | orchestrator |
2025-05-26 03:45:39.329821 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-26 03:45:39.330972 | orchestrator | Monday 26 May 2025 03:45:39 +0000 (0:00:01.016) 0:00:01.267 ************
2025-05-26 03:45:39.466534 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:45:39.535532 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:45:39.608745 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:45:39.674744 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:45:39.759427 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:40.402212 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:45:40.402479 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:45:40.403125 | orchestrator |
2025-05-26 03:45:40.406712 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-26 03:45:40.407597 | orchestrator |
2025-05-26 03:45:40.408999 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-26 03:45:40.409436 | orchestrator | Monday 26 May 2025 03:45:40 +0000 (0:00:01.080) 0:00:02.347 ************
2025-05-26 03:45:45.061105 | orchestrator | ok: [testbed-node-1]
2025-05-26 03:45:45.063563 | orchestrator | ok: [testbed-node-2]
2025-05-26 03:45:45.065362 | orchestrator | ok: [testbed-manager]
2025-05-26 03:45:45.067088 | orchestrator | ok: [testbed-node-0]
2025-05-26 03:45:45.068121 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:45:45.069783 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:45:45.071052 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:45:45.072218 | orchestrator |
2025-05-26 03:45:45.075713 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-26 03:45:45.075762 | orchestrator |
2025-05-26 03:45:45.075771 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-26 03:45:45.075778 | orchestrator | Monday 26 May 2025 03:45:45 +0000 (0:00:04.653) 0:00:07.001 ************
2025-05-26 03:45:45.213167 | orchestrator | skipping: [testbed-manager]
2025-05-26 03:45:45.293389 | orchestrator | skipping: [testbed-node-0]
2025-05-26 03:45:45.371880 | orchestrator | skipping: [testbed-node-1]
2025-05-26 03:45:45.455322 | orchestrator | skipping: [testbed-node-2]
2025-05-26 03:45:45.533437 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:45.574639 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:45:45.576060 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:45:45.578130 | orchestrator |
2025-05-26 03:45:45.580468 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 03:45:45.580736 | orchestrator | 2025-05-26 03:45:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-26 03:45:45.580772 | orchestrator | 2025-05-26 03:45:45 | INFO  | Please wait and do not abort execution.
2025-05-26 03:45:45.582921 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:45:45.584377 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:45:45.585318 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:45:45.587938 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:45:45.589119 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:45:45.590129 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:45:45.591026 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 03:45:45.591827 | orchestrator |
2025-05-26 03:45:45.592566 | orchestrator |
2025-05-26 03:45:45.593422 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 03:45:45.593855 | orchestrator | Monday 26 May 2025 03:45:45 +0000 (0:00:00.517) 0:00:07.518 ************
2025-05-26 03:45:45.594449 | orchestrator | ===============================================================================
2025-05-26 03:45:45.595060 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.65s
2025-05-26 03:45:45.595811 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.08s
2025-05-26 03:45:45.596389 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.02s
2025-05-26 03:45:45.596699 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2025-05-26 03:45:48.076463 | orchestrator | 2025-05-26 03:45:48 | INFO  | Task e021afb7-9a7e-4490-98b0-c6fc60532811 (ceph-configure-lvm-volumes) was prepared for execution.
2025-05-26 03:45:48.076539 | orchestrator | 2025-05-26 03:45:48 | INFO  | It takes a moment until task e021afb7-9a7e-4490-98b0-c6fc60532811 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-05-26 03:45:52.773663 | orchestrator |
2025-05-26 03:45:52.774708 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-26 03:45:52.775561 | orchestrator |
2025-05-26 03:45:52.776819 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-26 03:45:52.777488 | orchestrator | Monday 26 May 2025 03:45:52 +0000 (0:00:00.408) 0:00:00.408 ************
2025-05-26 03:45:53.025802 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-26 03:45:53.028319 | orchestrator |
2025-05-26 03:45:53.029484 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-26 03:45:53.030287 | orchestrator | Monday 26 May 2025 03:45:53 +0000 (0:00:00.254) 0:00:00.663 ************
2025-05-26 03:45:53.257179 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:45:53.257344 | orchestrator |
2025-05-26 03:45:53.257669 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:45:53.258162 | orchestrator | Monday 26 May 2025 03:45:53 +0000 (0:00:00.231) 0:00:00.894 ************
2025-05-26 03:45:53.603094 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-26 03:45:53.603226 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-26 03:45:53.604347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-26 03:45:53.605800 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-26 03:45:53.606729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-26 03:45:53.608295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-26 03:45:53.608940 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-26 03:45:53.610069 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-26 03:45:53.612819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-26 03:45:53.613345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-26 03:45:53.614470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-26 03:45:53.615678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-26 03:45:53.616779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-26 03:45:53.619186 | orchestrator |
2025-05-26 03:45:53.620537 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:45:53.621346 | orchestrator | Monday 26 May 2025 03:45:53 +0000 (0:00:00.347) 0:00:01.242 ************
2025-05-26 03:45:54.160534 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:54.160767 | orchestrator |
2025-05-26 03:45:54.161207 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:45:54.162187 | orchestrator | Monday 26 May 2025 03:45:54 +0000 (0:00:00.559) 0:00:01.801 ************
2025-05-26 03:45:54.358703 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:54.359346 | orchestrator |
2025-05-26 03:45:54.359664 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:45:54.359978 | orchestrator | Monday 26 May 2025 03:45:54 +0000 (0:00:00.198) 0:00:02.000 ************
2025-05-26 03:45:54.595566 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:54.599833 | orchestrator |
2025-05-26 03:45:54.604886 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:45:54.606241 | orchestrator | Monday 26 May 2025 03:45:54 +0000 (0:00:00.232) 0:00:02.232 ************
2025-05-26 03:45:54.778325 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:54.778429 | orchestrator |
2025-05-26 03:45:54.778444 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:45:54.778458 | orchestrator | Monday 26 May 2025 03:45:54 +0000 (0:00:00.182) 0:00:02.415 ************
2025-05-26 03:45:54.996645 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:54.997883 | orchestrator |
2025-05-26 03:45:54.999324 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:45:55.000226 | orchestrator | Monday 26 May 2025 03:45:54 +0000 (0:00:00.219) 0:00:02.634 ************
2025-05-26 03:45:55.186297 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:55.186408 | orchestrator |
2025-05-26 03:45:55.190188 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:45:55.192293 | orchestrator | Monday 26 May 2025 03:45:55 +0000 (0:00:00.192) 0:00:02.827 ************
2025-05-26 03:45:55.375933 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:55.376281 | orchestrator |
2025-05-26 03:45:55.376711 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:45:55.378282 | orchestrator | Monday 26 May 2025 03:45:55 +0000 (0:00:00.189) 0:00:03.016 ************
2025-05-26 03:45:55.561186 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:55.562474 | orchestrator |
2025-05-26 03:45:55.563638 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:45:55.564828 | orchestrator | Monday 26 May 2025 03:45:55 +0000 (0:00:00.186) 0:00:03.203 ************
2025-05-26 03:45:56.037359 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9d4bdcb5-a0b8-4173-af9e-b961e366e943)
2025-05-26 03:45:56.039012 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9d4bdcb5-a0b8-4173-af9e-b961e366e943)
2025-05-26 03:45:56.042382 | orchestrator |
2025-05-26 03:45:56.043004 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:45:56.044162 | orchestrator | Monday 26 May 2025 03:45:56 +0000 (0:00:00.475) 0:00:03.679 ************
2025-05-26 03:45:56.497656 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_feee3a86-288f-4310-9e74-72f077da2d2c)
2025-05-26 03:45:56.500467 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_feee3a86-288f-4310-9e74-72f077da2d2c)
2025-05-26 03:45:56.500791 | orchestrator |
2025-05-26 03:45:56.501207 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:45:56.501558 | orchestrator | Monday 26 May 2025 03:45:56 +0000 (0:00:00.457) 0:00:04.137 ************
2025-05-26 03:45:57.001635 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d6e6216b-cbe0-4182-a9d6-b0841cd13c95)
2025-05-26 03:45:57.003015 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d6e6216b-cbe0-4182-a9d6-b0841cd13c95)
2025-05-26 03:45:57.004637 | orchestrator |
2025-05-26 03:45:57.004934 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:45:57.006741 | orchestrator | Monday 26 May 2025 03:45:56 +0000 (0:00:00.507) 0:00:04.645 ************
2025-05-26 03:45:57.626503 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c087d35e-df49-49d8-817c-07623fd598fd)
2025-05-26 03:45:57.626671 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c087d35e-df49-49d8-817c-07623fd598fd)
2025-05-26 03:45:57.627319 | orchestrator |
2025-05-26 03:45:57.628082 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:45:57.629775 | orchestrator | Monday 26 May 2025 03:45:57 +0000 (0:00:00.622) 0:00:05.267 ************
2025-05-26 03:45:58.146341 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-26 03:45:58.146559 | orchestrator |
2025-05-26 03:45:58.146899 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:45:58.147263 | orchestrator | Monday 26 May 2025 03:45:58 +0000 (0:00:00.518) 0:00:05.786 ************
2025-05-26 03:45:58.438638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-26 03:45:58.439987 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-26 03:45:58.440411 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-26 03:45:58.441136 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-26 03:45:58.443549 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-26 03:45:58.443776 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-26 03:45:58.444285 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-26 03:45:58.444584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-26 03:45:58.444751 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-26 03:45:58.445258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-26 03:45:58.445387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-26 03:45:58.445690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-26 03:45:58.446049 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-26 03:45:58.446295 | orchestrator |
2025-05-26 03:45:58.446545 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:45:58.446901 | orchestrator | Monday 26 May 2025 03:45:58 +0000 (0:00:00.292) 0:00:06.078 ************
2025-05-26 03:45:58.608503 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:58.609017 | orchestrator |
2025-05-26 03:45:58.609533 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:45:58.609789 | orchestrator | Monday 26 May 2025 03:45:58 +0000 (0:00:00.171) 0:00:06.250 ************
2025-05-26 03:45:58.777013 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:58.780182 | orchestrator |
2025-05-26 03:45:58.780825 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:45:58.781946 | orchestrator | Monday 26 May 2025 03:45:58 +0000 (0:00:00.168) 0:00:06.419 ************
2025-05-26 03:45:58.954408 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:58.954616 | orchestrator |
2025-05-26 03:45:58.954808 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:45:58.955288 | orchestrator | Monday 26 May 2025 03:45:58 +0000 (0:00:00.178) 0:00:06.598 ************
2025-05-26 03:45:59.131664 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:59.132052 | orchestrator |
2025-05-26 03:45:59.132276 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:45:59.132847 | orchestrator | Monday 26 May 2025 03:45:59 +0000 (0:00:00.176) 0:00:06.775 ************
2025-05-26 03:45:59.300073 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:59.304371 | orchestrator |
2025-05-26 03:45:59.307326 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:45:59.308776 | orchestrator | Monday 26 May 2025 03:45:59 +0000 (0:00:00.167) 0:00:06.942 ************
2025-05-26 03:45:59.474595 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:59.474790 | orchestrator |
2025-05-26 03:45:59.477505 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:45:59.477538 | orchestrator | Monday 26 May 2025 03:45:59 +0000 (0:00:00.173) 0:00:07.115 ************
2025-05-26 03:45:59.669206 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:59.669424 | orchestrator |
2025-05-26 03:45:59.670802 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:45:59.671158 | orchestrator | Monday 26 May 2025 03:45:59 +0000 (0:00:00.195) 0:00:07.311 ************
2025-05-26 03:45:59.855324 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:45:59.855743 | orchestrator |
2025-05-26 03:45:59.859088 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:45:59.860256 | orchestrator | Monday 26 May 2025 03:45:59 +0000 (0:00:00.184) 0:00:07.495 ************
2025-05-26 03:46:00.981171 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-26 03:46:00.986674 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-26 03:46:00.986718 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-26 03:46:00.989127 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-26 03:46:00.989883 | orchestrator |
2025-05-26 03:46:00.991469 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:46:00.992005 | orchestrator | Monday 26 May 2025 03:46:00 +0000 (0:00:01.127) 0:00:08.622 ************
2025-05-26 03:46:01.262662 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:46:01.266407 | orchestrator |
2025-05-26 03:46:01.268696 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:46:01.269633 | orchestrator | Monday 26 May 2025 03:46:01 +0000 (0:00:00.280) 0:00:08.903 ************
2025-05-26 03:46:01.481211 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:46:01.482575 | orchestrator |
2025-05-26 03:46:01.488971 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:46:01.490488 | orchestrator | Monday 26 May 2025 03:46:01 +0000 (0:00:00.220) 0:00:09.123 ************
2025-05-26 03:46:01.721507 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:46:01.721734 | orchestrator |
2025-05-26 03:46:01.721758 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:46:01.724666 | orchestrator | Monday 26 May 2025 03:46:01 +0000 (0:00:00.236) 0:00:09.360 ************
2025-05-26 03:46:01.987053 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:46:01.987129 | orchestrator |
2025-05-26 03:46:01.987466 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-26 03:46:01.989566 | orchestrator | Monday 26 May 2025 03:46:01 +0000 (0:00:00.267) 0:00:09.627 ************
2025-05-26 03:46:02.183968 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-05-26 03:46:02.184583 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-05-26 03:46:02.185557 | orchestrator |
2025-05-26 03:46:02.186580 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-26 03:46:02.187416 | orchestrator | Monday 26 May 2025 03:46:02 +0000 (0:00:00.195) 0:00:09.823 ************
2025-05-26 03:46:02.329836 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:46:02.330564 | orchestrator |
2025-05-26 03:46:02.331396 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-26 03:46:02.334108 | orchestrator | Monday 26 May 2025 03:46:02 +0000 (0:00:00.147) 0:00:09.970 ************
2025-05-26 03:46:02.489138 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:46:02.489236 | orchestrator |
2025-05-26 03:46:02.492192 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-26 03:46:02.492497 | orchestrator | Monday 26 May 2025 03:46:02 +0000 (0:00:00.156) 0:00:10.127 ************
2025-05-26 03:46:02.653114 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:46:02.654237 | orchestrator |
2025-05-26 03:46:02.657128 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-26 03:46:02.658795 | orchestrator | Monday 26 May 2025 03:46:02 +0000 (0:00:00.166) 0:00:10.293 ************
2025-05-26 03:46:02.816561 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:46:02.817902 | orchestrator |
2025-05-26 03:46:02.818818 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-26 03:46:02.820877 | orchestrator | Monday 26 May 2025 03:46:02 +0000 (0:00:00.161) 0:00:10.455 ************
2025-05-26 03:46:02.978970 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '308d2e7c-9a7f-5d4d-8709-bdc410450a80'}})
2025-05-26 03:46:02.979075 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'}})
2025-05-26 03:46:02.979422 | orchestrator |
2025-05-26 03:46:02.979511 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-26 03:46:02.979996 | orchestrator | Monday 26 May 2025 03:46:02 +0000 (0:00:00.165) 0:00:10.620 ************
2025-05-26 03:46:03.116041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '308d2e7c-9a7f-5d4d-8709-bdc410450a80'}})
2025-05-26 03:46:03.116358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'}})
2025-05-26 03:46:03.117386 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:46:03.117464 | orchestrator |
2025-05-26 03:46:03.117815 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-26 03:46:03.118584 | orchestrator | Monday 26 May 2025 03:46:03 +0000 (0:00:00.137) 0:00:10.758 ************
2025-05-26 03:46:03.486982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '308d2e7c-9a7f-5d4d-8709-bdc410450a80'}})
2025-05-26 03:46:03.487644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'}})
2025-05-26 03:46:03.488242 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:46:03.490125 | orchestrator |
2025-05-26 03:46:03.491884 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-26 03:46:03.492576 | orchestrator | Monday 26 May 2025 03:46:03 +0000 (0:00:00.371) 0:00:11.129 ************
2025-05-26 03:46:03.640401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '308d2e7c-9a7f-5d4d-8709-bdc410450a80'}})
2025-05-26 03:46:03.641053 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'}})
2025-05-26 03:46:03.641596 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:46:03.642151 | orchestrator |
2025-05-26 03:46:03.643031 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-26 03:46:03.643729 | orchestrator | Monday 26 May 2025 03:46:03 +0000 (0:00:00.151) 0:00:11.281 ************
2025-05-26 03:46:03.794319 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:46:03.795269 | orchestrator |
2025-05-26 03:46:03.796666 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-26 03:46:03.797365 | orchestrator | Monday 26 May 2025 03:46:03 +0000 (0:00:00.154) 0:00:11.435 ************
2025-05-26 03:46:03.916239 | orchestrator | ok: [testbed-node-3]
2025-05-26 03:46:03.917011 | orchestrator |
2025-05-26 03:46:03.917531 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-26 03:46:03.918308 | orchestrator | Monday 26 May 2025 03:46:03 +0000 (0:00:00.122) 0:00:11.558 ************
2025-05-26 03:46:04.057966 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:46:04.060032 | orchestrator |
2025-05-26 03:46:04.060346 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-26 03:46:04.060778 | orchestrator | Monday 26 May 2025 03:46:04 +0000 (0:00:00.141) 0:00:11.700 ************
2025-05-26 03:46:04.202211 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:46:04.202321 | orchestrator |
2025-05-26 03:46:04.202337 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-26 03:46:04.202351 | orchestrator | Monday 26 May 2025 03:46:04 +0000 (0:00:00.139) 0:00:11.839 ************
2025-05-26 03:46:04.363091 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:46:04.364094 | orchestrator |
2025-05-26 03:46:04.364668 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-26 03:46:04.365770 | orchestrator | Monday 26 May 2025 03:46:04 +0000 (0:00:00.158) 0:00:11.998 ************
2025-05-26 03:46:04.513729 | orchestrator | ok: [testbed-node-3] => {
2025-05-26 03:46:04.515379 | orchestrator |     "ceph_osd_devices": {
2025-05-26 03:46:04.515472 | orchestrator |         "sdb": {
2025-05-26 03:46:04.515990 | orchestrator |             "osd_lvm_uuid": "308d2e7c-9a7f-5d4d-8709-bdc410450a80"
2025-05-26 03:46:04.519460 | orchestrator |         },
2025-05-26 03:46:04.519494 | orchestrator |         "sdc": {
2025-05-26 03:46:04.519506 | orchestrator |             "osd_lvm_uuid": "5dc1dea4-54cd-5a78-85ff-70cfe3c9c560"
2025-05-26 03:46:04.520540 | orchestrator |         }
2025-05-26 03:46:04.522091 | orchestrator |     }
2025-05-26 03:46:04.523020 | orchestrator | }
2025-05-26 03:46:04.523960 | orchestrator |
2025-05-26 03:46:04.526219 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-26 03:46:04.526837 | orchestrator | Monday 26 May 2025 03:46:04 +0000 (0:00:00.153) 0:00:12.152 ************
2025-05-26 03:46:04.641078 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:46:04.642174 | orchestrator |
2025-05-26 03:46:04.644990 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-26 03:46:04.645124 | orchestrator | Monday 26 May 2025 03:46:04 +0000 (0:00:00.130) 0:00:12.282 ************
2025-05-26 03:46:04.761659 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:46:04.762416 | orchestrator |
2025-05-26 03:46:04.763937 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-26 03:46:04.764974 | orchestrator | Monday 26 May 2025 03:46:04 +0000 (0:00:00.120) 0:00:12.402 ************
2025-05-26 03:46:04.894366 | orchestrator | skipping: [testbed-node-3]
2025-05-26 03:46:04.897760 | orchestrator |
2025-05-26 03:46:04.897812 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-26 03:46:04.897824 | orchestrator | Monday 26 May 2025 03:46:04 +0000 (0:00:00.132) 0:00:12.535 ************
2025-05-26 03:46:05.109442 | orchestrator | changed: [testbed-node-3] => {
2025-05-26 03:46:05.110407 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-26 03:46:05.113494 | orchestrator |         "ceph_osd_devices": {
2025-05-26 03:46:05.114117 | orchestrator |             "sdb": {
2025-05-26 03:46:05.114471 | orchestrator |                 "osd_lvm_uuid": "308d2e7c-9a7f-5d4d-8709-bdc410450a80"
2025-05-26 03:46:05.115048 | orchestrator |             },
2025-05-26 03:46:05.115601 | orchestrator |             "sdc": {
2025-05-26 03:46:05.115864 | orchestrator |                 "osd_lvm_uuid": "5dc1dea4-54cd-5a78-85ff-70cfe3c9c560"
2025-05-26 03:46:05.116805 | orchestrator |             }
2025-05-26 03:46:05.117269 | orchestrator |         },
2025-05-26 03:46:05.119485 | orchestrator |         "lvm_volumes": [
2025-05-26 03:46:05.120039 | orchestrator |             {
2025-05-26 03:46:05.120370 | orchestrator |                 "data": "osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80",
2025-05-26 03:46:05.121130 | orchestrator |                 "data_vg": "ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80"
2025-05-26 03:46:05.123069 | orchestrator |             },
2025-05-26 03:46:05.124049 | orchestrator |             {
2025-05-26 03:46:05.125273 | orchestrator |                 "data": "osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560",
2025-05-26 03:46:05.126114 | orchestrator |                 "data_vg": "ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560"
2025-05-26 03:46:05.127413 | orchestrator |             }
2025-05-26 03:46:05.128266 | orchestrator |         ]
2025-05-26 03:46:05.128980 | orchestrator |     }
2025-05-26 03:46:05.130256 | orchestrator | }
2025-05-26 03:46:05.130807 | orchestrator |
2025-05-26 03:46:05.131877 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-26 03:46:05.132404 | orchestrator | Monday 26 May 2025 03:46:05 +0000 (0:00:00.213) 0:00:12.749 ************
2025-05-26 03:46:07.282500 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-26 03:46:07.282617 | orchestrator |
2025-05-26 03:46:07.282703 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-26 03:46:07.283239 | orchestrator |
2025-05-26 03:46:07.283284 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-26 03:46:07.284798 | orchestrator | Monday 26 May 2025 03:46:07 +0000 (0:00:02.171) 0:00:14.920 ************
2025-05-26 03:46:07.535520 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-26 03:46:07.535747 | orchestrator |
2025-05-26 03:46:07.539209 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-26 03:46:07.539267 | orchestrator | Monday 26 May 2025 03:46:07 +0000 (0:00:00.254) 0:00:15.178 ************
2025-05-26 03:46:07.791114 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:46:07.791713 | orchestrator |
2025-05-26 03:46:07.792180 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:46:07.798727 | orchestrator | Monday 26 May 2025 03:46:07 +0000 (0:00:00.254) 0:00:15.432 ************
2025-05-26 03:46:08.220530 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-05-26 03:46:08.223494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-05-26 03:46:08.225186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-05-26 03:46:08.226347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-05-26 03:46:08.227812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-05-26 03:46:08.230851 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-05-26 03:46:08.230904 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-05-26 03:46:08.233865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-05-26 03:46:08.234148 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-05-26 03:46:08.239894 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-05-26 03:46:08.242014 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-05-26 03:46:08.243171 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-05-26 03:46:08.243755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-05-26 03:46:08.243829 | orchestrator |
2025-05-26 03:46:08.244137 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:46:08.245985 | orchestrator | Monday 26 May 2025 03:46:08 +0000 (0:00:00.425) 0:00:15.857 ************
2025-05-26 03:46:08.627559 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:08.629171 | orchestrator |
2025-05-26 03:46:08.635018 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:46:08.635077 | orchestrator | Monday 26 May 2025 03:46:08 +0000 (0:00:00.404) 0:00:16.262 ************
2025-05-26 03:46:08.868137 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:08.868630 | orchestrator |
2025-05-26 03:46:08.872351 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:46:08.877718 | orchestrator | Monday 26 May 2025 03:46:08 +0000 (0:00:00.247) 0:00:16.509 ************
2025-05-26 03:46:09.231274 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:09.234220 | orchestrator |
2025-05-26 03:46:09.235599 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:46:09.235633 | orchestrator | Monday 26 May 2025 03:46:09 +0000 (0:00:00.364) 0:00:16.873 ************
2025-05-26 03:46:09.441485 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:09.443785 | orchestrator |
2025-05-26 03:46:09.445192 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:46:09.445232 | orchestrator | Monday 26 May 2025 03:46:09 +0000 (0:00:00.210) 0:00:17.083 ************
2025-05-26 03:46:10.096430 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:10.096693 | orchestrator |
2025-05-26 03:46:10.097832 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:46:10.098573 | orchestrator | Monday 26 May 2025 03:46:10 +0000 (0:00:00.653) 0:00:17.737 ************
2025-05-26 03:46:10.269749 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:10.270117 | orchestrator |
2025-05-26 03:46:10.271308 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:46:10.272123 | orchestrator | Monday 26 May 2025 03:46:10 +0000 (0:00:00.175) 0:00:17.912 ************
2025-05-26 03:46:10.466223 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:10.467521 | orchestrator |
2025-05-26 03:46:10.467554 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:46:10.467561 | orchestrator | Monday 26 May 2025 03:46:10 +0000 (0:00:00.195) 0:00:18.107 ************
2025-05-26 03:46:10.658628 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:10.659145 | orchestrator |
2025-05-26 03:46:10.661024 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:46:10.661085 | orchestrator | Monday 26 May 2025 03:46:10 +0000 (0:00:00.188) 0:00:18.296 ************
2025-05-26 03:46:11.059525 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_abaa748c-97b3-4e70-8935-2e6927d8d198)
2025-05-26 03:46:11.060475 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_abaa748c-97b3-4e70-8935-2e6927d8d198)
2025-05-26 03:46:11.062349 | orchestrator |
2025-05-26 03:46:11.062590 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:46:11.062996 | orchestrator | Monday 26 May 2025 03:46:11 +0000 (0:00:00.402) 0:00:18.698 ************
2025-05-26 03:46:11.501218 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a2c1486d-cd17-4e79-bfde-447100a0feef)
2025-05-26 03:46:11.501382 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a2c1486d-cd17-4e79-bfde-447100a0feef)
2025-05-26 03:46:11.501659 | orchestrator |
2025-05-26 03:46:11.502497 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:46:11.502533 | orchestrator | Monday 26 May 2025 03:46:11 +0000 (0:00:00.445) 0:00:19.143 ************
2025-05-26 03:46:11.929332 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b8fa87d6-4bbf-4e23-9059-3efb42beefcf)
2025-05-26 03:46:11.930174 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b8fa87d6-4bbf-4e23-9059-3efb42beefcf)
2025-05-26 03:46:11.931000 | orchestrator |
2025-05-26 03:46:11.934836 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:46:11.935073 | orchestrator | Monday 26 May 2025 03:46:11 +0000 (0:00:00.428) 0:00:19.572 ************
2025-05-26 03:46:12.322575 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2a6da8ab-439b-4c92-86f2-b8912a630d10)
2025-05-26 03:46:12.322789 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2a6da8ab-439b-4c92-86f2-b8912a630d10)
2025-05-26 03:46:12.323477 | orchestrator |
2025-05-26 03:46:12.323536 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:46:12.324004 | orchestrator | Monday 26 May 2025 03:46:12 +0000 (0:00:00.391) 0:00:19.963 ************
2025-05-26 03:46:12.615560 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-26 03:46:12.616121 | orchestrator |
2025-05-26 03:46:12.617606 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:46:12.618100 | orchestrator | Monday 26 May 2025 03:46:12 +0000 (0:00:00.295) 0:00:20.259 ************
2025-05-26 03:46:12.946765 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-05-26 03:46:12.952025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-05-26 03:46:12.955121 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-05-26 03:46:12.956514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-05-26 03:46:12.956946 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-05-26 03:46:12.957547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-05-26 03:46:12.958616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-05-26 03:46:12.959102 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-05-26 03:46:12.960394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-05-26 03:46:12.960886 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-05-26 03:46:12.962795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-05-26 03:46:12.964197 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-05-26 03:46:12.964484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-05-26 03:46:12.964887 | orchestrator |
2025-05-26 03:46:12.965258 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:46:12.965620 | orchestrator | Monday 26 May 2025 03:46:12 +0000 (0:00:00.329) 0:00:20.588 ************
2025-05-26 03:46:13.133598 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:13.135969 | orchestrator |
2025-05-26 03:46:13.136776 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:46:13.137616 | orchestrator | Monday 26 May 2025 03:46:13 +0000 (0:00:00.188) 0:00:20.777 ************
2025-05-26 03:46:13.588629 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:13.589163 | orchestrator |
2025-05-26 03:46:13.590521 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:46:13.591849 | orchestrator | Monday 26 May 2025 03:46:13 +0000 (0:00:00.454) 0:00:21.231 ************
2025-05-26 03:46:13.775584 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:13.775686 | orchestrator |
2025-05-26 03:46:13.779402 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:46:13.779476 | orchestrator | Monday 26 May 2025 03:46:13 +0000 (0:00:00.186) 0:00:21.418 ************
2025-05-26 03:46:13.952826 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:13.952996 | orchestrator |
2025-05-26 03:46:13.953177 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:46:13.953211 | orchestrator | Monday 26 May 2025 03:46:13 +0000 (0:00:00.177) 0:00:21.596 ************
2025-05-26 03:46:14.138427 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:14.146230 | orchestrator |
2025-05-26 03:46:14.146456 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:46:14.146984 | orchestrator | Monday 26 May 2025 03:46:14 +0000 (0:00:00.184) 0:00:21.780 ************
2025-05-26 03:46:14.318164 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:14.318306 | orchestrator |
2025-05-26 03:46:14.318477 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:46:14.320175 | orchestrator | Monday 26 May 2025 03:46:14 +0000 (0:00:00.180) 0:00:21.961 ************
2025-05-26 03:46:14.488848 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:14.490287 | orchestrator |
2025-05-26 03:46:14.494868 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:46:14.495921 | orchestrator | Monday 26 May 2025 03:46:14 +0000 (0:00:00.169) 0:00:22.131 ************
2025-05-26 03:46:14.664235 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:14.665256 | orchestrator |
2025-05-26 03:46:14.668388 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:46:14.670909 | orchestrator | Monday 26 May 2025 03:46:14 +0000 (0:00:00.175) 0:00:22.306 ************
2025-05-26 03:46:15.234838 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-05-26 03:46:15.236328 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-05-26 03:46:15.239561 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-05-26 03:46:15.240380 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-05-26 03:46:15.241472 | orchestrator |
2025-05-26 03:46:15.244739 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:46:15.245620 | orchestrator | Monday 26 May 2025 03:46:15 +0000 (0:00:00.570) 0:00:22.877 ************
2025-05-26 03:46:15.414636 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:15.415252 | orchestrator |
2025-05-26 03:46:15.415293 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:46:15.415307 | orchestrator | Monday 26 May 2025 03:46:15 +0000 (0:00:00.180) 0:00:23.057 ************
2025-05-26 03:46:15.595136 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:15.595264 | orchestrator |
2025-05-26 03:46:15.596937 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:46:15.597391 | orchestrator | Monday 26 May 2025 03:46:15 +0000 (0:00:00.179) 0:00:23.236 ************
2025-05-26 03:46:15.774593 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:15.776606 | orchestrator |
2025-05-26 03:46:15.777320 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 03:46:15.778176 | orchestrator | Monday 26 May 2025 03:46:15 +0000 (0:00:00.177) 0:00:23.414 ************
2025-05-26 03:46:15.942985 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:15.943192 | orchestrator |
2025-05-26 03:46:15.944134 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-26 03:46:15.944166 | orchestrator | Monday 26 May 2025 03:46:15 +0000 (0:00:00.172) 0:00:23.586 ************
2025-05-26 03:46:16.210551 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-05-26 03:46:16.211473 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-05-26 03:46:16.212672 | orchestrator |
2025-05-26 03:46:16.217206 | orchestrator
| TASK [Generate WAL VG names] *************************************************** 2025-05-26 03:46:16.217597 | orchestrator | Monday 26 May 2025 03:46:16 +0000 (0:00:00.267) 0:00:23.854 ************ 2025-05-26 03:46:16.337538 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:46:16.338308 | orchestrator | 2025-05-26 03:46:16.338919 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-26 03:46:16.339514 | orchestrator | Monday 26 May 2025 03:46:16 +0000 (0:00:00.123) 0:00:23.977 ************ 2025-05-26 03:46:16.461032 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:46:16.461125 | orchestrator | 2025-05-26 03:46:16.461810 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-26 03:46:16.462110 | orchestrator | Monday 26 May 2025 03:46:16 +0000 (0:00:00.126) 0:00:24.103 ************ 2025-05-26 03:46:16.576321 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:46:16.578321 | orchestrator | 2025-05-26 03:46:16.579415 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-26 03:46:16.582124 | orchestrator | Monday 26 May 2025 03:46:16 +0000 (0:00:00.113) 0:00:24.217 ************ 2025-05-26 03:46:16.695657 | orchestrator | ok: [testbed-node-4] 2025-05-26 03:46:16.696884 | orchestrator | 2025-05-26 03:46:16.697359 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-26 03:46:16.698186 | orchestrator | Monday 26 May 2025 03:46:16 +0000 (0:00:00.120) 0:00:24.337 ************ 2025-05-26 03:46:16.848745 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'}}) 2025-05-26 03:46:16.849421 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4b512bc6-244a-59a0-9a87-47140e1f057d'}}) 2025-05-26 03:46:16.850605 | orchestrator | 2025-05-26 
03:46:16.851889 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-26 03:46:16.857444 | orchestrator | Monday 26 May 2025 03:46:16 +0000 (0:00:00.153) 0:00:24.491 ************ 2025-05-26 03:46:16.996727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'}})  2025-05-26 03:46:16.996803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4b512bc6-244a-59a0-9a87-47140e1f057d'}})  2025-05-26 03:46:16.996808 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:46:16.997528 | orchestrator | 2025-05-26 03:46:16.998389 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-26 03:46:16.999599 | orchestrator | Monday 26 May 2025 03:46:16 +0000 (0:00:00.140) 0:00:24.632 ************ 2025-05-26 03:46:17.113122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'}})  2025-05-26 03:46:17.115454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4b512bc6-244a-59a0-9a87-47140e1f057d'}})  2025-05-26 03:46:17.116015 | orchestrator | skipping: [testbed-node-4] 2025-05-26 03:46:17.116728 | orchestrator | 2025-05-26 03:46:17.117396 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-26 03:46:17.117903 | orchestrator | Monday 26 May 2025 03:46:17 +0000 (0:00:00.121) 0:00:24.753 ************ 2025-05-26 03:46:17.247082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'}})  2025-05-26 03:46:17.247586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4b512bc6-244a-59a0-9a87-47140e1f057d'}})  2025-05-26 03:46:17.249531 | orchestrator | skipping: [testbed-node-4] 
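The four "Generate lvm_volumes structure" tasks above are mutually exclusive per device: which one runs depends on whether separate DB and/or WAL devices are configured, and in this run only the "block only" variant produces entries while the others skip. A minimal sketch of that selection, with illustrative function and key names (the exact logic lives in the OSISM Ansible tasks, not shown here; only the `data`/`data_vg` naming pattern is taken from the log output):

```python
# Hypothetical sketch of the per-device lvm_volumes entry construction.
# Only "data"/"data_vg" naming is taken from the log; the db/wal key
# names mirror ceph-ansible conventions and are assumptions here.

def lvm_volume_entry(osd_uuid, db_vg=None, wal_vg=None):
    """Build one lvm_volumes entry for a given OSD UUID."""
    entry = {
        "data": f"osd-block-{osd_uuid}",   # LV name, as printed in the log
        "data_vg": f"ceph-{osd_uuid}",     # VG name, as printed in the log
    }
    if db_vg:   # "block + db" variant (skipped in this run)
        entry["db"] = f"osd-db-{osd_uuid}"
        entry["db_vg"] = db_vg
    if wal_vg:  # "block + wal" variant (skipped in this run)
        entry["wal"] = f"osd-wal-{osd_uuid}"
        entry["wal_vg"] = wal_vg
    return entry

# No DB/WAL devices are defined in this testbed run, so only the
# "block only" form is produced:
entry = lvm_volume_entry("8ec7e06f-bb0b-5d64-9f74-70f52e848cb7")
```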
2025-05-26 03:46:17.251033 | orchestrator |
2025-05-26 03:46:17.253868 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-26 03:46:17.254456 | orchestrator | Monday 26 May 2025 03:46:17 +0000 (0:00:00.134) 0:00:24.888 ************
2025-05-26 03:46:17.367678 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:46:17.367783 | orchestrator |
2025-05-26 03:46:17.368073 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-26 03:46:17.369173 | orchestrator | Monday 26 May 2025 03:46:17 +0000 (0:00:00.120) 0:00:25.009 ************
2025-05-26 03:46:17.485278 | orchestrator | ok: [testbed-node-4]
2025-05-26 03:46:17.486399 | orchestrator |
2025-05-26 03:46:17.490351 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-26 03:46:17.491106 | orchestrator | Monday 26 May 2025 03:46:17 +0000 (0:00:00.118) 0:00:25.127 ************
2025-05-26 03:46:17.594299 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:17.595718 | orchestrator |
2025-05-26 03:46:17.600421 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-26 03:46:17.601137 | orchestrator | Monday 26 May 2025 03:46:17 +0000 (0:00:00.107) 0:00:25.235 ************
2025-05-26 03:46:17.909020 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:17.912873 | orchestrator |
2025-05-26 03:46:17.913894 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-26 03:46:17.914714 | orchestrator | Monday 26 May 2025 03:46:17 +0000 (0:00:00.310) 0:00:25.545 ************
2025-05-26 03:46:18.040096 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:18.040189 | orchestrator |
2025-05-26 03:46:18.040200 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-26 03:46:18.040665 | orchestrator | Monday 26 May 2025 03:46:18 +0000 (0:00:00.132) 0:00:25.678 ************
2025-05-26 03:46:18.176092 | orchestrator | ok: [testbed-node-4] => {
2025-05-26 03:46:18.177205 | orchestrator |     "ceph_osd_devices": {
2025-05-26 03:46:18.178367 | orchestrator |         "sdb": {
2025-05-26 03:46:18.179656 | orchestrator |             "osd_lvm_uuid": "8ec7e06f-bb0b-5d64-9f74-70f52e848cb7"
2025-05-26 03:46:18.180874 | orchestrator |         },
2025-05-26 03:46:18.182122 | orchestrator |         "sdc": {
2025-05-26 03:46:18.184596 | orchestrator |             "osd_lvm_uuid": "4b512bc6-244a-59a0-9a87-47140e1f057d"
2025-05-26 03:46:18.185360 | orchestrator |         }
2025-05-26 03:46:18.186173 | orchestrator |     }
2025-05-26 03:46:18.188323 | orchestrator | }
2025-05-26 03:46:18.188799 | orchestrator |
2025-05-26 03:46:18.189360 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-26 03:46:18.189858 | orchestrator | Monday 26 May 2025 03:46:18 +0000 (0:00:00.139) 0:00:25.818 ************
2025-05-26 03:46:18.323340 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:18.323438 | orchestrator |
2025-05-26 03:46:18.323905 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-26 03:46:18.326140 | orchestrator | Monday 26 May 2025 03:46:18 +0000 (0:00:00.143) 0:00:25.961 ************
2025-05-26 03:46:18.444093 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:18.444731 | orchestrator |
2025-05-26 03:46:18.446087 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-26 03:46:18.447274 | orchestrator | Monday 26 May 2025 03:46:18 +0000 (0:00:00.123) 0:00:26.085 ************
2025-05-26 03:46:18.567361 | orchestrator | skipping: [testbed-node-4]
2025-05-26 03:46:18.568012 | orchestrator |
2025-05-26 03:46:18.569758 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-26 03:46:18.570880 | orchestrator | Monday 26 May 2025 03:46:18 +0000 (0:00:00.123) 0:00:26.208 ************
2025-05-26 03:46:18.772490 | orchestrator | changed: [testbed-node-4] => {
2025-05-26 03:46:18.772695 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-26 03:46:18.774310 | orchestrator |         "ceph_osd_devices": {
2025-05-26 03:46:18.774453 | orchestrator |             "sdb": {
2025-05-26 03:46:18.774732 | orchestrator |                 "osd_lvm_uuid": "8ec7e06f-bb0b-5d64-9f74-70f52e848cb7"
2025-05-26 03:46:18.777118 | orchestrator |             },
2025-05-26 03:46:18.777155 | orchestrator |             "sdc": {
2025-05-26 03:46:18.777167 | orchestrator |                 "osd_lvm_uuid": "4b512bc6-244a-59a0-9a87-47140e1f057d"
2025-05-26 03:46:18.777567 | orchestrator |             }
2025-05-26 03:46:18.778785 | orchestrator |         },
2025-05-26 03:46:18.779088 | orchestrator |         "lvm_volumes": [
2025-05-26 03:46:18.780744 | orchestrator |             {
2025-05-26 03:46:18.780886 | orchestrator |                 "data": "osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7",
2025-05-26 03:46:18.781064 | orchestrator |                 "data_vg": "ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7"
2025-05-26 03:46:18.781307 | orchestrator |             },
2025-05-26 03:46:18.783032 | orchestrator |             {
2025-05-26 03:46:18.783134 | orchestrator |                 "data": "osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d",
2025-05-26 03:46:18.783369 | orchestrator |                 "data_vg": "ceph-4b512bc6-244a-59a0-9a87-47140e1f057d"
2025-05-26 03:46:18.783601 | orchestrator |             }
2025-05-26 03:46:18.783840 | orchestrator |         ]
2025-05-26 03:46:18.784133 | orchestrator |     }
2025-05-26 03:46:18.785465 | orchestrator | }
2025-05-26 03:46:18.785691 | orchestrator |
2025-05-26 03:46:18.785921 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-26 03:46:18.786160 | orchestrator | Monday 26 May 2025 03:46:18 +0000 (0:00:00.205) 0:00:26.414 ************
2025-05-26 03:46:19.668240 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-26 03:46:19.668347 | orchestrator |
2025-05-26 03:46:19.668363 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-26 03:46:19.668471 | orchestrator |
2025-05-26 03:46:19.668531 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-26 03:46:19.669101 | orchestrator | Monday 26 May 2025 03:46:19 +0000 (0:00:00.897) 0:00:27.311 ************
2025-05-26 03:46:20.048000 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-26 03:46:20.048108 | orchestrator |
2025-05-26 03:46:20.048186 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-26 03:46:20.048421 | orchestrator | Monday 26 May 2025 03:46:20 +0000 (0:00:00.376) 0:00:27.687 ************
2025-05-26 03:46:20.573282 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:46:20.574092 | orchestrator |
2025-05-26 03:46:20.578481 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 03:46:20.579484 | orchestrator | Monday 26 May 2025 03:46:20 +0000 (0:00:00.527) 0:00:28.215 ************
2025-05-26 03:46:20.910171 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-05-26 03:46:20.914104 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-05-26 03:46:20.915839 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-05-26 03:46:20.917484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-05-26 03:46:20.918798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-05-26 03:46:20.920313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-05-26 03:46:20.922458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-05-26 03:46:20.924142 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-26 03:46:20.925306 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-26 03:46:20.927079 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-26 03:46:20.927815 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-26 03:46:20.928965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-26 03:46:20.930002 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-26 03:46:20.931244 | orchestrator | 2025-05-26 03:46:20.932058 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 03:46:20.932718 | orchestrator | Monday 26 May 2025 03:46:20 +0000 (0:00:00.336) 0:00:28.551 ************ 2025-05-26 03:46:21.103820 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:21.104072 | orchestrator | 2025-05-26 03:46:21.104676 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 03:46:21.105056 | orchestrator | Monday 26 May 2025 03:46:21 +0000 (0:00:00.195) 0:00:28.747 ************ 2025-05-26 03:46:21.289813 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:21.294113 | orchestrator | 2025-05-26 03:46:21.294244 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 03:46:21.295211 | orchestrator | Monday 26 May 2025 03:46:21 +0000 (0:00:00.184) 0:00:28.931 ************ 2025-05-26 03:46:21.482758 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:21.483013 | orchestrator | 2025-05-26 03:46:21.483606 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 
03:46:21.484208 | orchestrator | Monday 26 May 2025 03:46:21 +0000 (0:00:00.193) 0:00:29.125 ************ 2025-05-26 03:46:21.659330 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:21.659867 | orchestrator | 2025-05-26 03:46:21.663356 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 03:46:21.665040 | orchestrator | Monday 26 May 2025 03:46:21 +0000 (0:00:00.176) 0:00:29.301 ************ 2025-05-26 03:46:21.841740 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:21.843739 | orchestrator | 2025-05-26 03:46:21.843774 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 03:46:21.843788 | orchestrator | Monday 26 May 2025 03:46:21 +0000 (0:00:00.180) 0:00:29.482 ************ 2025-05-26 03:46:22.003849 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:22.004461 | orchestrator | 2025-05-26 03:46:22.005094 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 03:46:22.006737 | orchestrator | Monday 26 May 2025 03:46:21 +0000 (0:00:00.163) 0:00:29.645 ************ 2025-05-26 03:46:22.176083 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:22.176490 | orchestrator | 2025-05-26 03:46:22.177067 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 03:46:22.181086 | orchestrator | Monday 26 May 2025 03:46:22 +0000 (0:00:00.171) 0:00:29.817 ************ 2025-05-26 03:46:22.359980 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:22.365473 | orchestrator | 2025-05-26 03:46:22.366096 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 03:46:22.366585 | orchestrator | Monday 26 May 2025 03:46:22 +0000 (0:00:00.185) 0:00:30.002 ************ 2025-05-26 03:46:22.863094 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_a22053f6-7fcf-48d3-9817-9fbbcd6d287f) 2025-05-26 03:46:22.866725 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a22053f6-7fcf-48d3-9817-9fbbcd6d287f) 2025-05-26 03:46:22.866799 | orchestrator | 2025-05-26 03:46:22.870960 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 03:46:22.871714 | orchestrator | Monday 26 May 2025 03:46:22 +0000 (0:00:00.503) 0:00:30.505 ************ 2025-05-26 03:46:23.685462 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8267a69f-7007-4a62-b03d-616d3aa09f53) 2025-05-26 03:46:23.687190 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8267a69f-7007-4a62-b03d-616d3aa09f53) 2025-05-26 03:46:23.690187 | orchestrator | 2025-05-26 03:46:23.691717 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 03:46:23.696181 | orchestrator | Monday 26 May 2025 03:46:23 +0000 (0:00:00.819) 0:00:31.325 ************ 2025-05-26 03:46:24.098601 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_21cb62ce-763a-41a7-95e4-caebeb5b0a4b) 2025-05-26 03:46:24.099817 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_21cb62ce-763a-41a7-95e4-caebeb5b0a4b) 2025-05-26 03:46:24.101072 | orchestrator | 2025-05-26 03:46:24.105285 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 03:46:24.105588 | orchestrator | Monday 26 May 2025 03:46:24 +0000 (0:00:00.415) 0:00:31.740 ************ 2025-05-26 03:46:24.521981 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ae6d7dd5-5925-42d7-939c-6a68dbf2df83) 2025-05-26 03:46:24.523004 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ae6d7dd5-5925-42d7-939c-6a68dbf2df83) 2025-05-26 03:46:24.523683 | orchestrator | 2025-05-26 03:46:24.524575 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2025-05-26 03:46:24.525364 | orchestrator | Monday 26 May 2025 03:46:24 +0000 (0:00:00.423) 0:00:32.163 ************ 2025-05-26 03:46:24.849595 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-26 03:46:24.851811 | orchestrator | 2025-05-26 03:46:24.854754 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 03:46:24.854814 | orchestrator | Monday 26 May 2025 03:46:24 +0000 (0:00:00.326) 0:00:32.490 ************ 2025-05-26 03:46:25.219400 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-26 03:46:25.220045 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-26 03:46:25.220381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-26 03:46:25.222557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-26 03:46:25.223081 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-26 03:46:25.224045 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-26 03:46:25.225361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-26 03:46:25.227310 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-26 03:46:25.228291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-26 03:46:25.229055 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-26 03:46:25.229837 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2025-05-26 03:46:25.230326 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-26 03:46:25.231323 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-26 03:46:25.232891 | orchestrator | 2025-05-26 03:46:25.233458 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 03:46:25.234179 | orchestrator | Monday 26 May 2025 03:46:25 +0000 (0:00:00.371) 0:00:32.861 ************ 2025-05-26 03:46:25.426279 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:25.427603 | orchestrator | 2025-05-26 03:46:25.430220 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 03:46:25.430606 | orchestrator | Monday 26 May 2025 03:46:25 +0000 (0:00:00.204) 0:00:33.065 ************ 2025-05-26 03:46:25.652217 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:25.652451 | orchestrator | 2025-05-26 03:46:25.653707 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 03:46:25.655198 | orchestrator | Monday 26 May 2025 03:46:25 +0000 (0:00:00.227) 0:00:33.293 ************ 2025-05-26 03:46:25.871168 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:25.871897 | orchestrator | 2025-05-26 03:46:25.873627 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 03:46:25.874840 | orchestrator | Monday 26 May 2025 03:46:25 +0000 (0:00:00.218) 0:00:33.511 ************ 2025-05-26 03:46:26.078796 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:26.079148 | orchestrator | 2025-05-26 03:46:26.080020 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 03:46:26.081876 | orchestrator | Monday 26 May 2025 03:46:26 +0000 (0:00:00.208) 0:00:33.719 ************ 2025-05-26 03:46:26.285631 
| orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:26.285890 | orchestrator | 2025-05-26 03:46:26.286754 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 03:46:26.287069 | orchestrator | Monday 26 May 2025 03:46:26 +0000 (0:00:00.208) 0:00:33.928 ************ 2025-05-26 03:46:26.913504 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:26.913972 | orchestrator | 2025-05-26 03:46:26.915068 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 03:46:26.915697 | orchestrator | Monday 26 May 2025 03:46:26 +0000 (0:00:00.626) 0:00:34.554 ************ 2025-05-26 03:46:27.113828 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:27.114745 | orchestrator | 2025-05-26 03:46:27.116349 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 03:46:27.118088 | orchestrator | Monday 26 May 2025 03:46:27 +0000 (0:00:00.201) 0:00:34.755 ************ 2025-05-26 03:46:27.319071 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:27.320238 | orchestrator | 2025-05-26 03:46:27.321040 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 03:46:27.322114 | orchestrator | Monday 26 May 2025 03:46:27 +0000 (0:00:00.204) 0:00:34.960 ************ 2025-05-26 03:46:27.957728 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-26 03:46:27.958137 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-26 03:46:27.958622 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-26 03:46:27.959212 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-26 03:46:27.960049 | orchestrator | 2025-05-26 03:46:27.960672 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 03:46:27.961300 | orchestrator | Monday 26 May 2025 03:46:27 +0000 (0:00:00.638) 0:00:35.598 
************ 2025-05-26 03:46:28.160285 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:28.160372 | orchestrator | 2025-05-26 03:46:28.160686 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 03:46:28.161153 | orchestrator | Monday 26 May 2025 03:46:28 +0000 (0:00:00.199) 0:00:35.798 ************ 2025-05-26 03:46:28.357375 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:28.357549 | orchestrator | 2025-05-26 03:46:28.358928 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 03:46:28.360121 | orchestrator | Monday 26 May 2025 03:46:28 +0000 (0:00:00.199) 0:00:35.997 ************ 2025-05-26 03:46:28.576053 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:28.576144 | orchestrator | 2025-05-26 03:46:28.576154 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 03:46:28.576162 | orchestrator | Monday 26 May 2025 03:46:28 +0000 (0:00:00.217) 0:00:36.215 ************ 2025-05-26 03:46:28.745122 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:28.745804 | orchestrator | 2025-05-26 03:46:28.746560 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-26 03:46:28.747230 | orchestrator | Monday 26 May 2025 03:46:28 +0000 (0:00:00.172) 0:00:36.388 ************ 2025-05-26 03:46:28.934649 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-05-26 03:46:28.935187 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-05-26 03:46:28.936320 | orchestrator | 2025-05-26 03:46:28.937573 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-26 03:46:28.938631 | orchestrator | Monday 26 May 2025 03:46:28 +0000 (0:00:00.187) 0:00:36.575 ************ 2025-05-26 03:46:29.067622 | orchestrator | skipping: 
[testbed-node-5] 2025-05-26 03:46:29.069247 | orchestrator | 2025-05-26 03:46:29.069631 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-26 03:46:29.071337 | orchestrator | Monday 26 May 2025 03:46:29 +0000 (0:00:00.134) 0:00:36.710 ************ 2025-05-26 03:46:29.198301 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:29.200991 | orchestrator | 2025-05-26 03:46:29.204782 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-26 03:46:29.206797 | orchestrator | Monday 26 May 2025 03:46:29 +0000 (0:00:00.129) 0:00:36.840 ************ 2025-05-26 03:46:29.327291 | orchestrator | skipping: [testbed-node-5] 2025-05-26 03:46:29.328022 | orchestrator | 2025-05-26 03:46:29.329985 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-26 03:46:29.331286 | orchestrator | Monday 26 May 2025 03:46:29 +0000 (0:00:00.128) 0:00:36.968 ************ 2025-05-26 03:46:29.652582 | orchestrator | ok: [testbed-node-5] 2025-05-26 03:46:29.653510 | orchestrator | 2025-05-26 03:46:29.655230 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-26 03:46:29.656210 | orchestrator | Monday 26 May 2025 03:46:29 +0000 (0:00:00.325) 0:00:37.294 ************ 2025-05-26 03:46:29.814303 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2271cd7c-c83a-5004-8392-4222139fb32e'}}) 2025-05-26 03:46:29.814815 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd953f63c-8039-5fa8-9cb1-6d3fed502880'}}) 2025-05-26 03:46:29.815985 | orchestrator | 2025-05-26 03:46:29.816916 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-26 03:46:29.819427 | orchestrator | Monday 26 May 2025 03:46:29 +0000 (0:00:00.160) 0:00:37.454 ************ 2025-05-26 03:46:29.972510 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2271cd7c-c83a-5004-8392-4222139fb32e'}})
2025-05-26 03:46:29.973130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd953f63c-8039-5fa8-9cb1-6d3fed502880'}})
2025-05-26 03:46:29.975161 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:46:29.975750 | orchestrator |
2025-05-26 03:46:29.976902 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-26 03:46:29.977592 | orchestrator | Monday 26 May 2025 03:46:29 +0000 (0:00:00.158) 0:00:37.613 ************
2025-05-26 03:46:30.117728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2271cd7c-c83a-5004-8392-4222139fb32e'}})
2025-05-26 03:46:30.121113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd953f63c-8039-5fa8-9cb1-6d3fed502880'}})
2025-05-26 03:46:30.121163 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:46:30.122723 | orchestrator |
2025-05-26 03:46:30.124369 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-26 03:46:30.128009 | orchestrator | Monday 26 May 2025 03:46:30 +0000 (0:00:00.145) 0:00:37.759 ************
2025-05-26 03:46:30.273526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2271cd7c-c83a-5004-8392-4222139fb32e'}})
2025-05-26 03:46:30.277689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd953f63c-8039-5fa8-9cb1-6d3fed502880'}})
2025-05-26 03:46:30.278532 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:46:30.279047 | orchestrator |
2025-05-26 03:46:30.279414 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-26 03:46:30.280380 | orchestrator | Monday 26 May 2025 03:46:30 +0000 (0:00:00.153) 0:00:37.912 ************
2025-05-26 03:46:30.447434 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:46:30.449042 | orchestrator |
2025-05-26 03:46:30.450248 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-26 03:46:30.450633 | orchestrator | Monday 26 May 2025 03:46:30 +0000 (0:00:00.175) 0:00:38.088 ************
2025-05-26 03:46:30.597636 | orchestrator | ok: [testbed-node-5]
2025-05-26 03:46:30.598926 | orchestrator |
2025-05-26 03:46:30.601290 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-26 03:46:30.601401 | orchestrator | Monday 26 May 2025 03:46:30 +0000 (0:00:00.150) 0:00:38.239 ************
2025-05-26 03:46:30.754354 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:46:30.755463 | orchestrator |
2025-05-26 03:46:30.756296 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-26 03:46:30.757077 | orchestrator | Monday 26 May 2025 03:46:30 +0000 (0:00:00.156) 0:00:38.395 ************
2025-05-26 03:46:30.893822 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:46:30.894373 | orchestrator |
2025-05-26 03:46:30.895122 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-26 03:46:30.895934 | orchestrator | Monday 26 May 2025 03:46:30 +0000 (0:00:00.140) 0:00:38.536 ************
2025-05-26 03:46:31.032672 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:46:31.034890 | orchestrator |
2025-05-26 03:46:31.036614 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-26 03:46:31.037842 | orchestrator | Monday 26 May 2025 03:46:31 +0000 (0:00:00.136) 0:00:38.672 ************
2025-05-26 03:46:31.180118 | orchestrator | ok: [testbed-node-5] => {
2025-05-26 03:46:31.180984 | orchestrator |     "ceph_osd_devices": {
2025-05-26 03:46:31.181997 | orchestrator |         "sdb": {
2025-05-26 03:46:31.183619 | orchestrator |             "osd_lvm_uuid": "2271cd7c-c83a-5004-8392-4222139fb32e"
2025-05-26 03:46:31.184017 | orchestrator |         },
2025-05-26 03:46:31.184691 | orchestrator |         "sdc": {
2025-05-26 03:46:31.185560 | orchestrator |             "osd_lvm_uuid": "d953f63c-8039-5fa8-9cb1-6d3fed502880"
2025-05-26 03:46:31.186084 | orchestrator |         }
2025-05-26 03:46:31.186659 | orchestrator |     }
2025-05-26 03:46:31.186997 | orchestrator | }
2025-05-26 03:46:31.187811 | orchestrator |
2025-05-26 03:46:31.188367 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-26 03:46:31.188960 | orchestrator | Monday 26 May 2025 03:46:31 +0000 (0:00:00.148) 0:00:38.821 ************
2025-05-26 03:46:31.318278 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:46:31.319045 | orchestrator |
2025-05-26 03:46:31.319909 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-26 03:46:31.320725 | orchestrator | Monday 26 May 2025 03:46:31 +0000 (0:00:00.138) 0:00:38.959 ************
2025-05-26 03:46:31.636773 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:46:31.637211 | orchestrator |
2025-05-26 03:46:31.638601 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-26 03:46:31.640271 | orchestrator | Monday 26 May 2025 03:46:31 +0000 (0:00:00.318) 0:00:39.278 ************
2025-05-26 03:46:31.772238 | orchestrator | skipping: [testbed-node-5]
2025-05-26 03:46:31.772721 | orchestrator |
2025-05-26 03:46:31.773751 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-26 03:46:31.775879 | orchestrator | Monday 26 May 2025 03:46:31 +0000 (0:00:00.135) 0:00:39.414 ************
2025-05-26 03:46:31.973315 | orchestrator | changed: [testbed-node-5] => {
2025-05-26 03:46:31.974117 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-26 03:46:31.974852 | orchestrator |         "ceph_osd_devices": {
2025-05-26 03:46:31.977593 | orchestrator |             "sdb": {
2025-05-26 03:46:31.977607 | orchestrator |                 "osd_lvm_uuid": "2271cd7c-c83a-5004-8392-4222139fb32e"
2025-05-26 03:46:31.977864 | orchestrator |             },
2025-05-26 03:46:31.978815 | orchestrator |             "sdc": {
2025-05-26 03:46:31.978915 | orchestrator |                 "osd_lvm_uuid": "d953f63c-8039-5fa8-9cb1-6d3fed502880"
2025-05-26 03:46:31.979747 | orchestrator |             }
2025-05-26 03:46:31.980198 | orchestrator |         },
2025-05-26 03:46:31.980685 | orchestrator |         "lvm_volumes": [
2025-05-26 03:46:31.981136 | orchestrator |             {
2025-05-26 03:46:31.981544 | orchestrator |                 "data": "osd-block-2271cd7c-c83a-5004-8392-4222139fb32e",
2025-05-26 03:46:31.982132 | orchestrator |                 "data_vg": "ceph-2271cd7c-c83a-5004-8392-4222139fb32e"
2025-05-26 03:46:31.982507 | orchestrator |             },
2025-05-26 03:46:31.983054 | orchestrator |             {
2025-05-26 03:46:31.983546 | orchestrator |                 "data": "osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880",
2025-05-26 03:46:31.983839 | orchestrator |                 "data_vg": "ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880"
2025-05-26 03:46:31.984374 | orchestrator |             }
2025-05-26 03:46:31.984652 | orchestrator |         ]
2025-05-26 03:46:31.985283 | orchestrator |     }
2025-05-26 03:46:31.985571 | orchestrator | }
2025-05-26 03:46:31.986002 | orchestrator |
2025-05-26 03:46:31.986408 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-26 03:46:31.986840 | orchestrator | Monday 26 May 2025 03:46:31 +0000 (0:00:00.199) 0:00:39.614 ************
2025-05-26 03:46:32.926381 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-26 03:46:32.926881 | orchestrator |
2025-05-26 03:46:32.928020 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 03:46:32.928107 | orchestrator | 2025-05-26 03:46:32 | INFO  | Play has been completed.
There may now be a delay until all logs have been written. 2025-05-26 03:46:32.928382 | orchestrator | 2025-05-26 03:46:32 | INFO  | Please wait and do not abort execution. 2025-05-26 03:46:32.929729 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-26 03:46:32.930565 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-26 03:46:32.931803 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-26 03:46:32.932719 | orchestrator | 2025-05-26 03:46:32.933486 | orchestrator | 2025-05-26 03:46:32.934518 | orchestrator | 2025-05-26 03:46:32.935362 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-26 03:46:32.936268 | orchestrator | Monday 26 May 2025 03:46:32 +0000 (0:00:00.953) 0:00:40.567 ************ 2025-05-26 03:46:32.936528 | orchestrator | =============================================================================== 2025-05-26 03:46:32.937241 | orchestrator | Write configuration file ------------------------------------------------ 4.02s 2025-05-26 03:46:32.938077 | orchestrator | Add known partitions to the list of available block devices ------------- 1.13s 2025-05-26 03:46:32.938477 | orchestrator | Add known links to the list of available block devices ------------------ 1.11s 2025-05-26 03:46:32.938999 | orchestrator | Get initial list of available block devices ----------------------------- 1.01s 2025-05-26 03:46:32.939556 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s 2025-05-26 03:46:32.940109 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.89s 2025-05-26 03:46:32.940644 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s 2025-05-26 03:46:32.941348 | orchestrator | Add known 
links to the list of available block devices ------------------ 0.65s 2025-05-26 03:46:32.941607 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.65s 2025-05-26 03:46:32.942098 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s 2025-05-26 03:46:32.942525 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.64s 2025-05-26 03:46:32.942807 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s 2025-05-26 03:46:32.943387 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s 2025-05-26 03:46:32.943665 | orchestrator | Print configuration data ------------------------------------------------ 0.62s 2025-05-26 03:46:32.944170 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.61s 2025-05-26 03:46:32.944501 | orchestrator | Set WAL devices config data --------------------------------------------- 0.59s 2025-05-26 03:46:32.944930 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s 2025-05-26 03:46:32.945281 | orchestrator | Print DB devices -------------------------------------------------------- 0.56s 2025-05-26 03:46:32.945581 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s 2025-05-26 03:46:32.946096 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s 2025-05-26 03:46:45.238710 | orchestrator | Registering Redlock._acquired_script 2025-05-26 03:46:45.238788 | orchestrator | Registering Redlock._extend_script 2025-05-26 03:46:45.238794 | orchestrator | Registering Redlock._release_script 2025-05-26 03:46:45.295370 | orchestrator | 2025-05-26 03:46:45 | INFO  | Task d5e1200f-7216-48be-a418-3684257763f5 (sync inventory) is running in background. Output coming soon. 
2025-05-26 04:46:47.939832 | orchestrator | 2025-05-26 04:46:47 | INFO  | Task ce704d2e-d421-415a-8164-9491f1483784 (ceph-create-lvm-devices) was prepared for execution. 2025-05-26 04:46:47.939979 | orchestrator | 2025-05-26 04:46:47 | INFO  | It takes a moment until task ce704d2e-d421-415a-8164-9491f1483784 (ceph-create-lvm-devices) has been started and output is visible here. 2025-05-26 04:46:52.092230 | orchestrator | 2025-05-26 04:46:52.094930 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-26 04:46:52.094959 | orchestrator | 2025-05-26 04:46:52.094968 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-26 04:46:52.094977 | orchestrator | Monday 26 May 2025 04:46:52 +0000 (0:00:00.302) 0:00:00.302 ************ 2025-05-26 04:46:52.307346 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-26 04:46:52.307679 | orchestrator | 2025-05-26 04:46:52.308068 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-26 04:46:52.309367 | orchestrator | Monday 26 May 2025 04:46:52 +0000 (0:00:00.225) 0:00:00.527 ************ 2025-05-26 04:46:52.525093 | orchestrator | ok: [testbed-node-3] 2025-05-26 04:46:52.526844 | orchestrator | 2025-05-26 04:46:52.527964 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 04:46:52.528944 | orchestrator | Monday 26 May 2025 04:46:52 +0000 (0:00:00.217) 0:00:00.744 ************ 2025-05-26 04:46:52.894781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-26 04:46:52.895194 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-26 04:46:52.896492 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-26 04:46:52.897169 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-26 04:46:52.897617 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-26 04:46:52.898654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-26 04:46:52.899072 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-26 04:46:52.899710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-26 04:46:52.900489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-26 04:46:52.901186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-26 04:46:52.901664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-26 04:46:52.902404 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-26 04:46:52.902786 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-26 04:46:52.903275 | orchestrator | 2025-05-26 04:46:52.904574 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 04:46:52.904876 | orchestrator | Monday 26 May 2025 04:46:52 +0000 (0:00:00.369) 0:00:01.113 ************ 2025-05-26 04:46:53.322092 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:46:53.322492 | orchestrator | 2025-05-26 04:46:53.323471 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 04:46:53.324155 | orchestrator | Monday 26 May 2025 04:46:53 +0000 (0:00:00.427) 0:00:01.541 ************ 2025-05-26 04:46:53.509540 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:46:53.510459 | orchestrator | 2025-05-26 04:46:53.511280 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 04:46:53.512297 | orchestrator | Monday 26 May 2025 04:46:53 +0000 (0:00:00.188) 0:00:01.729 ************ 2025-05-26 04:46:53.699186 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:46:53.699926 | orchestrator | 2025-05-26 04:46:53.700985 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 04:46:53.701305 | orchestrator | Monday 26 May 2025 04:46:53 +0000 (0:00:00.189) 0:00:01.919 ************ 2025-05-26 04:46:53.884713 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:46:53.885215 | orchestrator | 2025-05-26 04:46:53.886196 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 04:46:53.887244 | orchestrator | Monday 26 May 2025 04:46:53 +0000 (0:00:00.183) 0:00:02.103 ************ 2025-05-26 04:46:54.072392 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:46:54.072629 | orchestrator | 2025-05-26 04:46:54.074259 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 04:46:54.074770 | orchestrator | Monday 26 May 2025 04:46:54 +0000 (0:00:00.186) 0:00:02.290 ************ 2025-05-26 04:46:54.259424 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:46:54.259759 | orchestrator | 2025-05-26 04:46:54.261084 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 04:46:54.261739 | orchestrator | Monday 26 May 2025 04:46:54 +0000 (0:00:00.189) 0:00:02.479 ************ 2025-05-26 04:46:54.448277 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:46:54.449349 | orchestrator | 2025-05-26 04:46:54.449938 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 04:46:54.451056 | orchestrator | Monday 26 May 2025 04:46:54 +0000 (0:00:00.188) 0:00:02.668 ************ 
2025-05-26 04:46:54.640100 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:46:54.640587 | orchestrator | 2025-05-26 04:46:54.641203 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 04:46:54.642165 | orchestrator | Monday 26 May 2025 04:46:54 +0000 (0:00:00.191) 0:00:02.860 ************ 2025-05-26 04:46:55.014705 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9d4bdcb5-a0b8-4173-af9e-b961e366e943) 2025-05-26 04:46:55.015426 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9d4bdcb5-a0b8-4173-af9e-b961e366e943) 2025-05-26 04:46:55.016859 | orchestrator | 2025-05-26 04:46:55.018142 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 04:46:55.019393 | orchestrator | Monday 26 May 2025 04:46:55 +0000 (0:00:00.372) 0:00:03.232 ************ 2025-05-26 04:46:55.430348 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_feee3a86-288f-4310-9e74-72f077da2d2c) 2025-05-26 04:46:55.430475 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_feee3a86-288f-4310-9e74-72f077da2d2c) 2025-05-26 04:46:55.430491 | orchestrator | 2025-05-26 04:46:55.430797 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 04:46:55.430870 | orchestrator | Monday 26 May 2025 04:46:55 +0000 (0:00:00.417) 0:00:03.650 ************ 2025-05-26 04:46:56.079018 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d6e6216b-cbe0-4182-a9d6-b0841cd13c95) 2025-05-26 04:46:56.079930 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d6e6216b-cbe0-4182-a9d6-b0841cd13c95) 2025-05-26 04:46:56.081051 | orchestrator | 2025-05-26 04:46:56.081898 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 04:46:56.082549 | orchestrator | Monday 26 May 2025 04:46:56 +0000 
(0:00:00.646) 0:00:04.297 ************ 2025-05-26 04:46:56.962166 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c087d35e-df49-49d8-817c-07623fd598fd) 2025-05-26 04:46:56.962654 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c087d35e-df49-49d8-817c-07623fd598fd) 2025-05-26 04:46:56.964156 | orchestrator | 2025-05-26 04:46:56.965142 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-26 04:46:56.966422 | orchestrator | Monday 26 May 2025 04:46:56 +0000 (0:00:00.884) 0:00:05.181 ************ 2025-05-26 04:46:57.293202 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-26 04:46:57.293367 | orchestrator | 2025-05-26 04:46:57.294093 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 04:46:57.295060 | orchestrator | Monday 26 May 2025 04:46:57 +0000 (0:00:00.332) 0:00:05.513 ************ 2025-05-26 04:46:57.683668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-26 04:46:57.683884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-26 04:46:57.685356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-26 04:46:57.686090 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-26 04:46:57.687813 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-26 04:46:57.688167 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-26 04:46:57.689291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-26 04:46:57.690550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2025-05-26 04:46:57.691136 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-26 04:46:57.691605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-26 04:46:57.692241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-26 04:46:57.692781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-26 04:46:57.693807 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-26 04:46:57.694241 | orchestrator | 2025-05-26 04:46:57.694844 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 04:46:57.694934 | orchestrator | Monday 26 May 2025 04:46:57 +0000 (0:00:00.389) 0:00:05.903 ************ 2025-05-26 04:46:57.887901 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:46:57.888221 | orchestrator | 2025-05-26 04:46:57.889236 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 04:46:57.890396 | orchestrator | Monday 26 May 2025 04:46:57 +0000 (0:00:00.203) 0:00:06.106 ************ 2025-05-26 04:46:58.098601 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:46:58.099194 | orchestrator | 2025-05-26 04:46:58.099227 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 04:46:58.100391 | orchestrator | Monday 26 May 2025 04:46:58 +0000 (0:00:00.209) 0:00:06.316 ************ 2025-05-26 04:46:58.285416 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:46:58.286738 | orchestrator | 2025-05-26 04:46:58.287394 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 04:46:58.288244 | orchestrator | Monday 26 May 2025 04:46:58 +0000 
(0:00:00.189) 0:00:06.506 ************ 2025-05-26 04:46:58.494888 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:46:58.495251 | orchestrator | 2025-05-26 04:46:58.496224 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 04:46:58.497061 | orchestrator | Monday 26 May 2025 04:46:58 +0000 (0:00:00.209) 0:00:06.715 ************ 2025-05-26 04:46:58.677039 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:46:58.677457 | orchestrator | 2025-05-26 04:46:58.678335 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 04:46:58.678985 | orchestrator | Monday 26 May 2025 04:46:58 +0000 (0:00:00.181) 0:00:06.897 ************ 2025-05-26 04:46:58.872977 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:46:58.873180 | orchestrator | 2025-05-26 04:46:58.873631 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 04:46:58.874493 | orchestrator | Monday 26 May 2025 04:46:58 +0000 (0:00:00.195) 0:00:07.093 ************ 2025-05-26 04:46:59.097964 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:46:59.098477 | orchestrator | 2025-05-26 04:46:59.099394 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 04:46:59.100179 | orchestrator | Monday 26 May 2025 04:46:59 +0000 (0:00:00.224) 0:00:07.317 ************ 2025-05-26 04:46:59.281979 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:46:59.282273 | orchestrator | 2025-05-26 04:46:59.282805 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 04:46:59.283457 | orchestrator | Monday 26 May 2025 04:46:59 +0000 (0:00:00.184) 0:00:07.501 ************ 2025-05-26 04:47:00.296558 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-26 04:47:00.297442 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-26 
04:47:00.298929 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-26 04:47:00.300301 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-26 04:47:00.301095 | orchestrator | 2025-05-26 04:47:00.302355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 04:47:00.303719 | orchestrator | Monday 26 May 2025 04:47:00 +0000 (0:00:01.013) 0:00:08.515 ************ 2025-05-26 04:47:00.489204 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:47:00.489651 | orchestrator | 2025-05-26 04:47:00.492762 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 04:47:00.494006 | orchestrator | Monday 26 May 2025 04:47:00 +0000 (0:00:00.193) 0:00:08.708 ************ 2025-05-26 04:47:00.678497 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:47:00.679636 | orchestrator | 2025-05-26 04:47:00.680279 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 04:47:00.681623 | orchestrator | Monday 26 May 2025 04:47:00 +0000 (0:00:00.189) 0:00:08.898 ************ 2025-05-26 04:47:00.887203 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:47:00.887734 | orchestrator | 2025-05-26 04:47:00.888625 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-26 04:47:00.890853 | orchestrator | Monday 26 May 2025 04:47:00 +0000 (0:00:00.208) 0:00:09.106 ************ 2025-05-26 04:47:01.082080 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:47:01.083796 | orchestrator | 2025-05-26 04:47:01.084111 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-26 04:47:01.087918 | orchestrator | Monday 26 May 2025 04:47:01 +0000 (0:00:00.192) 0:00:09.299 ************ 2025-05-26 04:47:01.213797 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:47:01.214842 | orchestrator | 2025-05-26 
04:47:01.216295 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-26 04:47:01.217278 | orchestrator | Monday 26 May 2025 04:47:01 +0000 (0:00:00.134) 0:00:09.434 ************ 2025-05-26 04:47:01.408123 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '308d2e7c-9a7f-5d4d-8709-bdc410450a80'}}) 2025-05-26 04:47:01.409222 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'}}) 2025-05-26 04:47:01.409662 | orchestrator | 2025-05-26 04:47:01.410865 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-26 04:47:01.411428 | orchestrator | Monday 26 May 2025 04:47:01 +0000 (0:00:00.192) 0:00:09.626 ************ 2025-05-26 04:47:03.663724 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'}) 2025-05-26 04:47:03.663857 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'}) 2025-05-26 04:47:03.663873 | orchestrator | 2025-05-26 04:47:03.663888 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-26 04:47:03.663963 | orchestrator | Monday 26 May 2025 04:47:03 +0000 (0:00:02.252) 0:00:11.879 ************ 2025-05-26 04:47:03.806907 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})  2025-05-26 04:47:03.808393 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})  2025-05-26 04:47:03.808424 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:47:03.809254 
| orchestrator | 2025-05-26 04:47:03.809885 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-26 04:47:03.810677 | orchestrator | Monday 26 May 2025 04:47:03 +0000 (0:00:00.146) 0:00:12.025 ************ 2025-05-26 04:47:05.271870 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'}) 2025-05-26 04:47:05.272033 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'}) 2025-05-26 04:47:05.272083 | orchestrator | 2025-05-26 04:47:05.272137 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-26 04:47:05.272626 | orchestrator | Monday 26 May 2025 04:47:05 +0000 (0:00:01.465) 0:00:13.490 ************ 2025-05-26 04:47:05.413465 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})  2025-05-26 04:47:05.413629 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})  2025-05-26 04:47:05.413646 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:47:05.413659 | orchestrator | 2025-05-26 04:47:05.413671 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-26 04:47:05.413684 | orchestrator | Monday 26 May 2025 04:47:05 +0000 (0:00:00.140) 0:00:13.631 ************ 2025-05-26 04:47:05.558893 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:47:05.560398 | orchestrator | 2025-05-26 04:47:05.561528 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-26 04:47:05.563043 | orchestrator | Monday 26 May 2025 04:47:05 
+0000 (0:00:00.147) 0:00:13.778 ************ 2025-05-26 04:47:05.898608 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})  2025-05-26 04:47:05.899368 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})  2025-05-26 04:47:05.900573 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:47:05.901619 | orchestrator | 2025-05-26 04:47:05.902785 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-26 04:47:05.903637 | orchestrator | Monday 26 May 2025 04:47:05 +0000 (0:00:00.340) 0:00:14.118 ************ 2025-05-26 04:47:06.048006 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:47:06.049947 | orchestrator | 2025-05-26 04:47:06.053589 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-26 04:47:06.053619 | orchestrator | Monday 26 May 2025 04:47:06 +0000 (0:00:00.149) 0:00:14.267 ************ 2025-05-26 04:47:06.208178 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})  2025-05-26 04:47:06.209357 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})  2025-05-26 04:47:06.211001 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:47:06.211263 | orchestrator | 2025-05-26 04:47:06.212846 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-26 04:47:06.213426 | orchestrator | Monday 26 May 2025 04:47:06 +0000 (0:00:00.159) 0:00:14.427 ************ 2025-05-26 04:47:06.350614 | orchestrator | skipping: [testbed-node-3] 2025-05-26 
04:47:06.351204 | orchestrator | 2025-05-26 04:47:06.352499 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-26 04:47:06.353252 | orchestrator | Monday 26 May 2025 04:47:06 +0000 (0:00:00.141) 0:00:14.569 ************ 2025-05-26 04:47:06.514258 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})  2025-05-26 04:47:06.515414 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})  2025-05-26 04:47:06.517350 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:47:06.518823 | orchestrator | 2025-05-26 04:47:06.520301 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-26 04:47:06.520403 | orchestrator | Monday 26 May 2025 04:47:06 +0000 (0:00:00.164) 0:00:14.733 ************ 2025-05-26 04:47:06.651134 | orchestrator | ok: [testbed-node-3] 2025-05-26 04:47:06.652960 | orchestrator | 2025-05-26 04:47:06.653461 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-26 04:47:06.657098 | orchestrator | Monday 26 May 2025 04:47:06 +0000 (0:00:00.137) 0:00:14.871 ************ 2025-05-26 04:47:06.812962 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})  2025-05-26 04:47:06.813432 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})  2025-05-26 04:47:06.817210 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:47:06.818236 | orchestrator | 2025-05-26 04:47:06.819438 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] 
*************** 2025-05-26 04:47:06.822443 | orchestrator | Monday 26 May 2025 04:47:06 +0000 (0:00:00.159) 0:00:15.030 ************ 2025-05-26 04:47:06.986366 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})  2025-05-26 04:47:06.986654 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})  2025-05-26 04:47:06.986682 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:47:06.987864 | orchestrator | 2025-05-26 04:47:06.988761 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-26 04:47:06.989728 | orchestrator | Monday 26 May 2025 04:47:06 +0000 (0:00:00.175) 0:00:15.206 ************ 2025-05-26 04:47:07.136058 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})  2025-05-26 04:47:07.136679 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})  2025-05-26 04:47:07.137219 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:47:07.137921 | orchestrator | 2025-05-26 04:47:07.138398 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-26 04:47:07.138914 | orchestrator | Monday 26 May 2025 04:47:07 +0000 (0:00:00.150) 0:00:15.356 ************ 2025-05-26 04:47:07.274162 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:47:07.275138 | orchestrator | 2025-05-26 04:47:07.276294 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-26 04:47:07.279357 | orchestrator | Monday 26 May 2025 04:47:07 +0000 (0:00:00.137) 0:00:15.494 ************ 
2025-05-26 04:47:07.399371 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:07.402390 | orchestrator |
2025-05-26 04:47:07.402464 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-05-26 04:47:07.402625 | orchestrator | Monday 26 May 2025 04:47:07 +0000 (0:00:00.124) 0:00:15.618 ************
2025-05-26 04:47:07.520381 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:07.521313 | orchestrator |
2025-05-26 04:47:07.524192 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-05-26 04:47:07.524235 | orchestrator | Monday 26 May 2025 04:47:07 +0000 (0:00:00.122) 0:00:15.741 ************
2025-05-26 04:47:07.769239 | orchestrator | ok: [testbed-node-3] => {
2025-05-26 04:47:07.769690 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-05-26 04:47:07.773438 | orchestrator | }
2025-05-26 04:47:07.773471 | orchestrator |
2025-05-26 04:47:07.773485 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-05-26 04:47:07.773500 | orchestrator | Monday 26 May 2025 04:47:07 +0000 (0:00:00.248) 0:00:15.989 ************
2025-05-26 04:47:07.883330 | orchestrator | ok: [testbed-node-3] => {
2025-05-26 04:47:07.883892 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-05-26 04:47:07.885034 | orchestrator | }
2025-05-26 04:47:07.886805 | orchestrator |
2025-05-26 04:47:07.886844 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-05-26 04:47:07.887091 | orchestrator | Monday 26 May 2025 04:47:07 +0000 (0:00:00.114) 0:00:16.104 ************
2025-05-26 04:47:08.002312 | orchestrator | ok: [testbed-node-3] => {
2025-05-26 04:47:08.002442 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-05-26 04:47:08.002733 | orchestrator | }
2025-05-26 04:47:08.003039 | orchestrator |
2025-05-26 04:47:08.003478 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-05-26 04:47:08.003528 | orchestrator | Monday 26 May 2025 04:47:07 +0000 (0:00:00.119) 0:00:16.223 ************
2025-05-26 04:47:08.623400 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:47:08.623555 | orchestrator |
2025-05-26 04:47:08.623575 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-05-26 04:47:08.623588 | orchestrator | Monday 26 May 2025 04:47:08 +0000 (0:00:00.616) 0:00:16.840 ************
2025-05-26 04:47:09.118731 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:47:09.118874 | orchestrator |
2025-05-26 04:47:09.119370 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-26 04:47:09.123069 | orchestrator | Monday 26 May 2025 04:47:09 +0000 (0:00:00.497) 0:00:17.337 ************
2025-05-26 04:47:09.622374 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:47:09.622486 | orchestrator |
2025-05-26 04:47:09.622546 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-05-26 04:47:09.622622 | orchestrator | Monday 26 May 2025 04:47:09 +0000 (0:00:00.500) 0:00:17.838 ************
2025-05-26 04:47:09.745738 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:47:09.747638 | orchestrator |
2025-05-26 04:47:09.751325 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-05-26 04:47:09.751617 | orchestrator | Monday 26 May 2025 04:47:09 +0000 (0:00:00.128) 0:00:17.967 ************
2025-05-26 04:47:09.833545 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:09.834742 | orchestrator |
2025-05-26 04:47:09.836667 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-05-26 04:47:09.837695 | orchestrator | Monday 26 May 2025 04:47:09 +0000 (0:00:00.086) 0:00:18.054 ************
2025-05-26 04:47:09.936600 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:09.940328 | orchestrator |
2025-05-26 04:47:09.940401 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-05-26 04:47:09.940428 | orchestrator | Monday 26 May 2025 04:47:09 +0000 (0:00:00.102) 0:00:18.157 ************
2025-05-26 04:47:10.067177 | orchestrator | ok: [testbed-node-3] => {
2025-05-26 04:47:10.071484 | orchestrator |  "vgs_report": {
2025-05-26 04:47:10.071613 | orchestrator |  "vg": []
2025-05-26 04:47:10.071681 | orchestrator |  }
2025-05-26 04:47:10.072688 | orchestrator | }
2025-05-26 04:47:10.073815 | orchestrator |
2025-05-26 04:47:10.074556 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-05-26 04:47:10.075250 | orchestrator | Monday 26 May 2025 04:47:10 +0000 (0:00:00.129) 0:00:18.287 ************
2025-05-26 04:47:10.197363 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:10.201195 | orchestrator |
2025-05-26 04:47:10.201379 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-05-26 04:47:10.201401 | orchestrator | Monday 26 May 2025 04:47:10 +0000 (0:00:00.131) 0:00:18.418 ************
2025-05-26 04:47:10.320934 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:10.327184 | orchestrator |
2025-05-26 04:47:10.327216 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-05-26 04:47:10.327230 | orchestrator | Monday 26 May 2025 04:47:10 +0000 (0:00:00.122) 0:00:18.541 ************
2025-05-26 04:47:10.586180 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:10.586980 | orchestrator |
2025-05-26 04:47:10.589451 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-05-26 04:47:10.589498 | orchestrator | Monday 26 May 2025 04:47:10 +0000 (0:00:00.264) 0:00:18.805 ************
2025-05-26 04:47:10.717336 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:10.718578 | orchestrator |
2025-05-26 04:47:10.721189 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-05-26 04:47:10.721214 | orchestrator | Monday 26 May 2025 04:47:10 +0000 (0:00:00.132) 0:00:18.938 ************
2025-05-26 04:47:10.849325 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:10.849449 | orchestrator |
2025-05-26 04:47:10.849567 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-05-26 04:47:10.849678 | orchestrator | Monday 26 May 2025 04:47:10 +0000 (0:00:00.129) 0:00:19.067 ************
2025-05-26 04:47:10.969729 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:10.969831 | orchestrator |
2025-05-26 04:47:10.970188 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-05-26 04:47:10.971396 | orchestrator | Monday 26 May 2025 04:47:10 +0000 (0:00:00.120) 0:00:19.188 ************
2025-05-26 04:47:11.094646 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:11.095706 | orchestrator |
2025-05-26 04:47:11.096775 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-05-26 04:47:11.097860 | orchestrator | Monday 26 May 2025 04:47:11 +0000 (0:00:00.126) 0:00:19.315 ************
2025-05-26 04:47:11.214105 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:11.214881 | orchestrator |
2025-05-26 04:47:11.218393 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-05-26 04:47:11.218441 | orchestrator | Monday 26 May 2025 04:47:11 +0000 (0:00:00.119) 0:00:19.435 ************
2025-05-26 04:47:11.332178 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:11.332838 | orchestrator |
2025-05-26 04:47:11.336324 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-05-26 04:47:11.336375 | orchestrator | Monday 26 May 2025 04:47:11 +0000 (0:00:00.118) 0:00:19.553 ************
2025-05-26 04:47:11.443608 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:11.444668 | orchestrator |
2025-05-26 04:47:11.445676 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-05-26 04:47:11.446621 | orchestrator | Monday 26 May 2025 04:47:11 +0000 (0:00:00.110) 0:00:19.663 ************
2025-05-26 04:47:11.550478 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:11.550611 | orchestrator |
2025-05-26 04:47:11.551541 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-05-26 04:47:11.551917 | orchestrator | Monday 26 May 2025 04:47:11 +0000 (0:00:00.108) 0:00:19.771 ************
2025-05-26 04:47:11.669231 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:11.670587 | orchestrator |
2025-05-26 04:47:11.670626 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-05-26 04:47:11.670640 | orchestrator | Monday 26 May 2025 04:47:11 +0000 (0:00:00.117) 0:00:19.889 ************
2025-05-26 04:47:11.802789 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:11.803236 | orchestrator |
2025-05-26 04:47:11.806320 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-05-26 04:47:11.806400 | orchestrator | Monday 26 May 2025 04:47:11 +0000 (0:00:00.134) 0:00:20.023 ************
2025-05-26 04:47:11.922160 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:11.922605 | orchestrator |
2025-05-26 04:47:11.926208 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-05-26 04:47:11.926260 | orchestrator | Monday 26 May 2025 04:47:11 +0000 (0:00:00.119) 0:00:20.143 ************
2025-05-26 04:47:12.178759 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})
2025-05-26 04:47:12.178967 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})
2025-05-26 04:47:12.179577 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:12.184129 | orchestrator |
2025-05-26 04:47:12.184197 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-05-26 04:47:12.184213 | orchestrator | Monday 26 May 2025 04:47:12 +0000 (0:00:00.256) 0:00:20.399 ************
2025-05-26 04:47:12.322287 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})
2025-05-26 04:47:12.322961 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})
2025-05-26 04:47:12.326464 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:12.327196 | orchestrator |
2025-05-26 04:47:12.327781 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-05-26 04:47:12.328891 | orchestrator | Monday 26 May 2025 04:47:12 +0000 (0:00:00.143) 0:00:20.543 ************
2025-05-26 04:47:12.457227 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})
2025-05-26 04:47:12.458543 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})
2025-05-26 04:47:12.458689 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:12.459338 | orchestrator |
2025-05-26 04:47:12.460828 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-05-26 04:47:12.462143 | orchestrator | Monday 26 May 2025 04:47:12 +0000 (0:00:00.133) 0:00:20.677 ************
2025-05-26 04:47:12.603827 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})
2025-05-26 04:47:12.604012 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})
2025-05-26 04:47:12.604709 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:12.606389 | orchestrator |
2025-05-26 04:47:12.606415 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-05-26 04:47:12.606429 | orchestrator | Monday 26 May 2025 04:47:12 +0000 (0:00:00.145) 0:00:20.823 ************
2025-05-26 04:47:12.761457 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})
2025-05-26 04:47:12.761592 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})
2025-05-26 04:47:12.761607 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:12.761619 | orchestrator |
2025-05-26 04:47:12.761731 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-05-26 04:47:12.761822 | orchestrator | Monday 26 May 2025 04:47:12 +0000 (0:00:00.157) 0:00:20.980 ************
2025-05-26 04:47:12.897146 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})
2025-05-26 04:47:12.897416 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})
2025-05-26 04:47:12.897962 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:12.899276 | orchestrator |
2025-05-26 04:47:12.899813 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-05-26 04:47:12.903015 | orchestrator | Monday 26 May 2025 04:47:12 +0000 (0:00:00.136) 0:00:21.117 ************
2025-05-26 04:47:13.043704 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})
2025-05-26 04:47:13.043935 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})
2025-05-26 04:47:13.044688 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:13.045584 | orchestrator |
2025-05-26 04:47:13.046267 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-05-26 04:47:13.046787 | orchestrator | Monday 26 May 2025 04:47:13 +0000 (0:00:00.146) 0:00:21.264 ************
2025-05-26 04:47:13.188495 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})
2025-05-26 04:47:13.188793 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})
2025-05-26 04:47:13.189771 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:13.190326 | orchestrator |
2025-05-26 04:47:13.191245 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-05-26 04:47:13.194287 | orchestrator | Monday 26 May 2025 04:47:13 +0000 (0:00:00.145) 0:00:21.409 ************
2025-05-26 04:47:13.700837 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:47:13.702237 | orchestrator |
2025-05-26 04:47:13.702617 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-05-26 04:47:13.703759 | orchestrator | Monday 26 May 2025 04:47:13 +0000 (0:00:00.509) 0:00:21.919 ************
2025-05-26 04:47:14.193350 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:47:14.193470 | orchestrator |
2025-05-26 04:47:14.195137 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-05-26 04:47:14.195499 | orchestrator | Monday 26 May 2025 04:47:14 +0000 (0:00:00.184) 0:00:22.413 ************
2025-05-26 04:47:14.379006 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:47:14.379117 | orchestrator |
2025-05-26 04:47:14.380399 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-05-26 04:47:14.381545 | orchestrator | Monday 26 May 2025 04:47:14 +0000 (0:00:00.164) 0:00:22.598 ************
2025-05-26 04:47:14.544763 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'vg_name': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})
2025-05-26 04:47:14.545600 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'vg_name': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})
2025-05-26 04:47:14.549281 | orchestrator |
2025-05-26 04:47:14.549329 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-05-26 04:47:14.549344 | orchestrator | Monday 26 May 2025 04:47:14 +0000 (0:00:00.164) 0:00:22.763 ************
2025-05-26 04:47:14.892911 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})
2025-05-26 04:47:14.893647 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})
2025-05-26 04:47:14.894881 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:14.898125 | orchestrator |
2025-05-26 04:47:14.898147 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-05-26 04:47:14.898162 | orchestrator | Monday 26 May 2025 04:47:14 +0000 (0:00:00.349) 0:00:23.112 ************
2025-05-26 04:47:15.043139 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})
2025-05-26 04:47:15.044031 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})
2025-05-26 04:47:15.045647 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:15.049398 | orchestrator |
2025-05-26 04:47:15.049438 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-26 04:47:15.050153 | orchestrator | Monday 26 May 2025 04:47:15 +0000 (0:00:00.150) 0:00:23.263 ************
2025-05-26 04:47:15.204019 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80', 'data_vg': 'ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80'})
2025-05-26 04:47:15.205362 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560', 'data_vg': 'ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560'})
2025-05-26 04:47:15.206427 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:47:15.212176 | orchestrator |
2025-05-26 04:47:15.212215 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-05-26 04:47:15.212257 | orchestrator | Monday 26 May 2025 04:47:15 +0000 (0:00:00.160) 0:00:23.423 ************
2025-05-26 04:47:15.494624 | orchestrator | ok: [testbed-node-3] => {
2025-05-26 04:47:15.494727 | orchestrator |  "lvm_report": {
2025-05-26 04:47:15.495264 | orchestrator |  "lv": [
2025-05-26 04:47:15.497432 | orchestrator |  {
2025-05-26 04:47:15.497463 | orchestrator |  "lv_name": "osd-block-308d2e7c-9a7f-5d4d-8709-bdc410450a80",
2025-05-26 04:47:15.498606 | orchestrator |  "vg_name": "ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80"
2025-05-26 04:47:15.499836 | orchestrator |  },
2025-05-26 04:47:15.500064 | orchestrator |  {
2025-05-26 04:47:15.500944 | orchestrator |  "lv_name": "osd-block-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560",
2025-05-26 04:47:15.501667 | orchestrator |  "vg_name": "ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560"
2025-05-26 04:47:15.502933 | orchestrator |  }
2025-05-26 04:47:15.503130 | orchestrator |  ],
2025-05-26 04:47:15.504009 | orchestrator |  "pv": [
2025-05-26 04:47:15.504715 | orchestrator |  {
2025-05-26 04:47:15.506321 | orchestrator |  "pv_name": "/dev/sdb",
2025-05-26 04:47:15.506346 | orchestrator |  "vg_name": "ceph-308d2e7c-9a7f-5d4d-8709-bdc410450a80"
2025-05-26 04:47:15.506846 | orchestrator |  },
2025-05-26 04:47:15.507042 | orchestrator |  {
2025-05-26 04:47:15.508071 | orchestrator |  "pv_name": "/dev/sdc",
2025-05-26 04:47:15.508663 | orchestrator |  "vg_name": "ceph-5dc1dea4-54cd-5a78-85ff-70cfe3c9c560"
2025-05-26 04:47:15.509314 | orchestrator |  }
2025-05-26 04:47:15.509675 | orchestrator |  ]
2025-05-26 04:47:15.510181 | orchestrator |  }
2025-05-26 04:47:15.510814 | orchestrator | }
2025-05-26 04:47:15.512408 | orchestrator |
2025-05-26 04:47:15.512437 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-26 04:47:15.512449 | orchestrator |
2025-05-26 04:47:15.514436 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-26 04:47:15.514611 | orchestrator | Monday 26 May 2025 04:47:15 +0000 (0:00:00.288) 0:00:23.712 ************
2025-05-26 04:47:15.733798 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-26 04:47:15.735271 | orchestrator |
2025-05-26 04:47:15.736673 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-26 04:47:15.739570 | orchestrator | Monday 26 May 2025 04:47:15 +0000 (0:00:00.239) 0:00:23.952 ************
2025-05-26 04:47:15.971920 | orchestrator | ok: [testbed-node-4]
2025-05-26 04:47:15.974276 | orchestrator |
2025-05-26 04:47:15.978402 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:15.980366 | orchestrator | Monday 26 May 2025 04:47:15 +0000 (0:00:00.239) 0:00:24.191 ************
2025-05-26 04:47:16.378612 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-05-26 04:47:16.380366 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-05-26 04:47:16.382456 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-05-26 04:47:16.383986 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-05-26 04:47:16.386261 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-05-26 04:47:16.387449 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-05-26 04:47:16.388304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-05-26 04:47:16.389647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-05-26 04:47:16.390244 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-05-26 04:47:16.391656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-05-26 04:47:16.392647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-05-26 04:47:16.393879 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-05-26 04:47:16.394767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-05-26 04:47:16.395406 | orchestrator |
2025-05-26 04:47:16.396270 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:16.397194 | orchestrator | Monday 26 May 2025 04:47:16 +0000 (0:00:00.405) 0:00:24.597 ************
2025-05-26 04:47:16.566761 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:16.567270 | orchestrator |
2025-05-26 04:47:16.567644 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:16.568055 | orchestrator | Monday 26 May 2025 04:47:16 +0000 (0:00:00.190) 0:00:24.787 ************
2025-05-26 04:47:16.743102 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:16.743214 | orchestrator |
2025-05-26 04:47:16.743938 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:16.744359 | orchestrator | Monday 26 May 2025 04:47:16 +0000 (0:00:00.175) 0:00:24.963 ************
2025-05-26 04:47:17.182149 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:17.182349 | orchestrator |
2025-05-26 04:47:17.186329 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:17.186390 | orchestrator | Monday 26 May 2025 04:47:17 +0000 (0:00:00.438) 0:00:25.402 ************
2025-05-26 04:47:17.359860 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:17.360420 | orchestrator |
2025-05-26 04:47:17.361661 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:17.367164 | orchestrator | Monday 26 May 2025 04:47:17 +0000 (0:00:00.177) 0:00:25.580 ************
2025-05-26 04:47:17.526296 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:17.527221 | orchestrator |
2025-05-26 04:47:17.530159 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:17.531816 | orchestrator | Monday 26 May 2025 04:47:17 +0000 (0:00:00.166) 0:00:25.746 ************
2025-05-26 04:47:17.712173 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:17.712279 | orchestrator |
2025-05-26 04:47:17.712998 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:17.713854 | orchestrator | Monday 26 May 2025 04:47:17 +0000 (0:00:00.182) 0:00:25.929 ************
2025-05-26 04:47:17.892049 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:17.892144 | orchestrator |
2025-05-26 04:47:17.896292 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:17.897608 | orchestrator | Monday 26 May 2025 04:47:17 +0000 (0:00:00.183) 0:00:26.112 ************
2025-05-26 04:47:18.085103 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:18.088907 | orchestrator |
2025-05-26 04:47:18.089484 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:18.089632 | orchestrator | Monday 26 May 2025 04:47:18 +0000 (0:00:00.193) 0:00:26.306 ************
2025-05-26 04:47:18.483251 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_abaa748c-97b3-4e70-8935-2e6927d8d198)
2025-05-26 04:47:18.483377 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_abaa748c-97b3-4e70-8935-2e6927d8d198)
2025-05-26 04:47:18.483443 | orchestrator |
2025-05-26 04:47:18.484361 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:18.485483 | orchestrator | Monday 26 May 2025 04:47:18 +0000 (0:00:00.395) 0:00:26.702 ************
2025-05-26 04:47:18.870788 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a2c1486d-cd17-4e79-bfde-447100a0feef)
2025-05-26 04:47:18.870890 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a2c1486d-cd17-4e79-bfde-447100a0feef)
2025-05-26 04:47:18.870956 | orchestrator |
2025-05-26 04:47:18.871429 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:18.871453 | orchestrator | Monday 26 May 2025 04:47:18 +0000 (0:00:00.386) 0:00:27.089 ************
2025-05-26 04:47:19.245917 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b8fa87d6-4bbf-4e23-9059-3efb42beefcf)
2025-05-26 04:47:19.246090 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b8fa87d6-4bbf-4e23-9059-3efb42beefcf)
2025-05-26 04:47:19.246107 | orchestrator |
2025-05-26 04:47:19.246989 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:19.247078 | orchestrator | Monday 26 May 2025 04:47:19 +0000 (0:00:00.373) 0:00:27.462 ************
2025-05-26 04:47:19.641372 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2a6da8ab-439b-4c92-86f2-b8912a630d10)
2025-05-26 04:47:19.645459 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2a6da8ab-439b-4c92-86f2-b8912a630d10)
2025-05-26 04:47:19.645553 | orchestrator |
2025-05-26 04:47:19.646290 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:19.646727 | orchestrator | Monday 26 May 2025 04:47:19 +0000 (0:00:00.399) 0:00:27.861 ************
2025-05-26 04:47:19.935785 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-26 04:47:19.935894 | orchestrator |
2025-05-26 04:47:19.936004 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:19.936383 | orchestrator | Monday 26 May 2025 04:47:19 +0000 (0:00:00.295) 0:00:28.157 ************
2025-05-26 04:47:20.439080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-05-26 04:47:20.439963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-05-26 04:47:20.441187 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-05-26 04:47:20.442243 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-05-26 04:47:20.443185 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-05-26 04:47:20.444115 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-05-26 04:47:20.445000 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-05-26 04:47:20.445676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-05-26 04:47:20.446493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-05-26 04:47:20.447107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-05-26 04:47:20.448054 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-05-26 04:47:20.448401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-05-26 04:47:20.449269 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-05-26 04:47:20.449578 | orchestrator |
2025-05-26 04:47:20.450143 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:20.450684 | orchestrator | Monday 26 May 2025 04:47:20 +0000 (0:00:00.501) 0:00:28.658 ************
2025-05-26 04:47:20.638870 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:20.638968 | orchestrator |
2025-05-26 04:47:20.638985 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:20.639621 | orchestrator | Monday 26 May 2025 04:47:20 +0000 (0:00:00.197) 0:00:28.856 ************
2025-05-26 04:47:20.791827 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:20.792477 | orchestrator |
2025-05-26 04:47:20.792538 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:20.792555 | orchestrator | Monday 26 May 2025 04:47:20 +0000 (0:00:00.155) 0:00:29.012 ************
2025-05-26 04:47:20.956038 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:20.956259 | orchestrator |
2025-05-26 04:47:20.957198 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:20.958399 | orchestrator | Monday 26 May 2025 04:47:20 +0000 (0:00:00.164) 0:00:29.176 ************
2025-05-26 04:47:21.134495 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:21.134730 | orchestrator |
2025-05-26 04:47:21.135744 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:21.136390 | orchestrator | Monday 26 May 2025 04:47:21 +0000 (0:00:00.178) 0:00:29.355 ************
2025-05-26 04:47:21.322770 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:21.323484 | orchestrator |
2025-05-26 04:47:21.325325 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:21.325370 | orchestrator | Monday 26 May 2025 04:47:21 +0000 (0:00:00.188) 0:00:29.543 ************
2025-05-26 04:47:21.514672 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:21.515091 | orchestrator |
2025-05-26 04:47:21.516285 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:21.516569 | orchestrator | Monday 26 May 2025 04:47:21 +0000 (0:00:00.191) 0:00:29.735 ************
2025-05-26 04:47:21.688642 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:21.688854 | orchestrator |
2025-05-26 04:47:21.688995 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:21.689581 | orchestrator | Monday 26 May 2025 04:47:21 +0000 (0:00:00.173) 0:00:29.909 ************
2025-05-26 04:47:21.898951 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:21.899397 | orchestrator |
2025-05-26 04:47:21.900173 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:21.901015 | orchestrator | Monday 26 May 2025 04:47:21 +0000 (0:00:00.210) 0:00:30.119 ************
2025-05-26 04:47:22.730109 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-05-26 04:47:22.730405 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-05-26 04:47:22.732337 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-05-26 04:47:22.732376 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-05-26 04:47:22.732797 | orchestrator |
2025-05-26 04:47:22.733119 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:22.733549 | orchestrator | Monday 26 May 2025 04:47:22 +0000 (0:00:00.829) 0:00:30.948 ************
2025-05-26 04:47:22.926596 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:22.927084 | orchestrator |
2025-05-26 04:47:22.927700 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:22.929094 | orchestrator | Monday 26 May 2025 04:47:22 +0000 (0:00:00.199) 0:00:31.147 ************
2025-05-26 04:47:23.099939 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:23.100537 | orchestrator |
2025-05-26 04:47:23.101534 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:23.102535 | orchestrator | Monday 26 May 2025 04:47:23 +0000 (0:00:00.172) 0:00:31.319 ************
2025-05-26 04:47:23.689364 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:23.689858 | orchestrator |
2025-05-26 04:47:23.690933 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:23.692029 | orchestrator | Monday 26 May 2025 04:47:23 +0000 (0:00:00.588) 0:00:31.908 ************
2025-05-26 04:47:23.890115 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:23.890458 | orchestrator |
2025-05-26 04:47:23.891934 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-05-26 04:47:23.893313 | orchestrator | Monday 26 May 2025 04:47:23 +0000 (0:00:00.202) 0:00:32.110 ************
2025-05-26 04:47:24.019484 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:24.020472 | orchestrator |
2025-05-26 04:47:24.021542 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-05-26 04:47:24.022308 | orchestrator | Monday 26 May 2025 04:47:24 +0000 (0:00:00.128) 0:00:32.239 ************
2025-05-26 04:47:24.185203 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'}})
2025-05-26 04:47:24.185875 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4b512bc6-244a-59a0-9a87-47140e1f057d'}})
2025-05-26 04:47:24.186824 | orchestrator |
2025-05-26 04:47:24.188006 | orchestrator | TASK [Create block VGs] ********************************************************
2025-05-26 04:47:24.189188 | orchestrator | Monday 26 May 2025 04:47:24 +0000 (0:00:00.165) 0:00:32.405 ************
2025-05-26 04:47:26.193974 | orchestrator | changed: 
[testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'}) 2025-05-26 04:47:26.194606 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'}) 2025-05-26 04:47:26.195438 | orchestrator | 2025-05-26 04:47:26.197656 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-26 04:47:26.198148 | orchestrator | Monday 26 May 2025 04:47:26 +0000 (0:00:02.007) 0:00:34.412 ************ 2025-05-26 04:47:26.341194 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})  2025-05-26 04:47:26.341364 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})  2025-05-26 04:47:26.342364 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:26.343129 | orchestrator | 2025-05-26 04:47:26.343746 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-26 04:47:26.344811 | orchestrator | Monday 26 May 2025 04:47:26 +0000 (0:00:00.148) 0:00:34.561 ************ 2025-05-26 04:47:27.619992 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'}) 2025-05-26 04:47:27.620131 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'}) 2025-05-26 04:47:27.620254 | orchestrator | 2025-05-26 04:47:27.621220 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-26 04:47:27.622626 | orchestrator | Monday 26 May 2025 
04:47:27 +0000 (0:00:01.276) 0:00:35.838 ************ 2025-05-26 04:47:27.772616 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})  2025-05-26 04:47:27.772834 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})  2025-05-26 04:47:27.773439 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:27.774426 | orchestrator | 2025-05-26 04:47:27.775127 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-26 04:47:27.775688 | orchestrator | Monday 26 May 2025 04:47:27 +0000 (0:00:00.154) 0:00:35.992 ************ 2025-05-26 04:47:27.914487 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:27.915495 | orchestrator | 2025-05-26 04:47:27.916358 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-26 04:47:27.917884 | orchestrator | Monday 26 May 2025 04:47:27 +0000 (0:00:00.140) 0:00:36.133 ************ 2025-05-26 04:47:28.064565 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})  2025-05-26 04:47:28.065273 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})  2025-05-26 04:47:28.065923 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:28.066957 | orchestrator | 2025-05-26 04:47:28.067705 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-26 04:47:28.068361 | orchestrator | Monday 26 May 2025 04:47:28 +0000 (0:00:00.150) 0:00:36.283 ************ 2025-05-26 04:47:28.204705 | orchestrator | skipping: [testbed-node-4] 2025-05-26 
04:47:28.204980 | orchestrator | 2025-05-26 04:47:28.208847 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-26 04:47:28.208879 | orchestrator | Monday 26 May 2025 04:47:28 +0000 (0:00:00.139) 0:00:36.422 ************ 2025-05-26 04:47:28.347693 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})  2025-05-26 04:47:28.348247 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})  2025-05-26 04:47:28.348725 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:28.349339 | orchestrator | 2025-05-26 04:47:28.351136 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-26 04:47:28.351165 | orchestrator | Monday 26 May 2025 04:47:28 +0000 (0:00:00.144) 0:00:36.567 ************ 2025-05-26 04:47:28.664847 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:28.665427 | orchestrator | 2025-05-26 04:47:28.666761 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-26 04:47:28.667754 | orchestrator | Monday 26 May 2025 04:47:28 +0000 (0:00:00.317) 0:00:36.885 ************ 2025-05-26 04:47:28.821173 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})  2025-05-26 04:47:28.822393 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})  2025-05-26 04:47:28.823652 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:28.824931 | orchestrator | 2025-05-26 04:47:28.825858 | orchestrator | TASK [Prepare variables for OSD count check] 
*********************************** 2025-05-26 04:47:28.826258 | orchestrator | Monday 26 May 2025 04:47:28 +0000 (0:00:00.155) 0:00:37.040 ************ 2025-05-26 04:47:28.952011 | orchestrator | ok: [testbed-node-4] 2025-05-26 04:47:28.952157 | orchestrator | 2025-05-26 04:47:28.953062 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-26 04:47:28.953958 | orchestrator | Monday 26 May 2025 04:47:28 +0000 (0:00:00.131) 0:00:37.172 ************ 2025-05-26 04:47:29.112914 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})  2025-05-26 04:47:29.113061 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})  2025-05-26 04:47:29.113210 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:29.113734 | orchestrator | 2025-05-26 04:47:29.114302 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-26 04:47:29.115904 | orchestrator | Monday 26 May 2025 04:47:29 +0000 (0:00:00.158) 0:00:37.331 ************ 2025-05-26 04:47:29.260298 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})  2025-05-26 04:47:29.261150 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})  2025-05-26 04:47:29.262134 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:29.263166 | orchestrator | 2025-05-26 04:47:29.264772 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-26 04:47:29.265104 | orchestrator | Monday 26 May 2025 04:47:29 +0000 (0:00:00.149) 0:00:37.481 
************ 2025-05-26 04:47:29.439923 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})  2025-05-26 04:47:29.440133 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})  2025-05-26 04:47:29.441151 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:29.441788 | orchestrator | 2025-05-26 04:47:29.443408 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-26 04:47:29.444046 | orchestrator | Monday 26 May 2025 04:47:29 +0000 (0:00:00.178) 0:00:37.659 ************ 2025-05-26 04:47:29.575622 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:29.575681 | orchestrator | 2025-05-26 04:47:29.576385 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-26 04:47:29.576987 | orchestrator | Monday 26 May 2025 04:47:29 +0000 (0:00:00.137) 0:00:37.796 ************ 2025-05-26 04:47:29.697907 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:29.698099 | orchestrator | 2025-05-26 04:47:29.699647 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-26 04:47:29.700374 | orchestrator | Monday 26 May 2025 04:47:29 +0000 (0:00:00.121) 0:00:37.918 ************ 2025-05-26 04:47:29.835987 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:29.836164 | orchestrator | 2025-05-26 04:47:29.837277 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-26 04:47:29.837978 | orchestrator | Monday 26 May 2025 04:47:29 +0000 (0:00:00.136) 0:00:38.054 ************ 2025-05-26 04:47:29.977267 | orchestrator | ok: [testbed-node-4] => { 2025-05-26 04:47:29.978862 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-26 
04:47:29.978887 | orchestrator | } 2025-05-26 04:47:29.980431 | orchestrator | 2025-05-26 04:47:29.981728 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-26 04:47:29.982490 | orchestrator | Monday 26 May 2025 04:47:29 +0000 (0:00:00.142) 0:00:38.197 ************ 2025-05-26 04:47:30.119951 | orchestrator | ok: [testbed-node-4] => { 2025-05-26 04:47:30.120098 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-26 04:47:30.121633 | orchestrator | } 2025-05-26 04:47:30.123016 | orchestrator | 2025-05-26 04:47:30.123572 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-26 04:47:30.124936 | orchestrator | Monday 26 May 2025 04:47:30 +0000 (0:00:00.140) 0:00:38.338 ************ 2025-05-26 04:47:30.259160 | orchestrator | ok: [testbed-node-4] => { 2025-05-26 04:47:30.260071 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-26 04:47:30.260832 | orchestrator | } 2025-05-26 04:47:30.261876 | orchestrator | 2025-05-26 04:47:30.262739 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-26 04:47:30.263917 | orchestrator | Monday 26 May 2025 04:47:30 +0000 (0:00:00.141) 0:00:38.479 ************ 2025-05-26 04:47:30.947839 | orchestrator | ok: [testbed-node-4] 2025-05-26 04:47:30.948069 | orchestrator | 2025-05-26 04:47:30.948543 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-26 04:47:30.950112 | orchestrator | Monday 26 May 2025 04:47:30 +0000 (0:00:00.687) 0:00:39.167 ************ 2025-05-26 04:47:31.445777 | orchestrator | ok: [testbed-node-4] 2025-05-26 04:47:31.446231 | orchestrator | 2025-05-26 04:47:31.447332 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-26 04:47:31.448290 | orchestrator | Monday 26 May 2025 04:47:31 +0000 (0:00:00.497) 0:00:39.664 ************ 2025-05-26 
04:47:31.954886 | orchestrator | ok: [testbed-node-4] 2025-05-26 04:47:31.955214 | orchestrator | 2025-05-26 04:47:31.956143 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-26 04:47:31.957259 | orchestrator | Monday 26 May 2025 04:47:31 +0000 (0:00:00.510) 0:00:40.175 ************ 2025-05-26 04:47:32.091714 | orchestrator | ok: [testbed-node-4] 2025-05-26 04:47:32.092428 | orchestrator | 2025-05-26 04:47:32.093376 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-26 04:47:32.094632 | orchestrator | Monday 26 May 2025 04:47:32 +0000 (0:00:00.136) 0:00:40.312 ************ 2025-05-26 04:47:32.212292 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:32.212686 | orchestrator | 2025-05-26 04:47:32.213424 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-26 04:47:32.214248 | orchestrator | Monday 26 May 2025 04:47:32 +0000 (0:00:00.119) 0:00:40.432 ************ 2025-05-26 04:47:32.322084 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:32.323775 | orchestrator | 2025-05-26 04:47:32.324599 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-26 04:47:32.325391 | orchestrator | Monday 26 May 2025 04:47:32 +0000 (0:00:00.110) 0:00:40.542 ************ 2025-05-26 04:47:32.465660 | orchestrator | ok: [testbed-node-4] => { 2025-05-26 04:47:32.466803 | orchestrator |  "vgs_report": { 2025-05-26 04:47:32.469578 | orchestrator |  "vg": [] 2025-05-26 04:47:32.470704 | orchestrator |  } 2025-05-26 04:47:32.471639 | orchestrator | } 2025-05-26 04:47:32.472212 | orchestrator | 2025-05-26 04:47:32.472890 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-26 04:47:32.473665 | orchestrator | Monday 26 May 2025 04:47:32 +0000 (0:00:00.141) 0:00:40.684 ************ 2025-05-26 04:47:32.598417 | 
orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:32.598558 | orchestrator | 2025-05-26 04:47:32.599199 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-26 04:47:32.599828 | orchestrator | Monday 26 May 2025 04:47:32 +0000 (0:00:00.132) 0:00:40.817 ************ 2025-05-26 04:47:32.732494 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:32.732734 | orchestrator | 2025-05-26 04:47:32.733629 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-26 04:47:32.734699 | orchestrator | Monday 26 May 2025 04:47:32 +0000 (0:00:00.135) 0:00:40.952 ************ 2025-05-26 04:47:32.864434 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:32.864690 | orchestrator | 2025-05-26 04:47:32.865701 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-26 04:47:32.866469 | orchestrator | Monday 26 May 2025 04:47:32 +0000 (0:00:00.131) 0:00:41.084 ************ 2025-05-26 04:47:32.999929 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:33.002399 | orchestrator | 2025-05-26 04:47:33.003017 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-26 04:47:33.003537 | orchestrator | Monday 26 May 2025 04:47:32 +0000 (0:00:00.134) 0:00:41.219 ************ 2025-05-26 04:47:33.136721 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:33.137451 | orchestrator | 2025-05-26 04:47:33.139833 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-26 04:47:33.139863 | orchestrator | Monday 26 May 2025 04:47:33 +0000 (0:00:00.136) 0:00:41.356 ************ 2025-05-26 04:47:33.476610 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:33.476766 | orchestrator | 2025-05-26 04:47:33.477176 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 
2025-05-26 04:47:33.477432 | orchestrator | Monday 26 May 2025 04:47:33 +0000 (0:00:00.338) 0:00:41.694 ************ 2025-05-26 04:47:33.612877 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:33.613662 | orchestrator | 2025-05-26 04:47:33.614100 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-26 04:47:33.615186 | orchestrator | Monday 26 May 2025 04:47:33 +0000 (0:00:00.135) 0:00:41.830 ************ 2025-05-26 04:47:33.739021 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:33.739936 | orchestrator | 2025-05-26 04:47:33.740907 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-26 04:47:33.741580 | orchestrator | Monday 26 May 2025 04:47:33 +0000 (0:00:00.128) 0:00:41.959 ************ 2025-05-26 04:47:33.883034 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:33.883192 | orchestrator | 2025-05-26 04:47:33.884724 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-26 04:47:33.885839 | orchestrator | Monday 26 May 2025 04:47:33 +0000 (0:00:00.142) 0:00:42.101 ************ 2025-05-26 04:47:34.015563 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:34.017527 | orchestrator | 2025-05-26 04:47:34.018750 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-26 04:47:34.019551 | orchestrator | Monday 26 May 2025 04:47:34 +0000 (0:00:00.132) 0:00:42.234 ************ 2025-05-26 04:47:34.175248 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:34.177004 | orchestrator | 2025-05-26 04:47:34.177042 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-26 04:47:34.177109 | orchestrator | Monday 26 May 2025 04:47:34 +0000 (0:00:00.158) 0:00:42.392 ************ 2025-05-26 04:47:34.306976 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:34.307212 
| orchestrator | 2025-05-26 04:47:34.308621 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-26 04:47:34.309405 | orchestrator | Monday 26 May 2025 04:47:34 +0000 (0:00:00.133) 0:00:42.526 ************ 2025-05-26 04:47:34.441835 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:34.443621 | orchestrator | 2025-05-26 04:47:34.444393 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-26 04:47:34.445565 | orchestrator | Monday 26 May 2025 04:47:34 +0000 (0:00:00.133) 0:00:42.659 ************ 2025-05-26 04:47:34.582333 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:34.583241 | orchestrator | 2025-05-26 04:47:34.584322 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-26 04:47:34.585374 | orchestrator | Monday 26 May 2025 04:47:34 +0000 (0:00:00.142) 0:00:42.802 ************ 2025-05-26 04:47:34.736989 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})  2025-05-26 04:47:34.737357 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})  2025-05-26 04:47:34.738114 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:34.739131 | orchestrator | 2025-05-26 04:47:34.740117 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-26 04:47:34.740635 | orchestrator | Monday 26 May 2025 04:47:34 +0000 (0:00:00.153) 0:00:42.955 ************ 2025-05-26 04:47:34.910210 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})  2025-05-26 04:47:34.910616 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})  2025-05-26 04:47:34.910941 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:34.911493 | orchestrator | 2025-05-26 04:47:34.911829 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-26 04:47:34.912350 | orchestrator | Monday 26 May 2025 04:47:34 +0000 (0:00:00.175) 0:00:43.131 ************ 2025-05-26 04:47:35.052291 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})  2025-05-26 04:47:35.053187 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})  2025-05-26 04:47:35.055018 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:35.055102 | orchestrator | 2025-05-26 04:47:35.055120 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-26 04:47:35.055711 | orchestrator | Monday 26 May 2025 04:47:35 +0000 (0:00:00.140) 0:00:43.272 ************ 2025-05-26 04:47:35.370699 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})  2025-05-26 04:47:35.370849 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})  2025-05-26 04:47:35.371736 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:35.372588 | orchestrator | 2025-05-26 04:47:35.373307 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-26 04:47:35.374216 | orchestrator | Monday 26 May 2025 04:47:35 +0000 (0:00:00.319) 0:00:43.591 ************ 2025-05-26 
04:47:35.526933 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})  2025-05-26 04:47:35.527120 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})  2025-05-26 04:47:35.527744 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:35.528207 | orchestrator | 2025-05-26 04:47:35.528881 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-26 04:47:35.529109 | orchestrator | Monday 26 May 2025 04:47:35 +0000 (0:00:00.153) 0:00:43.744 ************ 2025-05-26 04:47:35.675787 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})  2025-05-26 04:47:35.675951 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})  2025-05-26 04:47:35.676194 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:35.677038 | orchestrator | 2025-05-26 04:47:35.677809 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-26 04:47:35.678380 | orchestrator | Monday 26 May 2025 04:47:35 +0000 (0:00:00.150) 0:00:43.895 ************ 2025-05-26 04:47:35.837653 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})  2025-05-26 04:47:35.837748 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})  2025-05-26 04:47:35.838624 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:35.839122 | orchestrator | 
2025-05-26 04:47:35.839892 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-26 04:47:35.840598 | orchestrator | Monday 26 May 2025 04:47:35 +0000 (0:00:00.161) 0:00:44.056 ************ 2025-05-26 04:47:35.988137 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})  2025-05-26 04:47:35.988354 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})  2025-05-26 04:47:35.989244 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:47:35.989951 | orchestrator | 2025-05-26 04:47:35.990665 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-26 04:47:35.992057 | orchestrator | Monday 26 May 2025 04:47:35 +0000 (0:00:00.151) 0:00:44.207 ************ 2025-05-26 04:47:36.504077 | orchestrator | ok: [testbed-node-4] 2025-05-26 04:47:36.504588 | orchestrator | 2025-05-26 04:47:36.505114 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-26 04:47:36.507346 | orchestrator | Monday 26 May 2025 04:47:36 +0000 (0:00:00.514) 0:00:44.722 ************ 2025-05-26 04:47:36.994462 | orchestrator | ok: [testbed-node-4] 2025-05-26 04:47:36.994840 | orchestrator | 2025-05-26 04:47:36.995432 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-26 04:47:36.996255 | orchestrator | Monday 26 May 2025 04:47:36 +0000 (0:00:00.491) 0:00:45.214 ************ 2025-05-26 04:47:37.157775 | orchestrator | ok: [testbed-node-4] 2025-05-26 04:47:37.158380 | orchestrator | 2025-05-26 04:47:37.160620 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-26 04:47:37.160646 | orchestrator | Monday 26 May 2025 
04:47:37 +0000 (0:00:00.159) 0:00:45.374 ************
2025-05-26 04:47:37.308758 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'vg_name': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})
2025-05-26 04:47:37.308856 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'vg_name': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})
2025-05-26 04:47:37.308957 | orchestrator |
2025-05-26 04:47:37.309715 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-05-26 04:47:37.310447 | orchestrator | Monday 26 May 2025 04:47:37 +0000 (0:00:00.154) 0:00:45.528 ************
2025-05-26 04:47:37.455721 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})
2025-05-26 04:47:37.456296 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})
2025-05-26 04:47:37.457325 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:37.457770 | orchestrator |
2025-05-26 04:47:37.459005 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-05-26 04:47:37.459755 | orchestrator | Monday 26 May 2025 04:47:37 +0000 (0:00:00.147) 0:00:45.676 ************
2025-05-26 04:47:37.616601 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})
2025-05-26 04:47:37.617240 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})
2025-05-26 04:47:37.618105 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:37.618816 | orchestrator |
2025-05-26 04:47:37.619670 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-26 04:47:37.620446 | orchestrator | Monday 26 May 2025 04:47:37 +0000 (0:00:00.158) 0:00:45.835 ************
2025-05-26 04:47:37.765085 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7', 'data_vg': 'ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7'})
2025-05-26 04:47:37.765303 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d', 'data_vg': 'ceph-4b512bc6-244a-59a0-9a87-47140e1f057d'})
2025-05-26 04:47:37.766121 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:47:37.766691 | orchestrator |
2025-05-26 04:47:37.767735 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-05-26 04:47:37.768418 | orchestrator | Monday 26 May 2025 04:47:37 +0000 (0:00:00.149) 0:00:45.984 ************
2025-05-26 04:47:38.226369 | orchestrator | ok: [testbed-node-4] => {
2025-05-26 04:47:38.226954 | orchestrator |     "lvm_report": {
2025-05-26 04:47:38.228207 | orchestrator |         "lv": [
2025-05-26 04:47:38.229238 | orchestrator |             {
2025-05-26 04:47:38.230052 | orchestrator |                 "lv_name": "osd-block-4b512bc6-244a-59a0-9a87-47140e1f057d",
2025-05-26 04:47:38.230662 | orchestrator |                 "vg_name": "ceph-4b512bc6-244a-59a0-9a87-47140e1f057d"
2025-05-26 04:47:38.231719 | orchestrator |             },
2025-05-26 04:47:38.232611 | orchestrator |             {
2025-05-26 04:47:38.233076 | orchestrator |                 "lv_name": "osd-block-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7",
2025-05-26 04:47:38.233637 | orchestrator |                 "vg_name": "ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7"
2025-05-26 04:47:38.234319 | orchestrator |             }
2025-05-26 04:47:38.234849 | orchestrator |         ],
2025-05-26 04:47:38.235267 | orchestrator |         "pv": [
2025-05-26 04:47:38.235791 | orchestrator |             {
2025-05-26 04:47:38.236293 | orchestrator |                 "pv_name": "/dev/sdb",
2025-05-26 04:47:38.236856 | orchestrator |                 "vg_name": "ceph-8ec7e06f-bb0b-5d64-9f74-70f52e848cb7"
2025-05-26 04:47:38.237443 | orchestrator |             },
2025-05-26 04:47:38.237763 | orchestrator |             {
2025-05-26 04:47:38.238338 | orchestrator |                 "pv_name": "/dev/sdc",
2025-05-26 04:47:38.238915 | orchestrator |                 "vg_name": "ceph-4b512bc6-244a-59a0-9a87-47140e1f057d"
2025-05-26 04:47:38.239600 | orchestrator |             }
2025-05-26 04:47:38.240009 | orchestrator |         ]
2025-05-26 04:47:38.240365 | orchestrator |     }
2025-05-26 04:47:38.240806 | orchestrator | }
2025-05-26 04:47:38.241132 | orchestrator |
2025-05-26 04:47:38.241556 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-26 04:47:38.242229 | orchestrator |
2025-05-26 04:47:38.242643 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-26 04:47:38.242902 | orchestrator | Monday 26 May 2025 04:47:38 +0000 (0:00:00.461) 0:00:46.446 ************
2025-05-26 04:47:38.456124 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-26 04:47:38.456311 | orchestrator |
2025-05-26 04:47:38.456619 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-26 04:47:38.458517 | orchestrator | Monday 26 May 2025 04:47:38 +0000 (0:00:00.230) 0:00:46.676 ************
2025-05-26 04:47:38.687712 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:47:38.687862 | orchestrator |
2025-05-26 04:47:38.688618 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:38.689100 | orchestrator | Monday 26 May 2025 04:47:38 +0000 (0:00:00.230) 0:00:46.907 ************
2025-05-26 04:47:39.094852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-05-26 04:47:39.095414 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-05-26 04:47:39.097065 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-05-26 04:47:39.098742 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-05-26 04:47:39.099057 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-05-26 04:47:39.099818 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-05-26 04:47:39.099938 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-05-26 04:47:39.100774 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-05-26 04:47:39.101266 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-05-26 04:47:39.101720 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-05-26 04:47:39.101984 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-05-26 04:47:39.102356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-05-26 04:47:39.102809 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-05-26 04:47:39.103217 | orchestrator |
2025-05-26 04:47:39.103622 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:39.103931 | orchestrator | Monday 26 May 2025 04:47:39 +0000 (0:00:00.407) 0:00:47.314 ************
2025-05-26 04:47:39.292171 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:39.292257 | orchestrator |
2025-05-26 04:47:39.292579 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:39.293172 | orchestrator | Monday 26 May 2025 04:47:39 +0000 (0:00:00.196) 0:00:47.511 ************
2025-05-26 04:47:39.475711 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:39.476580 | orchestrator |
2025-05-26 04:47:39.476714 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:39.477188 | orchestrator | Monday 26 May 2025 04:47:39 +0000 (0:00:00.184) 0:00:47.695 ************
2025-05-26 04:47:39.675535 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:39.677621 | orchestrator |
2025-05-26 04:47:39.677649 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:39.677836 | orchestrator | Monday 26 May 2025 04:47:39 +0000 (0:00:00.198) 0:00:47.894 ************
2025-05-26 04:47:39.893264 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:39.893365 | orchestrator |
2025-05-26 04:47:39.893440 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:39.893776 | orchestrator | Monday 26 May 2025 04:47:39 +0000 (0:00:00.218) 0:00:48.113 ************
2025-05-26 04:47:40.085202 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:40.085304 | orchestrator |
2025-05-26 04:47:40.085318 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:40.085331 | orchestrator | Monday 26 May 2025 04:47:40 +0000 (0:00:00.188) 0:00:48.301 ************
2025-05-26 04:47:40.650275 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:40.651329 | orchestrator |
2025-05-26 04:47:40.652320 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:40.653193 | orchestrator | Monday 26 May 2025 04:47:40 +0000 (0:00:00.568) 0:00:48.870 ************
2025-05-26 04:47:40.863147 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:40.864010 | orchestrator |
2025-05-26 04:47:40.864322 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:40.865094 | orchestrator | Monday 26 May 2025 04:47:40 +0000 (0:00:00.213) 0:00:49.083 ************
2025-05-26 04:47:41.094587 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:41.094768 | orchestrator |
2025-05-26 04:47:41.095337 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:41.095755 | orchestrator | Monday 26 May 2025 04:47:41 +0000 (0:00:00.231) 0:00:49.314 ************
2025-05-26 04:47:41.537742 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a22053f6-7fcf-48d3-9817-9fbbcd6d287f)
2025-05-26 04:47:41.537847 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a22053f6-7fcf-48d3-9817-9fbbcd6d287f)
2025-05-26 04:47:41.537862 | orchestrator |
2025-05-26 04:47:41.538137 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:41.540428 | orchestrator | Monday 26 May 2025 04:47:41 +0000 (0:00:00.442) 0:00:49.757 ************
2025-05-26 04:47:41.947688 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8267a69f-7007-4a62-b03d-616d3aa09f53)
2025-05-26 04:47:41.947879 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8267a69f-7007-4a62-b03d-616d3aa09f53)
2025-05-26 04:47:41.948407 | orchestrator |
2025-05-26 04:47:41.949098 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:41.950149 | orchestrator | Monday 26 May 2025 04:47:41 +0000 (0:00:00.408) 0:00:50.166 ************
2025-05-26 04:47:42.373565 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_21cb62ce-763a-41a7-95e4-caebeb5b0a4b)
2025-05-26 04:47:42.373672 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_21cb62ce-763a-41a7-95e4-caebeb5b0a4b)
2025-05-26 04:47:42.374195 | orchestrator |
2025-05-26 04:47:42.374779 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:42.375609 | orchestrator | Monday 26 May 2025 04:47:42 +0000 (0:00:00.427) 0:00:50.593 ************
2025-05-26 04:47:42.819008 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ae6d7dd5-5925-42d7-939c-6a68dbf2df83)
2025-05-26 04:47:42.819101 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ae6d7dd5-5925-42d7-939c-6a68dbf2df83)
2025-05-26 04:47:42.819117 | orchestrator |
2025-05-26 04:47:42.819323 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-26 04:47:42.819681 | orchestrator | Monday 26 May 2025 04:47:42 +0000 (0:00:00.444) 0:00:51.038 ************
2025-05-26 04:47:43.155736 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-26 04:47:43.156127 | orchestrator |
2025-05-26 04:47:43.156172 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:43.156805 | orchestrator | Monday 26 May 2025 04:47:43 +0000 (0:00:00.337) 0:00:51.376 ************
2025-05-26 04:47:43.540698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-05-26 04:47:43.540853 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-05-26 04:47:43.542004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-05-26 04:47:43.542543 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-05-26 04:47:43.544204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-05-26 04:47:43.546217 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-05-26 04:47:43.548971 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-05-26 04:47:43.549596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-05-26 04:47:43.550613 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-05-26 04:47:43.551089 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-05-26 04:47:43.552104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-05-26 04:47:43.552526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-05-26 04:47:43.553310 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-05-26 04:47:43.554185 | orchestrator |
2025-05-26 04:47:43.555391 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:43.555869 | orchestrator | Monday 26 May 2025 04:47:43 +0000 (0:00:00.383) 0:00:51.760 ************
2025-05-26 04:47:43.740374 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:43.742180 | orchestrator |
2025-05-26 04:47:43.742735 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:43.744094 | orchestrator | Monday 26 May 2025 04:47:43 +0000 (0:00:00.198) 0:00:51.958 ************
2025-05-26 04:47:43.929038 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:43.929198 | orchestrator |
2025-05-26 04:47:43.929885 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:43.931107 | orchestrator | Monday 26 May 2025 04:47:43 +0000 (0:00:00.190) 0:00:52.148 ************
2025-05-26 04:47:44.539089 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:44.540801 | orchestrator |
2025-05-26 04:47:44.541460 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:44.542595 | orchestrator | Monday 26 May 2025 04:47:44 +0000 (0:00:00.610) 0:00:52.759 ************
2025-05-26 04:47:44.753310 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:44.755200 | orchestrator |
2025-05-26 04:47:44.756038 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:44.756660 | orchestrator | Monday 26 May 2025 04:47:44 +0000 (0:00:00.214) 0:00:52.973 ************
2025-05-26 04:47:44.947638 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:44.947815 | orchestrator |
2025-05-26 04:47:44.948768 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:44.949848 | orchestrator | Monday 26 May 2025 04:47:44 +0000 (0:00:00.192) 0:00:53.165 ************
2025-05-26 04:47:45.131152 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:45.131726 | orchestrator |
2025-05-26 04:47:45.132541 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:45.133190 | orchestrator | Monday 26 May 2025 04:47:45 +0000 (0:00:00.185) 0:00:53.351 ************
2025-05-26 04:47:45.323998 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:45.325048 | orchestrator |
2025-05-26 04:47:45.326231 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:45.327324 | orchestrator | Monday 26 May 2025 04:47:45 +0000 (0:00:00.191) 0:00:53.542 ************
2025-05-26 04:47:45.515231 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:45.516060 | orchestrator |
2025-05-26 04:47:45.516925 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:45.517635 | orchestrator | Monday 26 May 2025 04:47:45 +0000 (0:00:00.192) 0:00:53.735 ************
2025-05-26 04:47:46.133265 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-05-26 04:47:46.133647 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-05-26 04:47:46.134682 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-05-26 04:47:46.137543 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-05-26 04:47:46.137574 | orchestrator |
2025-05-26 04:47:46.138520 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:46.138995 | orchestrator | Monday 26 May 2025 04:47:46 +0000 (0:00:00.615) 0:00:54.351 ************
2025-05-26 04:47:46.332059 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:46.332387 | orchestrator |
2025-05-26 04:47:46.333849 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:46.334957 | orchestrator | Monday 26 May 2025 04:47:46 +0000 (0:00:00.200) 0:00:54.551 ************
2025-05-26 04:47:46.535354 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:46.536308 | orchestrator |
2025-05-26 04:47:46.537440 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:46.538798 | orchestrator | Monday 26 May 2025 04:47:46 +0000 (0:00:00.203) 0:00:54.755 ************
2025-05-26 04:47:46.738714 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:46.738902 | orchestrator |
2025-05-26 04:47:46.739282 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-26 04:47:46.740160 | orchestrator | Monday 26 May 2025 04:47:46 +0000 (0:00:00.201) 0:00:54.956 ************
2025-05-26 04:47:46.938954 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:46.939065 | orchestrator |
2025-05-26 04:47:46.939082 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-05-26 04:47:46.940576 | orchestrator | Monday 26 May 2025 04:47:46 +0000 (0:00:00.201) 0:00:55.158 ************
2025-05-26 04:47:47.262863 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:47.263869 | orchestrator |
2025-05-26 04:47:47.264422 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-05-26 04:47:47.266246 | orchestrator | Monday 26 May 2025 04:47:47 +0000 (0:00:00.324) 0:00:55.482 ************
2025-05-26 04:47:47.452810 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2271cd7c-c83a-5004-8392-4222139fb32e'}})
2025-05-26 04:47:47.453110 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd953f63c-8039-5fa8-9cb1-6d3fed502880'}})
2025-05-26 04:47:47.453975 | orchestrator |
2025-05-26 04:47:47.454930 | orchestrator | TASK [Create block VGs] ********************************************************
2025-05-26 04:47:47.455717 | orchestrator | Monday 26 May 2025 04:47:47 +0000 (0:00:00.190) 0:00:55.672 ************
2025-05-26 04:47:49.411734 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})
2025-05-26 04:47:49.414007 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})
2025-05-26 04:47:49.415347 | orchestrator |
2025-05-26 04:47:49.418078 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-05-26 04:47:49.419080 | orchestrator | Monday 26 May 2025 04:47:49 +0000 (0:00:01.956) 0:00:57.629 ************
2025-05-26 04:47:49.558932 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})
2025-05-26 04:47:49.559667 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})
2025-05-26 04:47:49.560605 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:49.562161 | orchestrator |
2025-05-26 04:47:49.562260 | orchestrator | TASK [Create block LVs] ********************************************************
2025-05-26 04:47:49.563157 | orchestrator | Monday 26 May 2025 04:47:49 +0000 (0:00:00.149) 0:00:57.779 ************
2025-05-26 04:47:50.843008 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})
2025-05-26 04:47:50.843730 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})
2025-05-26 04:47:50.845086 | orchestrator |
2025-05-26 04:47:50.846065 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-05-26 04:47:50.847226 | orchestrator | Monday 26 May 2025 04:47:50 +0000 (0:00:01.282) 0:00:59.061 ************
2025-05-26 04:47:50.986887 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})
2025-05-26 04:47:50.987074 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})
2025-05-26 04:47:50.990136 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:50.990395 | orchestrator |
2025-05-26 04:47:50.991068 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-05-26 04:47:50.991544 | orchestrator | Monday 26 May 2025 04:47:50 +0000 (0:00:00.144) 0:00:59.206 ************
2025-05-26 04:47:51.130796 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:51.130993 | orchestrator |
2025-05-26 04:47:51.131883 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-05-26 04:47:51.132939 | orchestrator | Monday 26 May 2025 04:47:51 +0000 (0:00:00.143) 0:00:59.350 ************
2025-05-26 04:47:51.289084 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})
2025-05-26 04:47:51.290080 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})
2025-05-26 04:47:51.290472 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:51.292659 | orchestrator |
2025-05-26 04:47:51.292690 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-05-26 04:47:51.292960 | orchestrator | Monday 26 May 2025 04:47:51 +0000 (0:00:00.156) 0:00:59.507 ************
2025-05-26 04:47:51.430968 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:51.431658 | orchestrator |
2025-05-26 04:47:51.433169 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-05-26 04:47:51.434807 | orchestrator | Monday 26 May 2025 04:47:51 +0000 (0:00:00.142) 0:00:59.649 ************
2025-05-26 04:47:51.576072 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})
2025-05-26 04:47:51.576245 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})
2025-05-26 04:47:51.576682 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:51.577123 | orchestrator |
2025-05-26 04:47:51.578224 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-05-26 04:47:51.578964 | orchestrator | Monday 26 May 2025 04:47:51 +0000 (0:00:00.144) 0:00:59.794 ************
2025-05-26 04:47:51.701734 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:51.701966 | orchestrator |
2025-05-26 04:47:51.702781 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-05-26 04:47:51.703437 | orchestrator | Monday 26 May 2025 04:47:51 +0000 (0:00:00.127) 0:00:59.922 ************
2025-05-26 04:47:51.852608 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})
2025-05-26 04:47:51.852870 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})
2025-05-26 04:47:51.853676 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:51.854471 | orchestrator |
2025-05-26 04:47:51.855001 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-05-26 04:47:51.855358 | orchestrator | Monday 26 May 2025 04:47:51 +0000 (0:00:00.145) 0:01:00.068 ************
2025-05-26 04:47:52.188759 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:47:52.188859 | orchestrator |
2025-05-26 04:47:52.188875 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-05-26 04:47:52.189185 | orchestrator | Monday 26 May 2025 04:47:52 +0000 (0:00:00.336) 0:01:00.405 ************
2025-05-26 04:47:52.346777 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})
2025-05-26 04:47:52.346867 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})
2025-05-26 04:47:52.346880 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:52.348315 | orchestrator |
2025-05-26 04:47:52.349331 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-05-26 04:47:52.349808 | orchestrator | Monday 26 May 2025 04:47:52 +0000 (0:00:00.158) 0:01:00.563 ************
2025-05-26 04:47:52.492961 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})
2025-05-26 04:47:52.493058 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})
2025-05-26 04:47:52.493796 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:52.495388 | orchestrator |
2025-05-26 04:47:52.495873 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-05-26 04:47:52.496285 | orchestrator | Monday 26 May 2025 04:47:52 +0000 (0:00:00.145) 0:01:00.709 ************
2025-05-26 04:47:52.638828 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})
2025-05-26 04:47:52.639816 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})
2025-05-26 04:47:52.641665 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:52.643200 | orchestrator |
2025-05-26 04:47:52.643243 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-05-26 04:47:52.643732 | orchestrator | Monday 26 May 2025 04:47:52 +0000 (0:00:00.149) 0:01:00.859 ************
2025-05-26 04:47:52.777195 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:52.777765 | orchestrator |
2025-05-26 04:47:52.778957 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-05-26 04:47:52.780480 | orchestrator | Monday 26 May 2025 04:47:52 +0000 (0:00:00.137) 0:01:00.997 ************
2025-05-26 04:47:52.906910 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:52.908371 | orchestrator |
2025-05-26 04:47:52.908923 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-05-26 04:47:52.910393 | orchestrator | Monday 26 May 2025 04:47:52 +0000 (0:00:00.130) 0:01:01.127 ************
2025-05-26 04:47:53.038919 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:53.039818 | orchestrator |
2025-05-26 04:47:53.041267 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-05-26 04:47:53.042746 | orchestrator | Monday 26 May 2025 04:47:53 +0000 (0:00:00.130) 0:01:01.258 ************
2025-05-26 04:47:53.188012 | orchestrator | ok: [testbed-node-5] => {
2025-05-26 04:47:53.188930 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-05-26 04:47:53.190390 | orchestrator | }
2025-05-26 04:47:53.190742 | orchestrator |
2025-05-26 04:47:53.191662 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-05-26 04:47:53.191994 | orchestrator | Monday 26 May 2025 04:47:53 +0000 (0:00:00.146) 0:01:01.404 ************
2025-05-26 04:47:53.324074 | orchestrator | ok: [testbed-node-5] => {
2025-05-26 04:47:53.325524 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-05-26 04:47:53.326935 | orchestrator | }
2025-05-26 04:47:53.327294 | orchestrator |
2025-05-26 04:47:53.328356 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-05-26 04:47:53.328935 | orchestrator | Monday 26 May 2025 04:47:53 +0000 (0:00:00.138) 0:01:01.543 ************
2025-05-26 04:47:53.445119 | orchestrator | ok: [testbed-node-5] => {
2025-05-26 04:47:53.446014 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-05-26 04:47:53.446751 | orchestrator | }
2025-05-26 04:47:53.448866 | orchestrator |
2025-05-26 04:47:53.448891 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-05-26 04:47:53.449732 | orchestrator | Monday 26 May 2025 04:47:53 +0000 (0:00:00.122) 0:01:01.665 ************
2025-05-26 04:47:53.931695 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:47:53.932640 | orchestrator |
2025-05-26 04:47:53.934285 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-05-26 04:47:53.934334 | orchestrator | Monday 26 May 2025 04:47:53 +0000 (0:00:00.485) 0:01:02.150 ************
2025-05-26 04:47:54.432705 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:47:54.432886 | orchestrator |
2025-05-26 04:47:54.433987 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-26 04:47:54.434742 | orchestrator | Monday 26 May 2025 04:47:54 +0000 (0:00:00.501) 0:01:02.652 ************
2025-05-26 04:47:55.128012 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:47:55.129071 | orchestrator |
2025-05-26 04:47:55.130427 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-05-26 04:47:55.131316 | orchestrator | Monday 26 May 2025 04:47:55 +0000 (0:00:00.694) 0:01:03.346 ************
2025-05-26 04:47:55.262083 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:47:55.262872 | orchestrator |
2025-05-26 04:47:55.264220 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-05-26 04:47:55.264883 | orchestrator | Monday 26 May 2025 04:47:55 +0000 (0:00:00.135) 0:01:03.482 ************
2025-05-26 04:47:55.372072 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:55.374590 | orchestrator |
2025-05-26 04:47:55.374625 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-05-26 04:47:55.374942 | orchestrator | Monday 26 May 2025 04:47:55 +0000 (0:00:00.109) 0:01:03.592 ************
2025-05-26 04:47:55.498329 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:55.499274 | orchestrator |
2025-05-26 04:47:55.499887 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-05-26 04:47:55.501038 | orchestrator | Monday 26 May 2025 04:47:55 +0000 (0:00:00.126) 0:01:03.718 ************
2025-05-26 04:47:55.634696 | orchestrator | ok: [testbed-node-5] => {
2025-05-26 04:47:55.634796 | orchestrator |     "vgs_report": {
2025-05-26 04:47:55.635200 | orchestrator |         "vg": []
2025-05-26 04:47:55.635267 | orchestrator |     }
2025-05-26 04:47:55.636257 | orchestrator | }
2025-05-26 04:47:55.637015 | orchestrator |
2025-05-26 04:47:55.637375 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-05-26 04:47:55.637955 | orchestrator | Monday 26 May 2025 04:47:55 +0000 (0:00:00.135) 0:01:03.854 ************
2025-05-26 04:47:55.771778 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:55.772656 | orchestrator |
2025-05-26 04:47:55.773221 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-05-26 04:47:55.774373 | orchestrator | Monday 26 May 2025 04:47:55 +0000 (0:00:00.137) 0:01:03.992 ************
2025-05-26 04:47:55.903701 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:55.904048 | orchestrator |
2025-05-26 04:47:55.904734 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-05-26 04:47:55.905737 | orchestrator | Monday 26 May 2025 04:47:55 +0000 (0:00:00.131) 0:01:04.123 ************
2025-05-26 04:47:56.035122 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:56.035940 | orchestrator |
2025-05-26 04:47:56.037077 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-05-26 04:47:56.038125 | orchestrator | Monday 26 May 2025 04:47:56 +0000 (0:00:00.131) 0:01:04.255 ************
2025-05-26 04:47:56.161546 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:56.161786 | orchestrator |
2025-05-26 04:47:56.163289 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-05-26 04:47:56.164944 | orchestrator | Monday 26 May 2025 04:47:56 +0000 (0:00:00.125) 0:01:04.380 ************
2025-05-26 04:47:56.282666 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:56.283206 | orchestrator |
2025-05-26 04:47:56.284841 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-05-26 04:47:56.286799 | orchestrator | Monday 26 May 2025 04:47:56 +0000 (0:00:00.122) 0:01:04.502 ************
2025-05-26 04:47:56.419135 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:56.419712 | orchestrator |
2025-05-26 04:47:56.420843 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-05-26 04:47:56.421589 | orchestrator | Monday 26 May 2025 04:47:56 +0000 (0:00:00.135) 0:01:04.638 ************
2025-05-26 04:47:56.560024 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:56.561059 | orchestrator |
2025-05-26 04:47:56.561886 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-05-26 04:47:56.562493 | orchestrator | Monday 26 May 2025 04:47:56 +0000 (0:00:00.140) 0:01:04.778 ************
2025-05-26 04:47:56.682989 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:56.683099 | orchestrator |
2025-05-26 04:47:56.684204 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-05-26 04:47:56.685124 | orchestrator | Monday 26 May 2025 04:47:56 +0000 (0:00:00.123) 0:01:04.902 ************
2025-05-26 04:47:57.002633 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:57.002745 | orchestrator |
2025-05-26 04:47:57.003884 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-05-26 04:47:57.005058 | orchestrator | Monday 26 May 2025 04:47:56 +0000 (0:00:00.319) 0:01:05.222 ************
2025-05-26 04:47:57.142114 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:57.143370 | orchestrator |
2025-05-26 04:47:57.144752 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-05-26 04:47:57.145201 | orchestrator | Monday 26 May 2025 04:47:57 +0000 (0:00:00.138) 0:01:05.361 ************
2025-05-26 04:47:57.279654 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:57.280367 | orchestrator |
2025-05-26 04:47:57.281242 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-05-26 04:47:57.282800 | orchestrator | Monday 26 May 2025 04:47:57 +0000 (0:00:00.139) 0:01:05.499 ************
2025-05-26 04:47:57.418664 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:57.419396 | orchestrator |
2025-05-26 04:47:57.419780 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-05-26 04:47:57.420999 | orchestrator | Monday 26 May 2025 04:47:57 +0000 (0:00:00.131) 0:01:05.639 ************
2025-05-26 04:47:57.551959 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:57.552296 | orchestrator |
2025-05-26 04:47:57.554135 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-05-26 04:47:57.554792 | orchestrator | Monday 26 May 2025 04:47:57 +0000 (0:00:00.131) 0:01:05.770 ************
2025-05-26 04:47:57.690300 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:47:57.690397 | orchestrator |
2025-05-26 04:47:57.690872 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-05-26 04:47:57.691928 |
orchestrator | Monday 26 May 2025 04:47:57 +0000 (0:00:00.136) 0:01:05.907 ************ 2025-05-26 04:47:57.846378 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})  2025-05-26 04:47:57.846613 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})  2025-05-26 04:47:57.846893 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:47:57.847284 | orchestrator | 2025-05-26 04:47:57.847700 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-26 04:47:57.848047 | orchestrator | Monday 26 May 2025 04:47:57 +0000 (0:00:00.159) 0:01:06.067 ************ 2025-05-26 04:47:57.991034 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})  2025-05-26 04:47:57.991424 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})  2025-05-26 04:47:57.992430 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:47:57.993399 | orchestrator | 2025-05-26 04:47:57.993860 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-26 04:47:57.994614 | orchestrator | Monday 26 May 2025 04:47:57 +0000 (0:00:00.144) 0:01:06.211 ************ 2025-05-26 04:47:58.137646 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})  2025-05-26 04:47:58.138126 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})  2025-05-26 
04:47:58.138688 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:47:58.140011 | orchestrator | 2025-05-26 04:47:58.141656 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-26 04:47:58.141680 | orchestrator | Monday 26 May 2025 04:47:58 +0000 (0:00:00.146) 0:01:06.357 ************ 2025-05-26 04:47:58.282939 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})  2025-05-26 04:47:58.283699 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})  2025-05-26 04:47:58.284312 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:47:58.285791 | orchestrator | 2025-05-26 04:47:58.287457 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-26 04:47:58.288077 | orchestrator | Monday 26 May 2025 04:47:58 +0000 (0:00:00.145) 0:01:06.503 ************ 2025-05-26 04:47:58.423932 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})  2025-05-26 04:47:58.424096 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})  2025-05-26 04:47:58.426824 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:47:58.427869 | orchestrator | 2025-05-26 04:47:58.428859 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-26 04:47:58.429453 | orchestrator | Monday 26 May 2025 04:47:58 +0000 (0:00:00.139) 0:01:06.643 ************ 2025-05-26 04:47:58.574100 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 
'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})  2025-05-26 04:47:58.574288 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})  2025-05-26 04:47:58.574679 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:47:58.575791 | orchestrator | 2025-05-26 04:47:58.578591 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-26 04:47:58.579107 | orchestrator | Monday 26 May 2025 04:47:58 +0000 (0:00:00.150) 0:01:06.793 ************ 2025-05-26 04:47:58.936032 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})  2025-05-26 04:47:58.937637 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})  2025-05-26 04:47:58.938520 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:47:58.940267 | orchestrator | 2025-05-26 04:47:58.940869 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-26 04:47:58.941375 | orchestrator | Monday 26 May 2025 04:47:58 +0000 (0:00:00.361) 0:01:07.155 ************ 2025-05-26 04:47:59.089300 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})  2025-05-26 04:47:59.090093 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})  2025-05-26 04:47:59.091218 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:47:59.092105 | orchestrator | 2025-05-26 04:47:59.094427 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-26 
04:47:59.094887 | orchestrator | Monday 26 May 2025 04:47:59 +0000 (0:00:00.153) 0:01:07.309 ************ 2025-05-26 04:47:59.582734 | orchestrator | ok: [testbed-node-5] 2025-05-26 04:47:59.583381 | orchestrator | 2025-05-26 04:47:59.584382 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-26 04:47:59.586148 | orchestrator | Monday 26 May 2025 04:47:59 +0000 (0:00:00.493) 0:01:07.802 ************ 2025-05-26 04:48:00.066359 | orchestrator | ok: [testbed-node-5] 2025-05-26 04:48:00.066715 | orchestrator | 2025-05-26 04:48:00.067930 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-26 04:48:00.068770 | orchestrator | Monday 26 May 2025 04:48:00 +0000 (0:00:00.483) 0:01:08.286 ************ 2025-05-26 04:48:00.209433 | orchestrator | ok: [testbed-node-5] 2025-05-26 04:48:00.211614 | orchestrator | 2025-05-26 04:48:00.211651 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-26 04:48:00.212826 | orchestrator | Monday 26 May 2025 04:48:00 +0000 (0:00:00.141) 0:01:08.427 ************ 2025-05-26 04:48:00.382161 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'vg_name': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'}) 2025-05-26 04:48:00.382824 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'vg_name': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'}) 2025-05-26 04:48:00.383880 | orchestrator | 2025-05-26 04:48:00.384905 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-26 04:48:00.385769 | orchestrator | Monday 26 May 2025 04:48:00 +0000 (0:00:00.173) 0:01:08.601 ************ 2025-05-26 04:48:00.523911 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 
'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})  2025-05-26 04:48:00.524069 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})  2025-05-26 04:48:00.525120 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:48:00.526212 | orchestrator | 2025-05-26 04:48:00.527058 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-26 04:48:00.527669 | orchestrator | Monday 26 May 2025 04:48:00 +0000 (0:00:00.142) 0:01:08.744 ************ 2025-05-26 04:48:00.690302 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})  2025-05-26 04:48:00.691239 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})  2025-05-26 04:48:00.692346 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:48:00.693571 | orchestrator | 2025-05-26 04:48:00.694767 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-26 04:48:00.695682 | orchestrator | Monday 26 May 2025 04:48:00 +0000 (0:00:00.164) 0:01:08.909 ************ 2025-05-26 04:48:00.835185 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2271cd7c-c83a-5004-8392-4222139fb32e', 'data_vg': 'ceph-2271cd7c-c83a-5004-8392-4222139fb32e'})  2025-05-26 04:48:00.837525 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880', 'data_vg': 'ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880'})  2025-05-26 04:48:00.838600 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:48:00.840693 | orchestrator | 2025-05-26 04:48:00.842165 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-26 
04:48:00.842906 | orchestrator | Monday 26 May 2025 04:48:00 +0000 (0:00:00.139) 0:01:09.049 ************ 2025-05-26 04:48:00.972855 | orchestrator | ok: [testbed-node-5] => { 2025-05-26 04:48:00.973700 | orchestrator |  "lvm_report": { 2025-05-26 04:48:00.974766 | orchestrator |  "lv": [ 2025-05-26 04:48:00.975793 | orchestrator |  { 2025-05-26 04:48:00.977236 | orchestrator |  "lv_name": "osd-block-2271cd7c-c83a-5004-8392-4222139fb32e", 2025-05-26 04:48:00.977681 | orchestrator |  "vg_name": "ceph-2271cd7c-c83a-5004-8392-4222139fb32e" 2025-05-26 04:48:00.978741 | orchestrator |  }, 2025-05-26 04:48:00.980007 | orchestrator |  { 2025-05-26 04:48:00.980600 | orchestrator |  "lv_name": "osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880", 2025-05-26 04:48:00.981588 | orchestrator |  "vg_name": "ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880" 2025-05-26 04:48:00.982061 | orchestrator |  } 2025-05-26 04:48:00.982866 | orchestrator |  ], 2025-05-26 04:48:00.983441 | orchestrator |  "pv": [ 2025-05-26 04:48:00.984316 | orchestrator |  { 2025-05-26 04:48:00.984980 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-26 04:48:00.985844 | orchestrator |  "vg_name": "ceph-2271cd7c-c83a-5004-8392-4222139fb32e" 2025-05-26 04:48:00.986482 | orchestrator |  }, 2025-05-26 04:48:00.987433 | orchestrator |  { 2025-05-26 04:48:00.988176 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-26 04:48:00.989051 | orchestrator |  "vg_name": "ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880" 2025-05-26 04:48:00.990114 | orchestrator |  } 2025-05-26 04:48:00.990514 | orchestrator |  ] 2025-05-26 04:48:00.991081 | orchestrator |  } 2025-05-26 04:48:00.991958 | orchestrator | } 2025-05-26 04:48:00.992395 | orchestrator | 2025-05-26 04:48:00.993620 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-26 04:48:00.993705 | orchestrator | 2025-05-26 04:48:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
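The steps "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" and "Create list of VG/LV names" above can be sketched in plain Python. The report data below is copied from the lvm_report printed in the log; the combining code is an illustrative reconstruction of what those tasks compute, not the play's actual Jinja expressions.

```python
import json

# JSON in the shape produced by `lvs --reportformat json` and
# `pvs --reportformat json`; values taken from the lvm_report above.
lvs_json = json.loads("""
{"report": [{"lv": [
  {"lv_name": "osd-block-2271cd7c-c83a-5004-8392-4222139fb32e",
   "vg_name": "ceph-2271cd7c-c83a-5004-8392-4222139fb32e"},
  {"lv_name": "osd-block-d953f63c-8039-5fa8-9cb1-6d3fed502880",
   "vg_name": "ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880"}]}]}
""")
pvs_json = json.loads("""
{"report": [{"pv": [
  {"pv_name": "/dev/sdb", "vg_name": "ceph-2271cd7c-c83a-5004-8392-4222139fb32e"},
  {"pv_name": "/dev/sdc", "vg_name": "ceph-d953f63c-8039-5fa8-9cb1-6d3fed502880"}]}]}
""")

# Combine both reports into a single structure like the printed lvm_report.
lvm_report = {
    "lv": lvs_json["report"][0]["lv"],
    "pv": pvs_json["report"][0]["pv"],
}

# Build "<vg>/<lv>" names, usable to check that each lvm_volumes entry exists.
vg_lv_names = [f"{lv['vg_name']}/{lv['lv_name']}" for lv in lvm_report["lv"]]
print(json.dumps(lvm_report, indent=2))
print(vg_lv_names)
```

With names like these in hand, the subsequent "Fail if ... defined in lvm_volumes is missing" tasks become simple membership checks.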
2025-05-26 04:48:00.993866 | orchestrator | 2025-05-26 04:48:00 | INFO  | Please wait and do not abort execution.
2025-05-26 04:48:00.994682 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-26 04:48:00.995274 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-26 04:48:00.995713 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-05-26 04:48:00.996355 | orchestrator |
2025-05-26 04:48:00.997133 | orchestrator |
2025-05-26 04:48:00.997424 | orchestrator |
2025-05-26 04:48:00.998095 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 04:48:00.998399 | orchestrator | Monday 26 May 2025 04:48:00 +0000 (0:00:00.141) 0:01:09.190 ************
2025-05-26 04:48:00.999296 | orchestrator | ===============================================================================
2025-05-26 04:48:00.999382 | orchestrator | Create block VGs -------------------------------------------------------- 6.22s
2025-05-26 04:48:00.999974 | orchestrator | Create block LVs -------------------------------------------------------- 4.02s
2025-05-26 04:48:01.000175 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.79s
2025-05-26 04:48:01.000670 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.71s
2025-05-26 04:48:01.000865 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.52s
2025-05-26 04:48:01.001227 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.50s
2025-05-26 04:48:01.001404 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.47s
2025-05-26 04:48:01.001857 | orchestrator | Add known partitions to the list of available block devices ------------- 1.27s
2025-05-26 04:48:01.002308 | orchestrator | Add known links to the list of available block devices ------------------ 1.18s
2025-05-26 04:48:01.002552 | orchestrator | Add known partitions to the list of available block devices ------------- 1.01s
2025-05-26 04:48:01.002653 | orchestrator | Print LVM report data --------------------------------------------------- 0.89s
2025-05-26 04:48:01.003427 | orchestrator | Add known links to the list of available block devices ------------------ 0.88s
2025-05-26 04:48:01.004420 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s
2025-05-26 04:48:01.006073 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.70s
2025-05-26 04:48:01.006636 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s
2025-05-26 04:48:01.008286 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.67s
2025-05-26 04:48:01.008617 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.65s
2025-05-26 04:48:01.009925 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2025-05-26 04:48:01.013669 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.64s
2025-05-26 04:48:01.016231 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s
2025-05-26 04:48:03.260857 | orchestrator | Registering Redlock._acquired_script
2025-05-26 04:48:03.260964 | orchestrator | Registering Redlock._extend_script
2025-05-26 04:48:03.260979 | orchestrator | Registering Redlock._release_script
2025-05-26 04:48:03.325987 | orchestrator | 2025-05-26 04:48:03 | INFO  | Task b2512729-a3a2-4848-8ce6-893c3c566a20 (facts) was prepared for execution.
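The guard tasks skipped earlier ("Fail if size of DB LVs on ceph_db_devices > available", "Fail if DB LV size < 30 GiB ...") reduce to arithmetic of roughly this shape. This is an illustrative sketch under that reading; the play itself uses Ansible tasks, and the function and parameter names here are invented:

```python
GIB = 1024 ** 3  # one GiB in bytes

def check_db_lv_size(db_lv_bytes: int, vg_free_bytes: int, num_lvs: int) -> None:
    """Mirror the two skipped guards: the DB LVs must fit into the free
    space of their VG, and each DB LV must be at least 30 GiB."""
    needed = db_lv_bytes * num_lvs
    if needed > vg_free_bytes:
        raise ValueError(f"size of DB LVs ({needed}) > available ({vg_free_bytes})")
    if db_lv_bytes < 30 * GIB:
        raise ValueError("DB LV size < 30 GiB")

# Two 64 GiB DB LVs (one per OSD) on a VG with 200 GiB free pass both checks.
check_db_lv_size(64 * GIB, 200 * GIB, 2)
```

In the run above both guards were skipped because testbed-node-5 defines no ceph_db_devices or ceph_wal_devices, so there was nothing to validate.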
2025-05-26 04:48:03.326103 | orchestrator | 2025-05-26 04:48:03 | INFO  | It takes a moment until task b2512729-a3a2-4848-8ce6-893c3c566a20 (facts) has been started and output is visible here.
2025-05-26 04:48:07.350658 | orchestrator |
2025-05-26 04:48:07.350771 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-26 04:48:07.351943 | orchestrator |
2025-05-26 04:48:07.353995 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-26 04:48:07.354222 | orchestrator | Monday 26 May 2025 04:48:07 +0000 (0:00:00.261) 0:00:00.261 ************
2025-05-26 04:48:08.800095 | orchestrator | ok: [testbed-manager]
2025-05-26 04:48:08.800270 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:48:08.800643 | orchestrator | ok: [testbed-node-4]
2025-05-26 04:48:08.801386 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:48:08.801882 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:48:08.806860 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:48:08.807267 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:48:08.807886 | orchestrator |
2025-05-26 04:48:08.808768 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-26 04:48:08.811701 | orchestrator | Monday 26 May 2025 04:48:08 +0000 (0:00:01.452) 0:00:01.714 ************
2025-05-26 04:48:08.967889 | orchestrator | skipping: [testbed-manager]
2025-05-26 04:48:09.052553 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:48:09.130789 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:48:09.206930 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:48:09.282542 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:48:09.991631 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:48:09.991843 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:48:09.994181 | orchestrator |
2025-05-26 04:48:09.995275 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-26 04:48:09.998081 | orchestrator |
2025-05-26 04:48:09.999189 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-26 04:48:10.000365 | orchestrator | Monday 26 May 2025 04:48:09 +0000 (0:00:01.193) 0:00:02.908 ************
2025-05-26 04:48:14.646327 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:48:14.647109 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:48:14.647439 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:48:14.651414 | orchestrator | ok: [testbed-manager]
2025-05-26 04:48:14.651443 | orchestrator | ok: [testbed-node-4]
2025-05-26 04:48:14.651455 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:48:14.651467 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:48:14.651479 | orchestrator |
2025-05-26 04:48:14.651516 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-26 04:48:14.651991 | orchestrator |
2025-05-26 04:48:14.652676 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-26 04:48:14.653620 | orchestrator | Monday 26 May 2025 04:48:14 +0000 (0:00:04.656) 0:00:07.564 ************
2025-05-26 04:48:14.801571 | orchestrator | skipping: [testbed-manager]
2025-05-26 04:48:14.875388 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:48:14.949907 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:48:15.024794 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:48:15.098710 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:48:15.140838 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:48:15.141151 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:48:15.142223 | orchestrator |
2025-05-26 04:48:15.144426 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 04:48:15.144467 | orchestrator | 2025-05-26 04:48:15 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-26 04:48:15.144482 | orchestrator | 2025-05-26 04:48:15 | INFO  | Please wait and do not abort execution.
2025-05-26 04:48:15.144598 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 04:48:15.145196 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 04:48:15.146251 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 04:48:15.146589 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 04:48:15.147544 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 04:48:15.147987 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 04:48:15.148514 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 04:48:15.149559 | orchestrator |
2025-05-26 04:48:15.150190 | orchestrator |
2025-05-26 04:48:15.151246 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 04:48:15.151781 | orchestrator | Monday 26 May 2025 04:48:15 +0000 (0:00:00.495) 0:00:08.059 ************
2025-05-26 04:48:15.152740 | orchestrator | ===============================================================================
2025-05-26 04:48:15.153268 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.66s
2025-05-26 04:48:15.153921 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.45s
2025-05-26 04:48:15.154375 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.19s
2025-05-26 04:48:15.154906 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2025-05-26 04:48:15.771695 | orchestrator |
2025-05-26 04:48:15.774474 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon May 26 04:48:15 UTC 2025
2025-05-26 04:48:15.774542 | orchestrator |
2025-05-26 04:48:17.465009 | orchestrator | 2025-05-26 04:48:17 | INFO  | Collection nutshell is prepared for execution
2025-05-26 04:48:17.465147 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [0] - dotfiles
2025-05-26 04:48:17.469748 | orchestrator | Registering Redlock._acquired_script
2025-05-26 04:48:17.469834 | orchestrator | Registering Redlock._extend_script
2025-05-26 04:48:17.469848 | orchestrator | Registering Redlock._release_script
2025-05-26 04:48:17.474241 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [0] - homer
2025-05-26 04:48:17.474275 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [0] - netdata
2025-05-26 04:48:17.474288 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [0] - openstackclient
2025-05-26 04:48:17.474299 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [0] - phpmyadmin
2025-05-26 04:48:17.474331 | orchestrator | 2025-05-26 04:48:17 | INFO  | A [0] - common
2025-05-26 04:48:17.476242 | orchestrator | 2025-05-26 04:48:17 | INFO  | A [1] -- loadbalancer
2025-05-26 04:48:17.476365 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [2] --- opensearch
2025-05-26 04:48:17.476379 | orchestrator | 2025-05-26 04:48:17 | INFO  | A [2] --- mariadb-ng
2025-05-26 04:48:17.476458 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [3] ---- horizon
2025-05-26 04:48:17.476473 | orchestrator | 2025-05-26 04:48:17 | INFO  | A [3] ---- keystone
2025-05-26 04:48:17.476484 | orchestrator | 2025-05-26 04:48:17 | INFO  | A [4] ----- neutron
2025-05-26 04:48:17.476527 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [5] ------ wait-for-nova
2025-05-26 04:48:17.476539 | orchestrator | 2025-05-26 04:48:17 | INFO  | A [5] ------ octavia
2025-05-26 04:48:17.477016 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [4] ----- barbican
2025-05-26 04:48:17.477037 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [4] ----- designate
2025-05-26 04:48:17.477095 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [4] ----- ironic
2025-05-26 04:48:17.477552 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [4] ----- placement
2025-05-26 04:48:17.477573 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [4] ----- magnum
2025-05-26 04:48:17.477933 | orchestrator | 2025-05-26 04:48:17 | INFO  | A [1] -- openvswitch
2025-05-26 04:48:17.477953 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [2] --- ovn
2025-05-26 04:48:17.478203 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [1] -- memcached
2025-05-26 04:48:17.478226 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [1] -- redis
2025-05-26 04:48:17.478350 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [1] -- rabbitmq-ng
2025-05-26 04:48:17.478368 | orchestrator | 2025-05-26 04:48:17 | INFO  | A [0] - kubernetes
2025-05-26 04:48:17.479961 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [1] -- kubeconfig
2025-05-26 04:48:17.479982 | orchestrator | 2025-05-26 04:48:17 | INFO  | A [1] -- copy-kubeconfig
2025-05-26 04:48:17.480392 | orchestrator | 2025-05-26 04:48:17 | INFO  | A [0] - ceph
2025-05-26 04:48:17.481590 | orchestrator | 2025-05-26 04:48:17 | INFO  | A [1] -- ceph-pools
2025-05-26 04:48:17.481611 | orchestrator | 2025-05-26 04:48:17 | INFO  | A [2] --- copy-ceph-keys
2025-05-26 04:48:17.481683 | orchestrator | 2025-05-26 04:48:17 | INFO  | A [3] ---- cephclient
2025-05-26 04:48:17.481698 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-05-26 04:48:17.481709 | orchestrator | 2025-05-26 04:48:17 | INFO  | A [4] ----- wait-for-keystone
2025-05-26 04:48:17.481888 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [5] ------ kolla-ceph-rgw
2025-05-26 04:48:17.481908 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [5] ------ glance
2025-05-26 04:48:17.482173 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [5] ------ cinder
2025-05-26 04:48:17.482216 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [5] ------ nova
2025-05-26 04:48:17.482444 | orchestrator | 2025-05-26 04:48:17 | INFO  | A [4] ----- prometheus
2025-05-26 04:48:17.482545 | orchestrator | 2025-05-26 04:48:17 | INFO  | D [5] ------ grafana
2025-05-26 04:48:17.678297 | orchestrator | 2025-05-26 04:48:17 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-05-26 04:48:17.678403 | orchestrator | 2025-05-26 04:48:17 | INFO  | Tasks are running in the background
2025-05-26 04:48:20.391363 | orchestrator | 2025-05-26 04:48:20 | INFO  | No task IDs specified, wait for all currently running tasks
2025-05-26 04:48:22.483051 | orchestrator | 2025-05-26 04:48:22 | INFO  | Task fad395e7-b9c7-4365-b9f9-4756646e08e5 is in state STARTED
2025-05-26 04:48:22.483233 | orchestrator | 2025-05-26 04:48:22 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state STARTED
2025-05-26 04:48:22.483772 | orchestrator | 2025-05-26 04:48:22 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:48:22.485708 | orchestrator | 2025-05-26 04:48:22 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:48:22.486210 | orchestrator | 2025-05-26 04:48:22 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:48:22.486889 | orchestrator | 2025-05-26 04:48:22 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:48:22.487473 | orchestrator | 2025-05-26 04:48:22 | INFO  | Task 579f3a32-9ef3-4ccb-a4ed-71cf37ee0159 is in state STARTED
2025-05-26 04:48:22.487741 | orchestrator | 2025-05-26 04:48:22 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:48:25.525038 | orchestrator | 2025-05-26 04:48:25 | INFO  | Task fad395e7-b9c7-4365-b9f9-4756646e08e5 is in state STARTED
2025-05-26 04:48:25.525166 | orchestrator | 2025-05-26 04:48:25 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state STARTED
2025-05-26 04:48:25.525242 | orchestrator | 2025-05-26 04:48:25 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:48:25.526181 | orchestrator | 2025-05-26 04:48:25 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:48:25.526220 | orchestrator | 2025-05-26 04:48:25 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:48:25.527752 | orchestrator | 2025-05-26 04:48:25 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:48:25.528464 | orchestrator | 2025-05-26 04:48:25 | INFO  | Task 579f3a32-9ef3-4ccb-a4ed-71cf37ee0159 is in state STARTED
2025-05-26 04:48:25.528616 | orchestrator | 2025-05-26 04:48:25 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:48:28.590328 | orchestrator | 2025-05-26 04:48:28 | INFO  | Task fad395e7-b9c7-4365-b9f9-4756646e08e5 is in state STARTED
2025-05-26 04:48:28.593589 | orchestrator | 2025-05-26 04:48:28 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state STARTED
2025-05-26 04:48:28.594222 | orchestrator | 2025-05-26 04:48:28 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:48:28.596578 | orchestrator | 2025-05-26 04:48:28 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:48:28.597515 | orchestrator | 2025-05-26 04:48:28 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:48:28.601576 | orchestrator | 2025-05-26 04:48:28 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:48:28.608150 | orchestrator | 2025-05-26 04:48:28 | INFO  | Task 579f3a32-9ef3-4ccb-a4ed-71cf37ee0159 is in state STARTED
2025-05-26 04:48:28.608232 | orchestrator | 2025-05-26 04:48:28 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:48:31.682327 | orchestrator | 2025-05-26 04:48:31 | INFO  | Task fad395e7-b9c7-4365-b9f9-4756646e08e5 is in state STARTED
2025-05-26 04:48:31.682454 | orchestrator | 2025-05-26 04:48:31 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state STARTED
2025-05-26 04:48:31.683283 | orchestrator | 2025-05-26 04:48:31 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:48:31.683318 | orchestrator | 2025-05-26 04:48:31 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:48:31.684613 | orchestrator | 2025-05-26 04:48:31 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:48:31.684640 | orchestrator | 2025-05-26 04:48:31 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:48:31.684652 | orchestrator | 2025-05-26 04:48:31 | INFO  | Task 579f3a32-9ef3-4ccb-a4ed-71cf37ee0159 is in state STARTED
2025-05-26 04:48:31.684663 | orchestrator | 2025-05-26 04:48:31 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:48:34.732559 | orchestrator | 2025-05-26 04:48:34 | INFO  | Task fad395e7-b9c7-4365-b9f9-4756646e08e5 is in state STARTED
2025-05-26 04:48:34.732759 | orchestrator | 2025-05-26 04:48:34 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state STARTED
2025-05-26 04:48:34.735055 | orchestrator | 2025-05-26 04:48:34 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:48:34.737893 | orchestrator | 2025-05-26 04:48:34 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:48:34.739325 | orchestrator | 2025-05-26 04:48:34 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:48:34.742209 | orchestrator | 2025-05-26 04:48:34 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:48:34.743824 | orchestrator | 2025-05-26 04:48:34 | INFO  | Task 579f3a32-9ef3-4ccb-a4ed-71cf37ee0159 is in state STARTED
2025-05-26 04:48:34.743851 | orchestrator | 2025-05-26 04:48:34 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:48:37.826153 | orchestrator | 2025-05-26 04:48:37 | INFO  | Task fad395e7-b9c7-4365-b9f9-4756646e08e5 is in state STARTED
2025-05-26 04:48:37.831081 | orchestrator | 2025-05-26 04:48:37 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state STARTED
2025-05-26 04:48:37.831123 | orchestrator | 2025-05-26 04:48:37 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:48:37.831133 | orchestrator | 2025-05-26 04:48:37 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:48:37.832704 | orchestrator | 2025-05-26 04:48:37 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:48:37.832737 | orchestrator | 2025-05-26 04:48:37 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:48:37.832744 | orchestrator | 2025-05-26 04:48:37 | INFO  | Task 579f3a32-9ef3-4ccb-a4ed-71cf37ee0159 is in state STARTED
2025-05-26 04:48:37.832752 | orchestrator | 2025-05-26 04:48:37 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:48:40.883536 | orchestrator | 2025-05-26 04:48:40 | INFO  | Task fad395e7-b9c7-4365-b9f9-4756646e08e5 is in state STARTED
2025-05-26 04:48:40.885319 | orchestrator | 2025-05-26 04:48:40 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state STARTED
2025-05-26 04:48:40.887959 | orchestrator | 2025-05-26 04:48:40 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:48:40.888279 | orchestrator | 2025-05-26 04:48:40 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:48:40.890810 | orchestrator | 2025-05-26 04:48:40 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:48:40.891052 | orchestrator | 2025-05-26 04:48:40 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:48:40.894419 | orchestrator | 2025-05-26 04:48:40 | INFO  | Task 579f3a32-9ef3-4ccb-a4ed-71cf37ee0159 is in state STARTED
2025-05-26 04:48:40.894457 | orchestrator | 2025-05-26 04:48:40 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:48:43.968652 | orchestrator | 2025-05-26 04:48:43 | INFO  | Task fad395e7-b9c7-4365-b9f9-4756646e08e5 is in state STARTED
2025-05-26 04:48:43.971169 | orchestrator | 2025-05-26 04:48:43 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state STARTED
2025-05-26 04:48:43.972375 | orchestrator | 2025-05-26 04:48:43 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:48:43.975377 | orchestrator | 2025-05-26 04:48:43 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:48:43.983891 | orchestrator | 2025-05-26 04:48:43 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:48:43.983961 | orchestrator | 2025-05-26 04:48:43 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:48:43.991627 | orchestrator | 2025-05-26 04:48:43 | INFO  | Task 579f3a32-9ef3-4ccb-a4ed-71cf37ee0159 is in state STARTED
2025-05-26 04:48:43.991695 | orchestrator | 2025-05-26 04:48:43 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:48:47.043226 | orchestrator | 2025-05-26 04:48:47 | INFO  | Task fad395e7-b9c7-4365-b9f9-4756646e08e5 is in state STARTED
2025-05-26 04:48:47.045775 | orchestrator | 2025-05-26 04:48:47 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state STARTED
2025-05-26 04:48:47.054416 | orchestrator | 2025-05-26 04:48:47 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:48:47.054586 | orchestrator | 2025-05-26 04:48:47 | INFO  | Task af43fe52-b9da-4b19-984b-54776795c8cd is in state STARTED
2025-05-26 04:48:47.060588 | orchestrator | 2025-05-26 04:48:47 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:48:47.063577 | orchestrator | 2025-05-26
04:48:47 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED 2025-05-26 04:48:47.071788 | orchestrator | 2025-05-26 04:48:47.071843 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-05-26 04:48:47.071858 | orchestrator | 2025-05-26 04:48:47.071869 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-05-26 04:48:47.071880 | orchestrator | Monday 26 May 2025 04:48:30 +0000 (0:00:00.846) 0:00:00.847 ************ 2025-05-26 04:48:47.071892 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:48:47.071903 | orchestrator | changed: [testbed-manager] 2025-05-26 04:48:47.071914 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:48:47.071925 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:48:47.071935 | orchestrator | changed: [testbed-node-3] 2025-05-26 04:48:47.071946 | orchestrator | changed: [testbed-node-4] 2025-05-26 04:48:47.071957 | orchestrator | changed: [testbed-node-5] 2025-05-26 04:48:47.071968 | orchestrator | 2025-05-26 04:48:47.071978 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-05-26 04:48:47.071989 | orchestrator | Monday 26 May 2025 04:48:34 +0000 (0:00:04.400) 0:00:05.247 ************ 2025-05-26 04:48:47.072001 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-26 04:48:47.072012 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-26 04:48:47.072048 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-26 04:48:47.072060 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-26 04:48:47.072070 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-26 04:48:47.072081 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-26 04:48:47.072091 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-26 04:48:47.072102 | orchestrator | 2025-05-26 04:48:47.072112 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-05-26 04:48:47.072123 | orchestrator | Monday 26 May 2025 04:48:36 +0000 (0:00:02.025) 0:00:07.272 ************ 2025-05-26 04:48:47.072138 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-26 04:48:35.355775', 'end': '2025-05-26 04:48:35.360062', 'delta': '0:00:00.004287', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-26 04:48:47.072154 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-26 04:48:35.400829', 'end': '2025-05-26 04:48:35.408987', 'delta': '0:00:00.008158', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-26 04:48:47.072166 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-26 04:48:35.674540', 'end': '2025-05-26 04:48:35.682928', 'delta': '0:00:00.008388', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-26 04:48:47.072211 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-26 04:48:35.881680', 'end': '2025-05-26 04:48:35.890773', 'delta': '0:00:00.009093', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-26 04:48:47.072233 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-26 04:48:36.077589', 'end': '2025-05-26 04:48:36.086741', 'delta': '0:00:00.009152', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-26 04:48:47.072273 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-26 04:48:36.246792', 'end': '2025-05-26 04:48:36.255053', 'delta': '0:00:00.008261', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-26 04:48:47.072285 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-26 04:48:36.335001', 'end': '2025-05-26 04:48:36.346172', 'delta': '0:00:00.011171', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-26 04:48:47.072296 | orchestrator | 2025-05-26 04:48:47.072308 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2025-05-26 04:48:47.072319 | orchestrator | Monday 26 May 2025 04:48:39 +0000 (0:00:02.273) 0:00:09.545 ************ 2025-05-26 04:48:47.072329 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-26 04:48:47.072340 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-26 04:48:47.072351 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-26 04:48:47.072362 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-26 04:48:47.072374 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-26 04:48:47.072387 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-26 04:48:47.072400 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-26 04:48:47.072412 | orchestrator | 2025-05-26 04:48:47.072425 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-05-26 04:48:47.072438 | orchestrator | Monday 26 May 2025 04:48:41 +0000 (0:00:02.354) 0:00:11.899 ************ 2025-05-26 04:48:47.072450 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-05-26 04:48:47.072463 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-05-26 04:48:47.072475 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-05-26 04:48:47.072515 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-05-26 04:48:47.072527 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-05-26 04:48:47.072551 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-05-26 04:48:47.072562 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-05-26 04:48:47.072573 | orchestrator | 2025-05-26 04:48:47.072584 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-26 04:48:47.072603 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-26 04:48:47.072616 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-26 04:48:47.072627 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-26 04:48:47.072638 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-26 04:48:47.072649 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-26 04:48:47.072659 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-26 04:48:47.074421 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-26 04:48:47.074461 | orchestrator | 2025-05-26 04:48:47.074473 | orchestrator | 2025-05-26 04:48:47.074515 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-26 04:48:47.074534 | orchestrator | Monday 26 May 2025 04:48:45 +0000 (0:00:03.928) 0:00:15.828 ************ 2025-05-26 04:48:47.074554 | orchestrator | =============================================================================== 2025-05-26 04:48:47.074571 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.40s 2025-05-26 04:48:47.074588 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.93s 2025-05-26 04:48:47.074604 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.35s 2025-05-26 04:48:47.074615 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.27s 2025-05-26 04:48:47.074626 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 2.03s 2025-05-26 04:48:47.074636 | orchestrator | 2025-05-26 04:48:47 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:48:47.074647 | orchestrator | 2025-05-26 04:48:47 | INFO  | Task 579f3a32-9ef3-4ccb-a4ed-71cf37ee0159 is in state SUCCESS 2025-05-26 04:48:47.074658 | orchestrator | 2025-05-26 04:48:47 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:48:50.120978 | orchestrator | 2025-05-26 04:48:50 | INFO  | Task fad395e7-b9c7-4365-b9f9-4756646e08e5 is in state STARTED 2025-05-26 04:48:50.122491 | orchestrator | 2025-05-26 04:48:50 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state STARTED 2025-05-26 04:48:50.122588 | orchestrator | 2025-05-26 04:48:50 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:48:50.124335 | orchestrator | 2025-05-26 04:48:50 | INFO  | Task af43fe52-b9da-4b19-984b-54776795c8cd is in state STARTED 2025-05-26 04:48:50.125185 | orchestrator | 2025-05-26 04:48:50 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED 2025-05-26 04:48:50.126241 | orchestrator | 2025-05-26 04:48:50 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED 2025-05-26 04:48:50.127631 | orchestrator | 2025-05-26 04:48:50 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:48:50.127718 | orchestrator | 2025-05-26 04:48:50 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:48:53.187415 | orchestrator | 2025-05-26 04:48:53 | INFO  | Task fad395e7-b9c7-4365-b9f9-4756646e08e5 is in state STARTED 2025-05-26 04:48:53.187552 | orchestrator | 2025-05-26 04:48:53 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state STARTED 2025-05-26 04:48:53.187568 | orchestrator | 2025-05-26 04:48:53 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:48:53.195269 | orchestrator | 2025-05-26 04:48:53 | INFO  | Task af43fe52-b9da-4b19-984b-54776795c8cd is in state 
STARTED 2025-05-26 04:48:53.195314 | orchestrator | 2025-05-26 04:48:53 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED 2025-05-26 04:48:53.195326 | orchestrator | 2025-05-26 04:48:53 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED 2025-05-26 04:48:53.198590 | orchestrator | 2025-05-26 04:48:53 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:48:53.198626 | orchestrator | 2025-05-26 04:48:53 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:48:56.239982 | orchestrator | 2025-05-26 04:48:56 | INFO  | Task fad395e7-b9c7-4365-b9f9-4756646e08e5 is in state STARTED 2025-05-26 04:48:56.243753 | orchestrator | 2025-05-26 04:48:56 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state STARTED 2025-05-26 04:48:56.247745 | orchestrator | 2025-05-26 04:48:56 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:48:56.247775 | orchestrator | 2025-05-26 04:48:56 | INFO  | Task af43fe52-b9da-4b19-984b-54776795c8cd is in state STARTED 2025-05-26 04:48:56.247787 | orchestrator | 2025-05-26 04:48:56 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED 2025-05-26 04:48:56.253462 | orchestrator | 2025-05-26 04:48:56 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED 2025-05-26 04:48:56.256380 | orchestrator | 2025-05-26 04:48:56 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:48:56.256860 | orchestrator | 2025-05-26 04:48:56 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:48:59.293150 | orchestrator | 2025-05-26 04:48:59 | INFO  | Task fad395e7-b9c7-4365-b9f9-4756646e08e5 is in state STARTED 2025-05-26 04:48:59.293278 | orchestrator | 2025-05-26 04:48:59 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state STARTED 2025-05-26 04:48:59.293293 | orchestrator | 2025-05-26 04:48:59 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 
2025-05-26 04:48:59.293305 | orchestrator | 2025-05-26 04:48:59 | INFO  | Task af43fe52-b9da-4b19-984b-54776795c8cd is in state STARTED
2025-05-26 04:48:59.294536 | orchestrator | 2025-05-26 04:48:59 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:48:59.294565 | orchestrator | 2025-05-26 04:48:59 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:48:59.294577 | orchestrator | 2025-05-26 04:48:59 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:48:59.294589 | orchestrator | 2025-05-26 04:48:59 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:49:02.359809 | orchestrator | 2025-05-26 04:49:02 | INFO  | Task fad395e7-b9c7-4365-b9f9-4756646e08e5 is in state SUCCESS
2025-05-26 04:49:02.365228 | orchestrator | 2025-05-26 04:49:02 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state STARTED
2025-05-26 04:49:02.365287 | orchestrator | 2025-05-26 04:49:02 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:49:02.366560 | orchestrator | 2025-05-26 04:49:02 | INFO  | Task af43fe52-b9da-4b19-984b-54776795c8cd is in state STARTED
2025-05-26 04:49:02.373222 | orchestrator | 2025-05-26 04:49:02 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:49:02.373249 | orchestrator | 2025-05-26 04:49:02 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:49:02.373260 | orchestrator | 2025-05-26 04:49:02 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:49:02.373273 | orchestrator | 2025-05-26 04:49:02 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:49:05.432679 | orchestrator | 2025-05-26 04:49:05 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state STARTED
2025-05-26 04:49:05.432818 | orchestrator | 2025-05-26 04:49:05 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:49:05.432860 | orchestrator | 2025-05-26 04:49:05 | INFO  | Task af43fe52-b9da-4b19-984b-54776795c8cd is in state STARTED
2025-05-26 04:49:05.432874 | orchestrator | 2025-05-26 04:49:05 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:49:05.433802 | orchestrator | 2025-05-26 04:49:05 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:49:05.437164 | orchestrator | 2025-05-26 04:49:05 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:49:05.437189 | orchestrator | 2025-05-26 04:49:05 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:49:08.480383 | orchestrator | 2025-05-26 04:49:08 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state STARTED
2025-05-26 04:49:08.483068 | orchestrator | 2025-05-26 04:49:08 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:49:08.485107 | orchestrator | 2025-05-26 04:49:08 | INFO  | Task af43fe52-b9da-4b19-984b-54776795c8cd is in state STARTED
2025-05-26 04:49:08.486829 | orchestrator | 2025-05-26 04:49:08 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:49:08.488549 | orchestrator | 2025-05-26 04:49:08 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:49:08.489791 | orchestrator | 2025-05-26 04:49:08 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:49:08.490085 | orchestrator | 2025-05-26 04:49:08 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:49:11.573950 | orchestrator | 2025-05-26 04:49:11 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state STARTED
2025-05-26 04:49:11.574100 | orchestrator | 2025-05-26 04:49:11 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:49:11.574108 | orchestrator | 2025-05-26 04:49:11 | INFO  | Task af43fe52-b9da-4b19-984b-54776795c8cd is in state STARTED
2025-05-26 04:49:11.574113 | orchestrator | 2025-05-26 04:49:11 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:49:11.574117 | orchestrator | 2025-05-26 04:49:11 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:49:11.574121 | orchestrator | 2025-05-26 04:49:11 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:49:11.574126 | orchestrator | 2025-05-26 04:49:11 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:49:14.623366 | orchestrator | 2025-05-26 04:49:14 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state STARTED
2025-05-26 04:49:14.624037 | orchestrator | 2025-05-26 04:49:14 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:49:14.625158 | orchestrator | 2025-05-26 04:49:14 | INFO  | Task af43fe52-b9da-4b19-984b-54776795c8cd is in state STARTED
2025-05-26 04:49:14.626192 | orchestrator | 2025-05-26 04:49:14 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:49:14.627256 | orchestrator | 2025-05-26 04:49:14 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:49:14.628619 | orchestrator | 2025-05-26 04:49:14 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:49:14.628658 | orchestrator | 2025-05-26 04:49:14 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:49:17.666668 | orchestrator | 2025-05-26 04:49:17 | INFO  | Task edaf48cc-de53-400b-b959-6f1b11c74b59 is in state SUCCESS
2025-05-26 04:49:17.673489 | orchestrator | 2025-05-26 04:49:17 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:49:17.676958 | orchestrator | 2025-05-26 04:49:17 | INFO  | Task af43fe52-b9da-4b19-984b-54776795c8cd is in state STARTED
2025-05-26 04:49:17.679798 | orchestrator | 2025-05-26 04:49:17 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:49:17.682223 | orchestrator | 2025-05-26 04:49:17 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:49:17.683144 | orchestrator | 2025-05-26 04:49:17 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:49:17.683340 | orchestrator | 2025-05-26 04:49:17 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:49:20.719639 | orchestrator | 2025-05-26 04:49:20 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:49:20.721344 | orchestrator | 2025-05-26 04:49:20 | INFO  | Task af43fe52-b9da-4b19-984b-54776795c8cd is in state STARTED
2025-05-26 04:49:20.722457 | orchestrator | 2025-05-26 04:49:20 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:49:20.723415 | orchestrator | 2025-05-26 04:49:20 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:49:20.724231 | orchestrator | 2025-05-26 04:49:20 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:49:20.724260 | orchestrator | 2025-05-26 04:49:20 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:49:23.786723 | orchestrator | 2025-05-26 04:49:23 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:49:23.793793 | orchestrator | 2025-05-26 04:49:23 | INFO  | Task af43fe52-b9da-4b19-984b-54776795c8cd is in state STARTED
2025-05-26 04:49:23.800639 | orchestrator | 2025-05-26 04:49:23 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:49:23.805769 | orchestrator | 2025-05-26 04:49:23 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:49:23.805845 | orchestrator | 2025-05-26 04:49:23 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:49:23.805861 | orchestrator | 2025-05-26 04:49:23 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:49:26.854927 | orchestrator | 2025-05-26 04:49:26 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:49:26.855682 | orchestrator | 2025-05-26 04:49:26 | INFO  | Task af43fe52-b9da-4b19-984b-54776795c8cd is in state STARTED
2025-05-26 04:49:26.855698 | orchestrator | 2025-05-26 04:49:26 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state STARTED
2025-05-26 04:49:26.857174 | orchestrator | 2025-05-26 04:49:26 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:49:26.860271 | orchestrator | 2025-05-26 04:49:26 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:49:26.860304 | orchestrator | 2025-05-26 04:49:26 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:49:29.902088 | orchestrator | 2025-05-26 04:49:29 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:49:29.902184 | orchestrator | 2025-05-26 04:49:29 | INFO  | Task af43fe52-b9da-4b19-984b-54776795c8cd is in state STARTED
2025-05-26 04:49:29.903277 | orchestrator | 2025-05-26 04:49:29 | INFO  | Task a7e26726-0e5f-4eb6-a990-612b2eae498e is in state SUCCESS
2025-05-26 04:49:29.906886 | orchestrator |
2025-05-26 04:49:29.906958 | orchestrator |
2025-05-26 04:49:29.906966 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-05-26 04:49:29.906972 | orchestrator |
2025-05-26 04:49:29.906977 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-05-26 04:49:29.906983 | orchestrator | Monday 26 May 2025 04:48:28 +0000 (0:00:00.600) 0:00:00.600 ************
2025-05-26 04:49:29.906989 | orchestrator | ok: [testbed-manager] => {
2025-05-26 04:49:29.906996 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-05-26 04:49:29.907002 | orchestrator | }
2025-05-26 04:49:29.907006 | orchestrator |
2025-05-26 04:49:29.907011 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-05-26 04:49:29.907016 | orchestrator | Monday 26 May 2025 04:48:29 +0000 (0:00:00.139) 0:00:00.740 ************
2025-05-26 04:49:29.907021 | orchestrator | ok: [testbed-manager]
2025-05-26 04:49:29.907027 | orchestrator |
2025-05-26 04:49:29.907031 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-05-26 04:49:29.907036 | orchestrator | Monday 26 May 2025 04:48:30 +0000 (0:00:01.333) 0:00:02.074 ************
2025-05-26 04:49:29.907040 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-05-26 04:49:29.907045 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-05-26 04:49:29.907050 | orchestrator |
2025-05-26 04:49:29.907054 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-05-26 04:49:29.907059 | orchestrator | Monday 26 May 2025 04:48:32 +0000 (0:00:01.633) 0:00:03.707 ************
2025-05-26 04:49:29.907063 | orchestrator | changed: [testbed-manager]
2025-05-26 04:49:29.907068 | orchestrator |
2025-05-26 04:49:29.907072 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-05-26 04:49:29.907077 | orchestrator | Monday 26 May 2025 04:48:34 +0000 (0:00:02.256) 0:00:05.964 ************
2025-05-26 04:49:29.907081 | orchestrator | changed: [testbed-manager]
2025-05-26 04:49:29.907086 | orchestrator |
2025-05-26 04:49:29.907090 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-05-26 04:49:29.907095 | orchestrator | Monday 26 May 2025 04:48:35 +0000 (0:00:01.192) 0:00:07.157 ************
2025-05-26 04:49:29.907099 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-05-26 04:49:29.907104 | orchestrator | ok: [testbed-manager]
2025-05-26 04:49:29.907108 | orchestrator |
2025-05-26 04:49:29.907113 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-05-26 04:49:29.907117 | orchestrator | Monday 26 May 2025 04:48:59 +0000 (0:00:23.887) 0:00:31.045 ************
2025-05-26 04:49:29.907122 | orchestrator | changed: [testbed-manager]
2025-05-26 04:49:29.907126 | orchestrator |
2025-05-26 04:49:29.907131 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 04:49:29.907140 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 04:49:29.907145 | orchestrator |
2025-05-26 04:49:29.907150 | orchestrator |
2025-05-26 04:49:29.907155 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 04:49:29.907159 | orchestrator | Monday 26 May 2025 04:49:01 +0000 (0:00:01.762) 0:00:32.807 ************
2025-05-26 04:49:29.907164 | orchestrator | ===============================================================================
2025-05-26 04:49:29.907182 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 23.89s
2025-05-26 04:49:29.907187 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.26s
2025-05-26 04:49:29.907191 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.76s
2025-05-26 04:49:29.907196 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.63s
2025-05-26 04:49:29.907200 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.33s
2025-05-26 04:49:29.907205 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.19s
2025-05-26 04:49:29.907209 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.14s
2025-05-26 04:49:29.907214 | orchestrator |
2025-05-26 04:49:29.907218 | orchestrator |
2025-05-26 04:49:29.907223 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-05-26 04:49:29.907227 | orchestrator |
2025-05-26 04:49:29.907232 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-05-26 04:49:29.907236 | orchestrator | Monday 26 May 2025 04:48:29 +0000 (0:00:00.725) 0:00:00.725 ************
2025-05-26 04:49:29.907241 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-05-26 04:49:29.907247 | orchestrator |
2025-05-26 04:49:29.907252 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-05-26 04:49:29.907256 | orchestrator | Monday 26 May 2025 04:48:30 +0000 (0:00:00.728) 0:00:01.453 ************
2025-05-26 04:49:29.907261 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-05-26 04:49:29.907265 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-05-26 04:49:29.907270 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-05-26 04:49:29.907274 | orchestrator |
2025-05-26 04:49:29.907279 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-05-26 04:49:29.907283 | orchestrator | Monday 26 May 2025 04:48:31 +0000 (0:00:01.745) 0:00:03.198 ************
2025-05-26 04:49:29.907288 | orchestrator | changed: [testbed-manager]
2025-05-26 04:49:29.907292 | orchestrator |
2025-05-26 04:49:29.907297 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-05-26 04:49:29.907301 | orchestrator | Monday 26 May 2025 04:48:33 +0000 (0:00:01.532) 0:00:04.731
************ 2025-05-26 04:49:29.907316 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-05-26 04:49:29.907321 | orchestrator | ok: [testbed-manager] 2025-05-26 04:49:29.907326 | orchestrator | 2025-05-26 04:49:29.907330 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-05-26 04:49:29.907335 | orchestrator | Monday 26 May 2025 04:49:10 +0000 (0:00:37.446) 0:00:42.177 ************ 2025-05-26 04:49:29.907339 | orchestrator | changed: [testbed-manager] 2025-05-26 04:49:29.907344 | orchestrator | 2025-05-26 04:49:29.907348 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-05-26 04:49:29.907353 | orchestrator | Monday 26 May 2025 04:49:12 +0000 (0:00:01.393) 0:00:43.571 ************ 2025-05-26 04:49:29.907357 | orchestrator | ok: [testbed-manager] 2025-05-26 04:49:29.907362 | orchestrator | 2025-05-26 04:49:29.907366 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-05-26 04:49:29.907371 | orchestrator | Monday 26 May 2025 04:49:13 +0000 (0:00:00.907) 0:00:44.479 ************ 2025-05-26 04:49:29.907375 | orchestrator | changed: [testbed-manager] 2025-05-26 04:49:29.907380 | orchestrator | 2025-05-26 04:49:29.907384 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-05-26 04:49:29.907389 | orchestrator | Monday 26 May 2025 04:49:14 +0000 (0:00:01.738) 0:00:46.217 ************ 2025-05-26 04:49:29.907393 | orchestrator | changed: [testbed-manager] 2025-05-26 04:49:29.907398 | orchestrator | 2025-05-26 04:49:29.907402 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-05-26 04:49:29.907410 | orchestrator | Monday 26 May 2025 04:49:15 +0000 (0:00:00.848) 0:00:47.066 ************ 2025-05-26 04:49:29.907415 | orchestrator | changed: 
[testbed-manager] 2025-05-26 04:49:29.907419 | orchestrator | 2025-05-26 04:49:29.907424 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-05-26 04:49:29.907428 | orchestrator | Monday 26 May 2025 04:49:16 +0000 (0:00:00.807) 0:00:47.873 ************ 2025-05-26 04:49:29.907433 | orchestrator | ok: [testbed-manager] 2025-05-26 04:49:29.907438 | orchestrator | 2025-05-26 04:49:29.907444 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-26 04:49:29.907449 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-26 04:49:29.907454 | orchestrator | 2025-05-26 04:49:29.907459 | orchestrator | 2025-05-26 04:49:29.907482 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-26 04:49:29.907488 | orchestrator | Monday 26 May 2025 04:49:16 +0000 (0:00:00.381) 0:00:48.255 ************ 2025-05-26 04:49:29.907493 | orchestrator | =============================================================================== 2025-05-26 04:49:29.907498 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 37.45s 2025-05-26 04:49:29.907503 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.75s 2025-05-26 04:49:29.907511 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.74s 2025-05-26 04:49:29.907516 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.53s 2025-05-26 04:49:29.907522 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.39s 2025-05-26 04:49:29.907527 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.91s 2025-05-26 04:49:29.907532 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.85s 
2025-05-26 04:49:29.907537 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.81s
2025-05-26 04:49:29.907542 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.73s
2025-05-26 04:49:29.907547 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.38s
2025-05-26 04:49:29.907553 | orchestrator |
2025-05-26 04:49:29.907558 | orchestrator |
2025-05-26 04:49:29.907563 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-26 04:49:29.907568 | orchestrator |
2025-05-26 04:49:29.907573 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-26 04:49:29.907579 | orchestrator | Monday 26 May 2025 04:48:28 +0000 (0:00:00.380) 0:00:00.380 ************
2025-05-26 04:49:29.907584 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-05-26 04:49:29.907589 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-05-26 04:49:29.907594 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-05-26 04:49:29.907600 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-05-26 04:49:29.907605 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-05-26 04:49:29.907610 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-05-26 04:49:29.907615 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-05-26 04:49:29.907621 | orchestrator |
2025-05-26 04:49:29.907626 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-05-26 04:49:29.907631 | orchestrator |
2025-05-26 04:49:29.907636 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-05-26 04:49:29.907642 | orchestrator | Monday 26 May 2025 04:48:30 +0000 (0:00:01.781) 0:00:02.162 ************
2025-05-26 04:49:29.907656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-26 04:49:29.907666 | orchestrator |
2025-05-26 04:49:29.907672 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-05-26 04:49:29.907677 | orchestrator | Monday 26 May 2025 04:48:33 +0000 (0:00:03.356) 0:00:05.518 ************
2025-05-26 04:49:29.907682 | orchestrator | ok: [testbed-manager]
2025-05-26 04:49:29.907687 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:49:29.907692 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:49:29.907698 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:49:29.907703 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:49:29.907711 | orchestrator | ok: [testbed-node-4]
2025-05-26 04:49:29.907716 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:49:29.907721 | orchestrator |
2025-05-26 04:49:29.907727 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-05-26 04:49:29.907732 | orchestrator | Monday 26 May 2025 04:48:35 +0000 (0:00:01.835) 0:00:07.354 ************
2025-05-26 04:49:29.907737 | orchestrator | ok: [testbed-manager]
2025-05-26 04:49:29.907742 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:49:29.907747 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:49:29.907752 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:49:29.907757 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:49:29.907763 | orchestrator | ok: [testbed-node-4]
2025-05-26 04:49:29.907768 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:49:29.907773 | orchestrator |
2025-05-26 04:49:29.907778 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-05-26 04:49:29.907783 | orchestrator | Monday 26 May 2025 04:48:39 +0000 (0:00:03.583) 0:00:10.937 ************
2025-05-26 04:49:29.907788 | orchestrator | changed: [testbed-manager]
2025-05-26 04:49:29.907794 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:49:29.907799 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:49:29.907804 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:49:29.907810 | orchestrator | changed: [testbed-node-3]
2025-05-26 04:49:29.907815 | orchestrator | changed: [testbed-node-4]
2025-05-26 04:49:29.907820 | orchestrator | changed: [testbed-node-5]
2025-05-26 04:49:29.907826 | orchestrator |
2025-05-26 04:49:29.907830 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-05-26 04:49:29.907835 | orchestrator | Monday 26 May 2025 04:48:41 +0000 (0:00:02.798) 0:00:13.736 ************
2025-05-26 04:49:29.907839 | orchestrator | changed: [testbed-manager]
2025-05-26 04:49:29.907844 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:49:29.907848 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:49:29.907853 | orchestrator | changed: [testbed-node-4]
2025-05-26 04:49:29.907857 | orchestrator | changed: [testbed-node-3]
2025-05-26 04:49:29.907862 | orchestrator | changed: [testbed-node-5]
2025-05-26 04:49:29.907866 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:49:29.907870 | orchestrator |
2025-05-26 04:49:29.907875 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-05-26 04:49:29.907880 | orchestrator | Monday 26 May 2025 04:48:51 +0000 (0:00:09.415) 0:00:23.152 ************
2025-05-26 04:49:29.907884 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:49:29.907888 | orchestrator | changed: [testbed-node-4]
2025-05-26 04:49:29.907893 | orchestrator | changed: [testbed-node-5]
2025-05-26 04:49:29.907897 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:49:29.907902 | orchestrator | changed: [testbed-node-3]
2025-05-26 04:49:29.907906 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:49:29.907911 | orchestrator | changed: [testbed-manager]
2025-05-26 04:49:29.907915 | orchestrator |
2025-05-26 04:49:29.907920 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-05-26 04:49:29.907924 | orchestrator | Monday 26 May 2025 04:49:07 +0000 (0:00:16.175) 0:00:39.327 ************
2025-05-26 04:49:29.907930 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-26 04:49:29.907936 | orchestrator |
2025-05-26 04:49:29.907941 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-05-26 04:49:29.907953 | orchestrator | Monday 26 May 2025 04:49:08 +0000 (0:00:01.172) 0:00:40.499 ************
2025-05-26 04:49:29.907958 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-05-26 04:49:29.907962 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-05-26 04:49:29.907988 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-05-26 04:49:29.907993 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-05-26 04:49:29.907998 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-05-26 04:49:29.908002 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-05-26 04:49:29.908007 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-05-26 04:49:29.908011 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-05-26 04:49:29.908016 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-05-26 04:49:29.908020 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-05-26 04:49:29.908025 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-05-26 04:49:29.908029 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-05-26 04:49:29.908034 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-05-26 04:49:29.908038 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-05-26 04:49:29.908043 | orchestrator |
2025-05-26 04:49:29.908047 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-05-26 04:49:29.908052 | orchestrator | Monday 26 May 2025 04:49:14 +0000 (0:00:05.625) 0:00:46.125 ************
2025-05-26 04:49:29.908056 | orchestrator | ok: [testbed-manager]
2025-05-26 04:49:29.908061 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:49:29.908066 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:49:29.908071 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:49:29.908075 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:49:29.908080 | orchestrator | ok: [testbed-node-4]
2025-05-26 04:49:29.908084 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:49:29.908089 | orchestrator |
2025-05-26 04:49:29.908093 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-05-26 04:49:29.908098 | orchestrator | Monday 26 May 2025 04:49:15 +0000 (0:00:01.184) 0:00:47.309 ************
2025-05-26 04:49:29.908102 | orchestrator | changed: [testbed-manager]
2025-05-26 04:49:29.908107 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:49:29.908111 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:49:29.908116 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:49:29.908122 | orchestrator | changed: [testbed-node-3]
2025-05-26 04:49:29.908127 | orchestrator | changed: [testbed-node-4]
2025-05-26 04:49:29.908132 | orchestrator | changed: [testbed-node-5]
2025-05-26 04:49:29.908149 | orchestrator |
2025-05-26 04:49:29.908155 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-05-26 04:49:29.908163 | orchestrator | Monday 26 May 2025 04:49:17 +0000 (0:00:01.877) 0:00:49.187 ************
2025-05-26 04:49:29.908168 | orchestrator | ok: [testbed-manager]
2025-05-26 04:49:29.908172 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:49:29.908177 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:49:29.908181 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:49:29.908186 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:49:29.908190 | orchestrator | ok: [testbed-node-4]
2025-05-26 04:49:29.908195 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:49:29.908199 | orchestrator |
2025-05-26 04:49:29.908204 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-05-26 04:49:29.908208 | orchestrator | Monday 26 May 2025 04:49:19 +0000 (0:00:01.696) 0:00:50.884 ************
2025-05-26 04:49:29.908213 | orchestrator | ok: [testbed-manager]
2025-05-26 04:49:29.908217 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:49:29.908222 | orchestrator | ok: [testbed-node-4]
2025-05-26 04:49:29.908226 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:49:29.908231 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:49:29.908236 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:49:29.908246 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:49:29.908252 | orchestrator |
2025-05-26 04:49:29.908257 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-05-26 04:49:29.908262 | orchestrator | Monday 26 May 2025 04:49:20 +0000 (0:00:01.404) 0:00:52.288 ************
2025-05-26 04:49:29.908266 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-05-26 04:49:29.908274 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-26 04:49:29.908280 | orchestrator |
2025-05-26 04:49:29.908286 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-05-26 04:49:29.908292 | orchestrator | Monday 26 May 2025 04:49:21 +0000 (0:00:01.262) 0:00:53.551 ************
2025-05-26 04:49:29.908297 | orchestrator | changed: [testbed-manager]
2025-05-26 04:49:29.908302 | orchestrator |
2025-05-26 04:49:29.908307 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-05-26 04:49:29.908312 | orchestrator | Monday 26 May 2025 04:49:23 +0000 (0:00:01.630) 0:00:55.182 ************
2025-05-26 04:49:29.908318 | orchestrator | changed: [testbed-manager]
2025-05-26 04:49:29.908323 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:49:29.908328 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:49:29.908334 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:49:29.908338 | orchestrator | changed: [testbed-node-5]
2025-05-26 04:49:29.908343 | orchestrator | changed: [testbed-node-4]
2025-05-26 04:49:29.908348 | orchestrator | changed: [testbed-node-3]
2025-05-26 04:49:29.908354 | orchestrator |
2025-05-26 04:49:29.908360 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 04:49:29.908369 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 04:49:29.908376 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 04:49:29.908381 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 04:49:29.908386 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 04:49:29.908390 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 04:49:29.908395 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 04:49:29.908401 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 04:49:29.908407 | orchestrator |
2025-05-26 04:49:29.908413 | orchestrator |
2025-05-26 04:49:29.908418 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 04:49:29.908424 | orchestrator | Monday 26 May 2025 04:49:26 +0000 (0:00:03.281) 0:00:58.463 ************
2025-05-26 04:49:29.908429 | orchestrator | ===============================================================================
2025-05-26 04:49:29.908435 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 16.18s
2025-05-26 04:49:29.908441 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.41s
2025-05-26 04:49:29.908447 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.63s
2025-05-26 04:49:29.908452 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.58s
2025-05-26 04:49:29.908456 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.36s
2025-05-26 04:49:29.908501 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.28s
2025-05-26 04:49:29.908507 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.80s
2025-05-26 04:49:29.908511 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.88s
2025-05-26 04:49:29.908516 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.83s
2025-05-26 04:49:29.908520 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.78s
2025-05-26 04:49:29.908525 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.70s
2025-05-26 04:49:29.908534 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.63s
2025-05-26 04:49:29.908539 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.40s
2025-05-26 04:49:29.908543 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.26s
2025-05-26 04:49:29.908548 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.18s
2025-05-26 04:49:29.908553 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.17s
2025-05-26 04:49:29.908649 | orchestrator | 2025-05-26 04:49:29 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:49:29.908659 | orchestrator | 2025-05-26 04:49:29 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:49:29.908664 | orchestrator | 2025-05-26 04:49:29 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:49:32.958404 | orchestrator | 2025-05-26 04:49:32 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:49:32.959860 | orchestrator | 2025-05-26 04:49:32 | INFO  | Task af43fe52-b9da-4b19-984b-54776795c8cd is in state STARTED
2025-05-26 04:49:32.962463 | orchestrator | 2025-05-26 04:49:32 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:49:32.964638 | orchestrator | 2025-05-26 04:49:32 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:49:32.964668 | orchestrator | 2025-05-26 04:49:32 | INFO  | Wait 1 second(s) until the next check
[... identical four-task STARTED/wait cycles repeated every ~3 s from 04:49:36 through 04:50:15 ...]
2025-05-26 04:50:18.951583 | orchestrator | 2025-05-26 04:50:18 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:50:18.956787 | orchestrator | 2025-05-26 04:50:18 | INFO  | Task af43fe52-b9da-4b19-984b-54776795c8cd is in state SUCCESS
2025-05-26 04:50:18.956848 | orchestrator | 2025-05-26 04:50:18 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:50:18.956863 | orchestrator | 2025-05-26 04:50:18 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:50:18.956876 | orchestrator | 2025-05-26 04:50:18 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:50:22.005754 | orchestrator | 2025-05-26 04:50:22 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:50:22.006737 | orchestrator | 2025-05-26 04:50:22 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED
2025-05-26 04:50:22.008829 | orchestrator | 2025-05-26 04:50:22 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:50:22.008871 | orchestrator | 2025-05-26 04:50:22 | INFO  | Wait 1 second(s) until the next check
[... identical three-task STARTED/wait cycles repeated every ~3 s from 04:50:25 through 04:50:31 ...]
2025-05-26 04:50:34.194184 | orchestrator | 2025-05-26 04:50:34 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state
STARTED 2025-05-26 04:50:34.194951 | orchestrator | 2025-05-26 04:50:34 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED 2025-05-26 04:50:34.196239 | orchestrator | 2025-05-26 04:50:34 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:50:34.196374 | orchestrator | 2025-05-26 04:50:34 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:50:37.253175 | orchestrator | 2025-05-26 04:50:37 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:50:37.254298 | orchestrator | 2025-05-26 04:50:37 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED 2025-05-26 04:50:37.254463 | orchestrator | 2025-05-26 04:50:37 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:50:37.254581 | orchestrator | 2025-05-26 04:50:37 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:50:40.301141 | orchestrator | 2025-05-26 04:50:40 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:50:40.302246 | orchestrator | 2025-05-26 04:50:40 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED 2025-05-26 04:50:40.305151 | orchestrator | 2025-05-26 04:50:40 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:50:40.305200 | orchestrator | 2025-05-26 04:50:40 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:50:43.354832 | orchestrator | 2025-05-26 04:50:43 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:50:43.356739 | orchestrator | 2025-05-26 04:50:43 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED 2025-05-26 04:50:43.358713 | orchestrator | 2025-05-26 04:50:43 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:50:43.358769 | orchestrator | 2025-05-26 04:50:43 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:50:46.399666 | orchestrator | 
2025-05-26 04:50:46 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:50:46.401708 | orchestrator | 2025-05-26 04:50:46 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED 2025-05-26 04:50:46.402922 | orchestrator | 2025-05-26 04:50:46 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:50:46.402956 | orchestrator | 2025-05-26 04:50:46 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:50:49.458138 | orchestrator | 2025-05-26 04:50:49 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:50:49.458288 | orchestrator | 2025-05-26 04:50:49 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED 2025-05-26 04:50:49.458317 | orchestrator | 2025-05-26 04:50:49 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:50:49.459617 | orchestrator | 2025-05-26 04:50:49 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:50:52.558735 | orchestrator | 2025-05-26 04:50:52 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:50:52.559286 | orchestrator | 2025-05-26 04:50:52 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED 2025-05-26 04:50:52.560220 | orchestrator | 2025-05-26 04:50:52 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:50:52.560326 | orchestrator | 2025-05-26 04:50:52 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:50:55.610247 | orchestrator | 2025-05-26 04:50:55 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:50:55.610361 | orchestrator | 2025-05-26 04:50:55 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED 2025-05-26 04:50:55.611196 | orchestrator | 2025-05-26 04:50:55 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:50:55.611332 | orchestrator | 2025-05-26 04:50:55 | INFO  | 
Wait 1 second(s) until the next check 2025-05-26 04:50:58.661619 | orchestrator | 2025-05-26 04:50:58 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:50:58.661983 | orchestrator | 2025-05-26 04:50:58 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state STARTED 2025-05-26 04:50:58.662955 | orchestrator | 2025-05-26 04:50:58 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:50:58.663013 | orchestrator | 2025-05-26 04:50:58 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:51:01.736723 | orchestrator | 2025-05-26 04:51:01 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:51:01.736829 | orchestrator | 2025-05-26 04:51:01 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:51:01.736845 | orchestrator | 2025-05-26 04:51:01 | INFO  | Task d91c59bb-aea6-42f6-b0ae-cef7294c8800 is in state STARTED 2025-05-26 04:51:01.736857 | orchestrator | 2025-05-26 04:51:01 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED 2025-05-26 04:51:01.740412 | orchestrator | 2025-05-26 04:51:01 | INFO  | Task 9f8ac9a3-f676-461e-8045-72236e9bd9ed is in state SUCCESS 2025-05-26 04:51:01.742439 | orchestrator | 2025-05-26 04:51:01.742515 | orchestrator | 2025-05-26 04:51:01.742530 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-05-26 04:51:01.742542 | orchestrator | 2025-05-26 04:51:01.742553 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-05-26 04:51:01.742564 | orchestrator | Monday 26 May 2025 04:48:51 +0000 (0:00:00.415) 0:00:00.415 ************ 2025-05-26 04:51:01.742576 | orchestrator | ok: [testbed-manager] 2025-05-26 04:51:01.742587 | orchestrator | 2025-05-26 04:51:01.742598 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-05-26 04:51:01.742610 
| orchestrator | Monday 26 May 2025 04:48:52 +0000 (0:00:01.154) 0:00:01.570 ************
2025-05-26 04:51:01.742621 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-05-26 04:51:01.742643 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-05-26 04:51:01.742654 | orchestrator | Monday 26 May 2025 04:48:53 +0000 (0:00:01.118) 0:00:02.688 ************
2025-05-26 04:51:01.742664 | orchestrator | changed: [testbed-manager]
2025-05-26 04:51:01.742686 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-05-26 04:51:01.742697 | orchestrator | Monday 26 May 2025 04:48:55 +0000 (0:00:01.614) 0:00:04.302 ************
2025-05-26 04:51:01.742708 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-05-26 04:51:01.742718 | orchestrator | ok: [testbed-manager]
2025-05-26 04:51:01.742740 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-05-26 04:51:01.742751 | orchestrator | Monday 26 May 2025 04:50:14 +0000 (0:01:18.941) 0:01:23.244 ************
2025-05-26 04:51:01.742761 | orchestrator | changed: [testbed-manager]
2025-05-26 04:51:01.742783 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 04:51:01.742794 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 04:51:01.742830 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 04:51:01.742841 | orchestrator | Monday 26 May 2025 04:50:18 +0000 (0:00:04.001) 0:01:27.246 ************
2025-05-26 04:51:01.742851 | orchestrator | ===============================================================================
2025-05-26 04:51:01.742862 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 78.94s
2025-05-26 04:51:01.742873 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.00s
2025-05-26 04:51:01.742884 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.61s
2025-05-26 04:51:01.742895 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.15s
2025-05-26 04:51:01.742906 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.12s
2025-05-26 04:51:01.742975 | orchestrator | PLAY [Apply role common] *******************************************************
2025-05-26 04:51:01.742997 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-26 04:51:01.743008 | orchestrator | Monday 26 May 2025 04:48:22 +0000 (0:00:00.220) 0:00:00.220 ************
2025-05-26 04:51:01.743020 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-26 04:51:01.743047 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-05-26 04:51:01.743059 | orchestrator | Monday 26 May 2025 04:48:23 +0000 (0:00:01.161) 0:00:01.382 ************
2025-05-26 04:51:01.743072 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-26 04:51:01.743085 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-26
04:51:01.743098 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-26 04:51:01.743111 to 04:51:01.743353 | orchestrator | changed: items 'cron', 'fluentd' and 'kolla-toolbox' likewise on testbed-manager and testbed-node-0 through testbed-node-5 (all 21 host/item combinations)
2025-05-26 04:51:01.743379 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-26 04:51:01.743390 | orchestrator | Monday 26 May 2025 04:48:28 +0000 (0:00:04.792) 0:00:06.175 ************
2025-05-26 04:51:01.743401 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-26 04:51:01.743424 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-05-26 04:51:01.743435 | orchestrator | Monday 26 May 2025 04:48:29 +0000 (0:00:01.399) 0:00:07.574 ************
2025-05-26 04:51:01.743450 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-26 04:51:01.743547 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:51:01.743646 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:51:01.743510 to 04:51:01.743818 | orchestrator | changed: the same 'fluentd', 'kolla-toolbox' and 'cron' items (identical payloads per item) on testbed-node-0 through testbed-node-5
2025-05-26 04:51:01.743841 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-05-26 04:51:01.743852 | orchestrator | Monday 26 May 2025 04:48:35 +0000 (0:00:05.407) 0:00:12.982 ************
2025-05-26 04:51:01.743906 to 04:51:01.744154 | orchestrator | skipping: the same 'fluentd', 'kolla-toolbox' and 'cron' items, and the task itself, on testbed-manager and testbed-node-0 through testbed-node-3
2025-05-26 04:51:01.744165 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes':
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-26 04:51:01.744189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.744201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.744212 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:51:01.744223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-26 
04:51:01.744239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.744251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.744262 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:51:01.744273 | orchestrator | 2025-05-26 04:51:01.744284 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-05-26 04:51:01.744295 | orchestrator | Monday 26 May 2025 04:48:36 +0000 (0:00:01.116) 0:00:14.098 ************ 2025-05-26 04:51:01.744306 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}})  2025-05-26 04:51:01.744318 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.744340 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.744352 | orchestrator | skipping: [testbed-manager] 2025-05-26 04:51:01.744363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-26 04:51:01.744374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.744386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.744397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-26 04:51:01.744409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.744420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.744437 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:51:01.744453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-26 04:51:01.744962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.744981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.744991 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:51:01.745001 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:51:01.745011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-26 04:51:01.745025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.745036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.745046 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:51:01.745056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-26 04:51:01.745075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.745098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.745109 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:51:01.745119 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-26 04:51:01.745129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.745139 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.745149 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:51:01.745158 | orchestrator | 2025-05-26 04:51:01.745168 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-05-26 04:51:01.745178 | orchestrator | Monday 26 May 2025 04:48:39 +0000 (0:00:02.935) 0:00:17.034 ************ 2025-05-26 04:51:01.745192 | orchestrator | skipping: [testbed-manager] 2025-05-26 04:51:01.745202 | orchestrator | skipping: 
[testbed-node-0] 2025-05-26 04:51:01.745211 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:51:01.745221 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:51:01.745230 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:51:01.745240 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:51:01.745250 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:51:01.745259 | orchestrator | 2025-05-26 04:51:01.745269 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-05-26 04:51:01.745279 | orchestrator | Monday 26 May 2025 04:48:39 +0000 (0:00:00.843) 0:00:17.877 ************ 2025-05-26 04:51:01.745289 | orchestrator | skipping: [testbed-manager] 2025-05-26 04:51:01.745304 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:51:01.745313 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:51:01.745323 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:51:01.745332 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:51:01.745342 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:51:01.745351 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:51:01.745361 | orchestrator | 2025-05-26 04:51:01.745370 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-05-26 04:51:01.745380 | orchestrator | Monday 26 May 2025 04:48:41 +0000 (0:00:01.043) 0:00:18.921 ************ 2025-05-26 04:51:01.745390 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2025-05-26 04:51:01.745400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.745416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.745427 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.745437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.745452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.745489 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.745501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.745511 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.745521 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.745536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.745547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.745557 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.745571 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.745593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.745605 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.745618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.745635 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.745648 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.745659 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.745670 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.745682 | orchestrator | 2025-05-26 04:51:01.745693 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-05-26 04:51:01.745704 | orchestrator | Monday 26 May 2025 04:48:47 +0000 (0:00:06.366) 0:00:25.287 ************ 2025-05-26 04:51:01.745721 | orchestrator | [WARNING]: Skipped 2025-05-26 04:51:01.745734 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-05-26 04:51:01.745746 | orchestrator | to this access issue: 2025-05-26 04:51:01.745757 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-05-26 04:51:01.745768 | orchestrator | directory 2025-05-26 04:51:01.745780 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-26 04:51:01.745790 | orchestrator | 2025-05-26 04:51:01.745802 | orchestrator | TASK [common : Find custom fluentd 
filter config files] ************************ 2025-05-26 04:51:01.745817 | orchestrator | Monday 26 May 2025 04:48:49 +0000 (0:00:01.789) 0:00:27.077 ************ 2025-05-26 04:51:01.745828 | orchestrator | [WARNING]: Skipped 2025-05-26 04:51:01.745839 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-05-26 04:51:01.745850 | orchestrator | to this access issue: 2025-05-26 04:51:01.745861 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-05-26 04:51:01.745872 | orchestrator | directory 2025-05-26 04:51:01.745883 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-26 04:51:01.745894 | orchestrator | 2025-05-26 04:51:01.745906 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-05-26 04:51:01.745918 | orchestrator | Monday 26 May 2025 04:48:50 +0000 (0:00:00.905) 0:00:27.982 ************ 2025-05-26 04:51:01.745929 | orchestrator | [WARNING]: Skipped 2025-05-26 04:51:01.745939 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-05-26 04:51:01.745949 | orchestrator | to this access issue: 2025-05-26 04:51:01.745958 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-05-26 04:51:01.745968 | orchestrator | directory 2025-05-26 04:51:01.745978 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-26 04:51:01.745987 | orchestrator | 2025-05-26 04:51:01.745997 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-05-26 04:51:01.746007 | orchestrator | Monday 26 May 2025 04:48:50 +0000 (0:00:00.655) 0:00:28.638 ************ 2025-05-26 04:51:01.746094 | orchestrator | [WARNING]: Skipped 2025-05-26 04:51:01.746109 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-05-26 04:51:01.746119 | orchestrator | to this access 
issue: 2025-05-26 04:51:01.746129 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-05-26 04:51:01.746138 | orchestrator | directory 2025-05-26 04:51:01.746148 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-26 04:51:01.746158 | orchestrator | 2025-05-26 04:51:01.746167 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-05-26 04:51:01.746177 | orchestrator | Monday 26 May 2025 04:48:51 +0000 (0:00:00.854) 0:00:29.493 ************ 2025-05-26 04:51:01.746186 | orchestrator | changed: [testbed-manager] 2025-05-26 04:51:01.746196 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:51:01.746206 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:51:01.746215 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:51:01.746225 | orchestrator | changed: [testbed-node-3] 2025-05-26 04:51:01.746234 | orchestrator | changed: [testbed-node-4] 2025-05-26 04:51:01.746244 | orchestrator | changed: [testbed-node-5] 2025-05-26 04:51:01.746253 | orchestrator | 2025-05-26 04:51:01.746263 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-05-26 04:51:01.746273 | orchestrator | Monday 26 May 2025 04:48:56 +0000 (0:00:05.099) 0:00:34.593 ************ 2025-05-26 04:51:01.746283 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-26 04:51:01.746293 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-26 04:51:01.746302 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-26 04:51:01.746325 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-26 04:51:01.746335 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-26 04:51:01.746344 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-26 04:51:01.746354 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-26 04:51:01.746364 | orchestrator | 2025-05-26 04:51:01.746373 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-05-26 04:51:01.746383 | orchestrator | Monday 26 May 2025 04:48:58 +0000 (0:00:02.191) 0:00:36.784 ************ 2025-05-26 04:51:01.746393 | orchestrator | changed: [testbed-manager] 2025-05-26 04:51:01.746402 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:51:01.746412 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:51:01.746421 | orchestrator | changed: [testbed-node-3] 2025-05-26 04:51:01.746431 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:51:01.746440 | orchestrator | changed: [testbed-node-4] 2025-05-26 04:51:01.746450 | orchestrator | changed: [testbed-node-5] 2025-05-26 04:51:01.746477 | orchestrator | 2025-05-26 04:51:01.746487 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-05-26 04:51:01.746497 | orchestrator | Monday 26 May 2025 04:49:01 +0000 (0:00:02.815) 0:00:39.599 ************ 2025-05-26 04:51:01.746507 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.746522 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.746533 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.746543 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.746553 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.746574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.746585 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.746595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.746628 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.746644 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.746655 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.746665 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.746681 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.746697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.746707 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.746717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.746727 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.746737 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.746747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:51:01.746757 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.746774 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.746783 | orchestrator | 2025-05-26 04:51:01.746793 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-05-26 04:51:01.746808 | orchestrator | Monday 26 May 2025 04:49:03 +0000 (0:00:02.160) 0:00:41.760 ************ 2025-05-26 04:51:01.746818 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-26 04:51:01.746828 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-26 04:51:01.746837 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-26 04:51:01.746855 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-26 04:51:01.746865 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-26 04:51:01.746874 | orchestrator | changed: 
[testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-26 04:51:01.746884 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-26 04:51:01.746893 | orchestrator | 2025-05-26 04:51:01.746903 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-05-26 04:51:01.746913 | orchestrator | Monday 26 May 2025 04:49:06 +0000 (0:00:02.310) 0:00:44.070 ************ 2025-05-26 04:51:01.746922 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-26 04:51:01.746932 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-26 04:51:01.746941 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-26 04:51:01.746951 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-26 04:51:01.746960 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-26 04:51:01.746970 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-26 04:51:01.746979 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-26 04:51:01.746989 | orchestrator | 2025-05-26 04:51:01.746998 | orchestrator | TASK [common : Check common containers] **************************************** 2025-05-26 04:51:01.747008 | orchestrator | Monday 26 May 2025 04:49:08 +0000 (0:00:02.345) 0:00:46.415 ************ 2025-05-26 04:51:01.747022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.747033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.747049 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.747059 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.747069 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.747084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.747094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.747104 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.747118 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.747134 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.747144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.747154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.747169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.747179 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-26 04:51:01.747189 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.747200 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.747214 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.747232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:51:01.747243 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:51:01.747253 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:51:01.747263 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:51:01.747273 | orchestrator |
2025-05-26 04:51:01.747288 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-05-26 04:51:01.747298 | orchestrator | Monday 26 May 2025 04:49:12 +0000 (0:00:03.643) 0:00:50.059 ************
2025-05-26 04:51:01.747308 | orchestrator | changed: [testbed-manager]
2025-05-26 04:51:01.747317 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:51:01.747327 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:51:01.747336 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:51:01.747346 | orchestrator | changed: [testbed-node-3]
2025-05-26 04:51:01.747355 | orchestrator | changed: [testbed-node-4]
2025-05-26 04:51:01.747365 | orchestrator | changed: [testbed-node-5]
2025-05-26 04:51:01.747374 | orchestrator |
2025-05-26 04:51:01.747384 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-05-26 04:51:01.747393 | orchestrator | Monday 26 May 2025 04:49:13 +0000 (0:00:01.554) 0:00:51.613 ************
2025-05-26 04:51:01.747403 | orchestrator | changed: [testbed-manager]
2025-05-26 04:51:01.747412 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:51:01.747422 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:51:01.747431 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:51:01.747440 | orchestrator | changed: [testbed-node-3]
2025-05-26 04:51:01.747450 | orchestrator | changed: [testbed-node-4]
2025-05-26 04:51:01.747509 | orchestrator | changed: [testbed-node-5]
2025-05-26 04:51:01.747520 | orchestrator |
2025-05-26 04:51:01.747530 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-26 04:51:01.747540 | orchestrator | Monday 26 May 2025 04:49:14 +0000 (0:00:00.076) 0:00:52.821 ************
2025-05-26 04:51:01.747549 | orchestrator |
2025-05-26 04:51:01.747559 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-26 04:51:01.747578 | orchestrator | Monday 26 May 2025 04:49:14 +0000 (0:00:00.073) 0:00:52.897 ************
2025-05-26 04:51:01.747588 | orchestrator |
2025-05-26 04:51:01.747597 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-26 04:51:01.747607 | orchestrator | Monday 26 May 2025 04:49:15 +0000 (0:00:00.073) 0:00:52.971 ************
2025-05-26 04:51:01.747617 | orchestrator |
2025-05-26 04:51:01.747626 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-26 04:51:01.747634 | orchestrator | Monday 26 May 2025 04:49:15 +0000 (0:00:00.254) 0:00:53.225 ************
2025-05-26 04:51:01.747641 | orchestrator |
2025-05-26 04:51:01.747649 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-26 04:51:01.747657 | orchestrator | Monday 26 May 2025 04:49:15 +0000 (0:00:00.069) 0:00:53.295 ************
2025-05-26 04:51:01.747665 | orchestrator |
2025-05-26 04:51:01.747673 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-26 04:51:01.747680 | orchestrator | Monday 26 May 2025 04:49:15 +0000 (0:00:00.088) 0:00:53.383 ************
2025-05-26 04:51:01.747688 | orchestrator |
2025-05-26 04:51:01.747696 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-26 04:51:01.747708 | orchestrator | Monday 26 May 2025 04:49:15 +0000 (0:00:00.108) 0:00:53.492 ************
2025-05-26 04:51:01.747716 | orchestrator |
2025-05-26 04:51:01.747724 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-05-26 04:51:01.747732 | orchestrator | Monday 26 May 2025 04:49:15 +0000 (0:00:00.104) 0:00:53.597 ************
2025-05-26 04:51:01.747739 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:51:01.747747 | orchestrator | changed: [testbed-manager]
2025-05-26 04:51:01.747755 | orchestrator | changed: [testbed-node-5]
2025-05-26 04:51:01.747763 | orchestrator | changed: [testbed-node-4]
2025-05-26 04:51:01.747771 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:51:01.747778 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:51:01.747786 | orchestrator | changed: [testbed-node-3]
2025-05-26 04:51:01.747794 | orchestrator |
2025-05-26 04:51:01.747801 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-05-26 04:51:01.747809 | orchestrator | Monday 26 May 2025 04:50:08 +0000 (0:00:52.644) 0:01:46.241 ************
2025-05-26 04:51:01.747817 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:51:01.747825 | orchestrator | changed: [testbed-node-5]
2025-05-26 04:51:01.747833 | orchestrator | changed: [testbed-node-4]
2025-05-26 04:51:01.747840 | orchestrator | changed: [testbed-node-3]
2025-05-26 04:51:01.747848 | orchestrator | changed: [testbed-manager]
2025-05-26 04:51:01.747856 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:51:01.747863 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:51:01.747871 | orchestrator |
2025-05-26 04:51:01.747879 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-05-26 04:51:01.747887 | orchestrator | Monday 26 May 2025 04:50:47 +0000 (0:00:39.119) 0:02:25.361 ************
2025-05-26 04:51:01.747894 | orchestrator | ok: [testbed-manager]
2025-05-26 04:51:01.747902 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:51:01.747910 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:51:01.747918 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:51:01.747925 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:51:01.747933 | orchestrator | ok: [testbed-node-4]
2025-05-26 04:51:01.747941 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:51:01.747948 | orchestrator |
2025-05-26 04:51:01.747956 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-05-26 04:51:01.747964 | orchestrator | Monday 26 May 2025 04:50:49 +0000 (0:00:02.102) 0:02:27.463 ************
2025-05-26 04:51:01.747972 | orchestrator | changed: [testbed-manager]
2025-05-26 04:51:01.747980 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:51:01.747988 | orchestrator | changed: [testbed-node-4]
2025-05-26 04:51:01.747995 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:51:01.748003 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:51:01.748016 | orchestrator | changed: [testbed-node-5]
2025-05-26 04:51:01.748023 | orchestrator | changed: [testbed-node-3]
2025-05-26 04:51:01.748031 | orchestrator |
2025-05-26 04:51:01.748039 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 04:51:01.748047 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-05-26 04:51:01.748056 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-05-26 04:51:01.748068 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-05-26 04:51:01.748077 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-05-26 04:51:01.748085 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-05-26 04:51:01.748092 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-05-26 04:51:01.748100 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-05-26 04:51:01.748108 | orchestrator |
2025-05-26 04:51:01.748116 | orchestrator |
2025-05-26 04:51:01.748124 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 04:51:01.748132 | orchestrator | Monday 26 May 2025 04:50:58 +0000 (0:00:09.356) 0:02:36.820 ************
2025-05-26 04:51:01.748140 | orchestrator | ===============================================================================
2025-05-26 04:51:01.748148 | orchestrator | common : Restart fluentd container ------------------------------------- 52.64s
2025-05-26 04:51:01.748156 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 39.12s
2025-05-26 04:51:01.748164 | orchestrator | common : Restart cron container ----------------------------------------- 9.36s
2025-05-26 04:51:01.748171 | orchestrator | common : Copying over config.json files for services -------------------- 6.37s
2025-05-26 04:51:01.748179 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.41s
2025-05-26 04:51:01.748187 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.10s
2025-05-26 04:51:01.748195 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.79s
2025-05-26 04:51:01.748203 | orchestrator | common : Check common containers ---------------------------------------- 3.64s
2025-05-26 04:51:01.748211 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.94s
2025-05-26 04:51:01.748219 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.82s
2025-05-26 04:51:01.748226 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.35s
2025-05-26 04:51:01.748238 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.31s
2025-05-26 04:51:01.748246 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.19s
2025-05-26 04:51:01.748254 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.16s
2025-05-26 04:51:01.748261 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.10s
2025-05-26 04:51:01.748269 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.79s
2025-05-26 04:51:01.748277 | orchestrator | common : Creating log volume -------------------------------------------- 1.55s
2025-05-26 04:51:01.748285 | orchestrator | common : include_tasks -------------------------------------------------- 1.40s
2025-05-26 04:51:01.748293 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.21s
2025-05-26 04:51:01.748305 | orchestrator | common : include_tasks -------------------------------------------------- 1.16s
2025-05-26 04:51:01.748313 | orchestrator | 2025-05-26 04:51:01 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:51:01.748321 | orchestrator | 2025-05-26 04:51:01 | INFO  | Task 02923663-fc00-44ed-bd10-c9fadcebfe58 is in state STARTED
2025-05-26 04:51:01.748329 | orchestrator | 2025-05-26 04:51:01 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:51:04.783397 | orchestrator | 2025-05-26 04:51:04 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED
2025-05-26 04:51:04.785388 | orchestrator | 2025-05-26 04:51:04 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:51:04.785683 | orchestrator | 2025-05-26 04:51:04 | INFO  | Task d91c59bb-aea6-42f6-b0ae-cef7294c8800 is in state STARTED
2025-05-26 04:51:04.786510 | orchestrator | 2025-05-26 04:51:04 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED
2025-05-26 04:51:04.786957 | orchestrator | 2025-05-26 04:51:04 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:51:04.787339 | orchestrator | 2025-05-26 04:51:04 | INFO  | Task 02923663-fc00-44ed-bd10-c9fadcebfe58 is in state STARTED
2025-05-26 04:51:04.787362 | orchestrator | 2025-05-26 04:51:04 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:51:07.828301 | orchestrator | 2025-05-26 04:51:07 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED
2025-05-26 04:51:07.830993 | orchestrator | 2025-05-26 04:51:07 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:51:07.833280 | orchestrator | 2025-05-26 04:51:07 | INFO  | Task d91c59bb-aea6-42f6-b0ae-cef7294c8800 is in state STARTED
2025-05-26 04:51:07.842058 | orchestrator | 2025-05-26 04:51:07 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED
2025-05-26 04:51:07.842701 | orchestrator | 2025-05-26 04:51:07 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:51:07.844678 | orchestrator | 2025-05-26 04:51:07 | INFO  | Task 02923663-fc00-44ed-bd10-c9fadcebfe58 is in state STARTED
2025-05-26 04:51:07.844715 | orchestrator | 2025-05-26 04:51:07 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:51:10.872810 | orchestrator | 2025-05-26 04:51:10 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED
2025-05-26 04:51:10.872907 | orchestrator | 2025-05-26 04:51:10 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:51:10.873278 | orchestrator | 2025-05-26 04:51:10 | INFO  | Task d91c59bb-aea6-42f6-b0ae-cef7294c8800 is in state STARTED
2025-05-26 04:51:10.873949 | orchestrator | 2025-05-26 04:51:10 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED
2025-05-26 04:51:10.874488 | orchestrator | 2025-05-26 04:51:10 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:51:10.875382 | orchestrator | 2025-05-26 04:51:10 | INFO  | Task 02923663-fc00-44ed-bd10-c9fadcebfe58 is in state STARTED
2025-05-26 04:51:10.875482 | orchestrator | 2025-05-26 04:51:10 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:51:13.912598 | orchestrator | 2025-05-26 04:51:13 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED
2025-05-26 04:51:13.912689 | orchestrator | 2025-05-26 04:51:13 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:51:13.913824 | orchestrator | 2025-05-26 04:51:13 | INFO  | Task d91c59bb-aea6-42f6-b0ae-cef7294c8800 is in state STARTED
2025-05-26 04:51:13.914379 | orchestrator | 2025-05-26 04:51:13 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED
2025-05-26 04:51:13.915156 | orchestrator | 2025-05-26 04:51:13 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:51:13.915851 | orchestrator | 2025-05-26 04:51:13 | INFO  | Task 02923663-fc00-44ed-bd10-c9fadcebfe58 is in state STARTED
2025-05-26 04:51:13.915885 | orchestrator | 2025-05-26 04:51:13 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:51:16.953889 | orchestrator | 2025-05-26 04:51:16 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED
2025-05-26 04:51:16.954687 | orchestrator | 2025-05-26 04:51:16 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:51:16.954921 | orchestrator | 2025-05-26 04:51:16 | INFO  | Task d91c59bb-aea6-42f6-b0ae-cef7294c8800 is in state STARTED
2025-05-26 04:51:16.955863 | orchestrator | 2025-05-26 04:51:16 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED
2025-05-26 04:51:16.956597 | orchestrator | 2025-05-26 04:51:16 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:51:16.959682 | orchestrator | 2025-05-26 04:51:16 | INFO  | Task 02923663-fc00-44ed-bd10-c9fadcebfe58 is in state STARTED
2025-05-26 04:51:16.959733 | orchestrator | 2025-05-26 04:51:16 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:51:20.027164 | orchestrator | 2025-05-26 04:51:20 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED
2025-05-26 04:51:20.029754 | orchestrator | 2025-05-26 04:51:20 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:51:20.030800 | orchestrator | 2025-05-26 04:51:20 | INFO  | Task d91c59bb-aea6-42f6-b0ae-cef7294c8800 is in state STARTED
2025-05-26 04:51:20.032878 | orchestrator | 2025-05-26 04:51:20 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED
2025-05-26 04:51:20.033684 | orchestrator | 2025-05-26 04:51:20 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED
2025-05-26 04:51:20.035159 | orchestrator | 2025-05-26 04:51:20 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:51:20.035505 | orchestrator | 2025-05-26 04:51:20 | INFO  | Task 02923663-fc00-44ed-bd10-c9fadcebfe58 is in state SUCCESS
2025-05-26 04:51:20.035798 | orchestrator | 2025-05-26 04:51:20 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:51:23.077887 | orchestrator | 2025-05-26 04:51:23 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED
2025-05-26 04:51:23.079368 | orchestrator | 2025-05-26 04:51:23 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:51:23.081264 | orchestrator | 2025-05-26 04:51:23 | INFO  | Task d91c59bb-aea6-42f6-b0ae-cef7294c8800 is in state STARTED
2025-05-26 04:51:23.083399 | orchestrator | 2025-05-26 04:51:23 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED
2025-05-26 04:51:23.086129 | orchestrator | 2025-05-26 04:51:23 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED
2025-05-26 04:51:23.091412 | orchestrator | 2025-05-26 04:51:23 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:51:23.091510 | orchestrator | 2025-05-26 04:51:23 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:51:26.135653 | orchestrator | 2025-05-26 04:51:26 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED
2025-05-26 04:51:26.135797 | orchestrator | 2025-05-26 04:51:26 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:51:26.135882 | orchestrator | 2025-05-26 04:51:26 | INFO  | Task d91c59bb-aea6-42f6-b0ae-cef7294c8800 is in state STARTED
2025-05-26 04:51:26.136842 | orchestrator | 2025-05-26 04:51:26 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED
2025-05-26 04:51:26.138617 | orchestrator | 2025-05-26 04:51:26 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED
2025-05-26 04:51:26.139523 | orchestrator | 2025-05-26 04:51:26 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:51:26.139564 | orchestrator | 2025-05-26 04:51:26 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:51:29.195712 | orchestrator | 2025-05-26 04:51:29 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED
2025-05-26 04:51:29.196772 | orchestrator | 2025-05-26 04:51:29 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:51:29.198757 | orchestrator | 2025-05-26 04:51:29 | INFO  | Task d91c59bb-aea6-42f6-b0ae-cef7294c8800 is in state STARTED
2025-05-26 04:51:29.204624 | orchestrator | 2025-05-26 04:51:29 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED
2025-05-26 04:51:29.204731 | orchestrator | 2025-05-26 04:51:29 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED
2025-05-26 04:51:29.206553 | orchestrator | 2025-05-26 04:51:29 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:51:29.206611 | orchestrator | 2025-05-26 04:51:29 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:51:32.249066 | orchestrator | 2025-05-26 04:51:32 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED
2025-05-26 04:51:32.249252 | orchestrator | 2025-05-26 04:51:32 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:51:32.249343 | orchestrator | 2025-05-26 04:51:32 | INFO  | Task d91c59bb-aea6-42f6-b0ae-cef7294c8800 is in state STARTED
2025-05-26 04:51:32.250492 | orchestrator | 2025-05-26 04:51:32 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED
2025-05-26 04:51:32.252638 | orchestrator | 2025-05-26 04:51:32 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED
2025-05-26 04:51:32.254884 | orchestrator | 2025-05-26 04:51:32 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:51:32.254978 | orchestrator | 2025-05-26 04:51:32 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:51:35.290358 | orchestrator | 2025-05-26 04:51:35 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED
2025-05-26 04:51:35.290990 | orchestrator | 2025-05-26 04:51:35 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:51:35.296520 | orchestrator |
2025-05-26 04:51:35.296614 | orchestrator |
2025-05-26 04:51:35.296639 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-26 04:51:35.296660 | orchestrator |
2025-05-26 04:51:35.296680 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-26 04:51:35.296700 | orchestrator | Monday 26 May 2025 04:51:06 +0000 (0:00:00.500) 0:00:00.500 ************
2025-05-26 04:51:35.296721 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:51:35.296741 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:51:35.296761 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:51:35.296779 | orchestrator |
2025-05-26 04:51:35.296798 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-26 04:51:35.296817 | orchestrator | Monday 26 May 2025 04:51:07 +0000 (0:00:00.490) 0:00:00.990 ************
2025-05-26 04:51:35.296837 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-05-26 04:51:35.296856 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-05-26 04:51:35.296876 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-05-26 04:51:35.296922 | orchestrator |
2025-05-26 04:51:35.296942 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-05-26 04:51:35.296961 | orchestrator |
2025-05-26 04:51:35.296979 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-05-26 04:51:35.296998 | orchestrator | Monday 26 May 2025 04:51:08 +0000 (0:00:00.963) 0:00:01.954 ************
2025-05-26 04:51:35.297017 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 04:51:35.297036 | orchestrator |
2025-05-26 04:51:35.297055 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-05-26 04:51:35.297072 | orchestrator | Monday 26 May 2025 04:51:09 +0000 (0:00:00.910) 0:00:02.865 ************
2025-05-26 04:51:35.297090 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-05-26 04:51:35.297109 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-05-26 04:51:35.297128 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-05-26 04:51:35.297147 | orchestrator |
2025-05-26 04:51:35.297166 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-05-26 04:51:35.297183 | orchestrator | Monday 26 May 2025 04:51:09 +0000 (0:00:00.726) 0:00:03.591 ************
2025-05-26 04:51:35.297202 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-05-26 04:51:35.297221 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-05-26 04:51:35.297240 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-05-26 04:51:35.297259 | orchestrator |
2025-05-26 04:51:35.297279 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-05-26 04:51:35.297297 | orchestrator | Monday 26 May 2025 04:51:12 +0000 (0:00:02.598) 0:00:06.189 ************
2025-05-26 04:51:35.297313 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:51:35.297324 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:51:35.297334 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:51:35.297345 | orchestrator |
2025-05-26 04:51:35.297356 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-05-26 04:51:35.297366 | orchestrator | Monday 26 May 2025 04:51:15 +0000 (0:00:02.593) 0:00:08.783 ************
2025-05-26 04:51:35.297377 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:51:35.297387 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:51:35.297398 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:51:35.297408 | orchestrator |
2025-05-26 04:51:35.297419 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 04:51:35.297430 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 04:51:35.297487 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 04:51:35.297512 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 04:51:35.297523 | orchestrator |
2025-05-26 04:51:35.297534 | orchestrator |
2025-05-26 04:51:35.297545 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 04:51:35.297555 | orchestrator | Monday 26 May 2025 04:51:17 +0000 (0:00:02.579) 0:00:11.362 ************
2025-05-26 04:51:35.297566 | orchestrator | ===============================================================================
2025-05-26 04:51:35.297576 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.60s
2025-05-26 04:51:35.297587 | orchestrator | memcached : Check memcached container ----------------------------------- 2.59s
2025-05-26 04:51:35.297597 | orchestrator | memcached : Restart memcached container --------------------------------- 2.58s
2025-05-26 04:51:35.297609 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.96s
2025-05-26 04:51:35.297619 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.91s
2025-05-26 04:51:35.297639 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.73s
2025-05-26 04:51:35.297650 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s
2025-05-26 04:51:35.297661 | orchestrator |
2025-05-26 04:51:35.297671 | orchestrator |
2025-05-26 04:51:35.297682 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-26 04:51:35.297693 | orchestrator |
2025-05-26 04:51:35.297711 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-26 04:51:35.297730 | orchestrator | Monday 26 May 2025 04:51:06 +0000 (0:00:00.526) 0:00:00.526 ************
2025-05-26 04:51:35.297748 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:51:35.297768 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:51:35.297786 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:51:35.297806 | orchestrator |
2025-05-26 04:51:35.297824 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-26 04:51:35.297862 | orchestrator | Monday 26 May 2025 04:51:07 +0000 (0:00:00.529) 0:00:01.056 ************
2025-05-26 04:51:35.297879 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-05-26 04:51:35.297894 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-05-26 04:51:35.297911 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-05-26 04:51:35.297927 | orchestrator |
2025-05-26 04:51:35.297944 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-05-26 04:51:35.297959 | orchestrator |
2025-05-26 04:51:35.297975 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-05-26 04:51:35.297991 | orchestrator | Monday 26 May 2025 04:51:07 +0000 (0:00:00.504) 0:00:01.560 ************
2025-05-26 04:51:35.298008 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 04:51:35.298133 | orchestrator |
2025-05-26 04:51:35.298152 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-05-26 04:51:35.298170 | orchestrator | Monday 26 May 2025 04:51:08 +0000 (0:00:00.879) 0:00:02.440 ************
2025-05-26 04:51:35.298191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-26 04:51:35.298217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-26 04:51:35.298235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-26 04:51:35.298253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-26 04:51:35.298286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-26 04:51:35.298330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-26 04:51:35.298350 | orchestrator |
2025-05-26 04:51:35.298367 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-05-26 04:51:35.298383 | orchestrator | Monday 26 May 2025 04:51:10 +0000 (0:00:01.704) 0:00:04.144 ************
2025-05-26 04:51:35.298400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-26 04:51:35.298419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-26 04:51:35.298503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-26 04:51:35.298532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-26 04:51:35.298562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-26 04:51:35.298591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-26 04:51:35.298605 | orchestrator |
2025-05-26 04:51:35.298619 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-05-26 04:51:35.298636 | orchestrator | Monday 26 May 2025 04:51:13 +0000 (0:00:03.573) 0:00:07.717 ************
2025-05-26 04:51:35.298656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-26 04:51:35.298673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-26 04:51:35.298690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-26 04:51:35.298707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-26 04:51:35.298744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-26 04:51:35.298763 | orchestrator | changed: [testbed-node-2] => (item={'key':
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-26 04:51:35.298781 | orchestrator | 2025-05-26 04:51:35.298807 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-26 04:51:35.298824 | orchestrator | Monday 26 May 2025 04:51:16 +0000 (0:00:02.915) 0:00:10.633 ************ 2025-05-26 04:51:35.298842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-26 04:51:35.298863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-26 04:51:35.298881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-26 04:51:35.298899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-26 04:51:35.298937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-26 04:51:35.298955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-26 04:51:35.298972 | orchestrator | 2025-05-26 04:51:35.298989 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-26 04:51:35.299005 | orchestrator | Monday 26 May 2025 04:51:18 +0000 (0:00:01.853) 0:00:12.487 ************ 2025-05-26 04:51:35.299022 | orchestrator | 2025-05-26 04:51:35.299039 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-26 04:51:35.299063 | orchestrator | Monday 26 May 2025 04:51:18 +0000 (0:00:00.089) 0:00:12.576 ************ 2025-05-26 04:51:35.299079 | orchestrator | 2025-05-26 04:51:35.299095 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-26 04:51:35.299112 | orchestrator | Monday 26 May 2025 04:51:18 +0000 (0:00:00.181) 0:00:12.758 ************ 2025-05-26 04:51:35.299127 | orchestrator | 2025-05-26 04:51:35.299143 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-05-26 04:51:35.299159 | orchestrator | Monday 26 May 2025 04:51:19 +0000 (0:00:00.207) 0:00:12.966 ************ 
2025-05-26 04:51:35.299175 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:51:35.299191 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:51:35.299207 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:51:35.299223 | orchestrator | 2025-05-26 04:51:35.299239 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-05-26 04:51:35.299255 | orchestrator | Monday 26 May 2025 04:51:24 +0000 (0:00:05.755) 0:00:18.721 ************ 2025-05-26 04:51:35.299271 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:51:35.299287 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:51:35.299303 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:51:35.299319 | orchestrator | 2025-05-26 04:51:35.299335 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-26 04:51:35.299353 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-26 04:51:35.299369 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-26 04:51:35.299396 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-26 04:51:35.299412 | orchestrator | 2025-05-26 04:51:35.299427 | orchestrator | 2025-05-26 04:51:35.299473 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-26 04:51:35.299490 | orchestrator | Monday 26 May 2025 04:51:34 +0000 (0:00:09.676) 0:00:28.398 ************ 2025-05-26 04:51:35.299506 | orchestrator | =============================================================================== 2025-05-26 04:51:35.299523 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.68s 2025-05-26 04:51:35.299539 | orchestrator | redis : Restart redis container ----------------------------------------- 5.76s 2025-05-26 04:51:35.299555 | 
orchestrator | redis : Copying over default config.json files -------------------------- 3.57s 2025-05-26 04:51:35.299571 | orchestrator | redis : Copying over redis config files --------------------------------- 2.92s 2025-05-26 04:51:35.299588 | orchestrator | redis : Check redis containers ------------------------------------------ 1.85s 2025-05-26 04:51:35.299603 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.70s 2025-05-26 04:51:35.299620 | orchestrator | redis : include_tasks --------------------------------------------------- 0.88s 2025-05-26 04:51:35.299637 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.53s 2025-05-26 04:51:35.299654 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s 2025-05-26 04:51:35.299675 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.48s 2025-05-26 04:51:35.299692 | orchestrator | 2025-05-26 04:51:35 | INFO  | Task d91c59bb-aea6-42f6-b0ae-cef7294c8800 is in state SUCCESS 2025-05-26 04:51:35.299720 | orchestrator | 2025-05-26 04:51:35 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:51:35.299749 | orchestrator | 2025-05-26 04:51:35 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED 2025-05-26 04:51:35.299792 | orchestrator | 2025-05-26 04:51:35 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:51:35.299817 | orchestrator | 2025-05-26 04:51:35 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:51:38.333897 | orchestrator | 2025-05-26 04:51:38 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:51:38.335353 | orchestrator | 2025-05-26 04:51:38 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:51:38.335382 | orchestrator | 2025-05-26 04:51:38 | INFO  | Task 
d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:51:38.335652 | orchestrator | 2025-05-26 04:51:38 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED 2025-05-26 04:51:38.338388 | orchestrator | 2025-05-26 04:51:38 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:51:38.338435 | orchestrator | 2025-05-26 04:51:38 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:51:41.378292 | orchestrator | 2025-05-26 04:51:41 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:51:41.385241 | orchestrator | 2025-05-26 04:51:41 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:51:41.385311 | orchestrator | 2025-05-26 04:51:41 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:51:41.385325 | orchestrator | 2025-05-26 04:51:41 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED 2025-05-26 04:51:41.386184 | orchestrator | 2025-05-26 04:51:41 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:51:41.386256 | orchestrator | 2025-05-26 04:51:41 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:51:44.437074 | orchestrator | 2025-05-26 04:51:44 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:51:44.437190 | orchestrator | 2025-05-26 04:51:44 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:51:44.437806 | orchestrator | 2025-05-26 04:51:44 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:51:44.438250 | orchestrator | 2025-05-26 04:51:44 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED 2025-05-26 04:51:44.439071 | orchestrator | 2025-05-26 04:51:44 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:51:44.439089 | orchestrator | 2025-05-26 04:51:44 | INFO  | Wait 1 
second(s) until the next check 2025-05-26 04:51:47.493079 | orchestrator | 2025-05-26 04:51:47 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:51:47.493256 | orchestrator | 2025-05-26 04:51:47 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:51:47.493275 | orchestrator | 2025-05-26 04:51:47 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:51:47.493287 | orchestrator | 2025-05-26 04:51:47 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED 2025-05-26 04:51:47.493299 | orchestrator | 2025-05-26 04:51:47 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:51:47.493310 | orchestrator | 2025-05-26 04:51:47 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:51:50.525570 | orchestrator | 2025-05-26 04:51:50 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:51:50.527107 | orchestrator | 2025-05-26 04:51:50 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:51:50.528837 | orchestrator | 2025-05-26 04:51:50 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:51:50.528916 | orchestrator | 2025-05-26 04:51:50 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED 2025-05-26 04:51:50.529902 | orchestrator | 2025-05-26 04:51:50 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:51:50.529931 | orchestrator | 2025-05-26 04:51:50 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:51:53.696096 | orchestrator | 2025-05-26 04:51:53 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:51:53.696216 | orchestrator | 2025-05-26 04:51:53 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:51:53.696234 | orchestrator | 2025-05-26 04:51:53 | INFO  | Task 
d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:51:53.696267 | orchestrator | 2025-05-26 04:51:53 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED 2025-05-26 04:51:53.696730 | orchestrator | 2025-05-26 04:51:53 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:51:53.697049 | orchestrator | 2025-05-26 04:51:53 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:51:56.746869 | orchestrator | 2025-05-26 04:51:56 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:51:56.747757 | orchestrator | 2025-05-26 04:51:56 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:51:56.750787 | orchestrator | 2025-05-26 04:51:56 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:51:56.752813 | orchestrator | 2025-05-26 04:51:56 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED 2025-05-26 04:51:56.754294 | orchestrator | 2025-05-26 04:51:56 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:51:56.754805 | orchestrator | 2025-05-26 04:51:56 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:51:59.795847 | orchestrator | 2025-05-26 04:51:59 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:51:59.797140 | orchestrator | 2025-05-26 04:51:59 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:51:59.799525 | orchestrator | 2025-05-26 04:51:59 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:51:59.803183 | orchestrator | 2025-05-26 04:51:59 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED 2025-05-26 04:51:59.803224 | orchestrator | 2025-05-26 04:51:59 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:51:59.803292 | orchestrator | 2025-05-26 04:51:59 | INFO  | Wait 1 
second(s) until the next check 2025-05-26 04:52:02.842957 | orchestrator | 2025-05-26 04:52:02 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:52:02.843057 | orchestrator | 2025-05-26 04:52:02 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:52:02.843081 | orchestrator | 2025-05-26 04:52:02 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:52:02.843093 | orchestrator | 2025-05-26 04:52:02 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED 2025-05-26 04:52:02.843673 | orchestrator | 2025-05-26 04:52:02 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:52:02.843696 | orchestrator | 2025-05-26 04:52:02 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:52:05.889182 | orchestrator | 2025-05-26 04:52:05 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:52:05.889507 | orchestrator | 2025-05-26 04:52:05 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:52:05.892222 | orchestrator | 2025-05-26 04:52:05 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:52:05.894585 | orchestrator | 2025-05-26 04:52:05 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED 2025-05-26 04:52:05.896182 | orchestrator | 2025-05-26 04:52:05 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:52:05.896373 | orchestrator | 2025-05-26 04:52:05 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:52:08.940801 | orchestrator | 2025-05-26 04:52:08 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:52:08.944169 | orchestrator | 2025-05-26 04:52:08 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:52:08.944213 | orchestrator | 2025-05-26 04:52:08 | INFO  | Task 
d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:52:08.944379 | orchestrator | 2025-05-26 04:52:08 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state STARTED 2025-05-26 04:52:08.945330 | orchestrator | 2025-05-26 04:52:08 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED 2025-05-26 04:52:08.945359 | orchestrator | 2025-05-26 04:52:08 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:52:11.996011 | orchestrator | 2025-05-26 04:52:11 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:52:11.997507 | orchestrator | 2025-05-26 04:52:11 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:52:11.999162 | orchestrator | 2025-05-26 04:52:11 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:52:12.001609 | orchestrator | 2025-05-26 04:52:12 | INFO  | Task ba9cb9e9-a8c1-4b29-bb3a-e0353c2a2aa7 is in state SUCCESS 2025-05-26 04:52:12.004213 | orchestrator | 2025-05-26 04:52:12.004254 | orchestrator | 2025-05-26 04:52:12.004267 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-26 04:52:12.004280 | orchestrator | 2025-05-26 04:52:12.004291 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-26 04:52:12.004303 | orchestrator | Monday 26 May 2025 04:51:06 +0000 (0:00:00.223) 0:00:00.223 ************ 2025-05-26 04:52:12.004314 | orchestrator | ok: [testbed-node-0] 2025-05-26 04:52:12.004326 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:52:12.004336 | orchestrator | ok: [testbed-node-2] 2025-05-26 04:52:12.004347 | orchestrator | ok: [testbed-node-3] 2025-05-26 04:52:12.004357 | orchestrator | ok: [testbed-node-4] 2025-05-26 04:52:12.004368 | orchestrator | ok: [testbed-node-5] 2025-05-26 04:52:12.004378 | orchestrator | 2025-05-26 04:52:12.004389 | orchestrator | TASK [Group hosts based on enabled 
services] *********************************** 2025-05-26 04:52:12.004400 | orchestrator | Monday 26 May 2025 04:51:06 +0000 (0:00:00.729) 0:00:00.952 ************ 2025-05-26 04:52:12.004411 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-26 04:52:12.004422 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-26 04:52:12.004432 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-26 04:52:12.004518 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-26 04:52:12.004529 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-26 04:52:12.004540 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-26 04:52:12.004550 | orchestrator | 2025-05-26 04:52:12.004561 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-05-26 04:52:12.004572 | orchestrator | 2025-05-26 04:52:12.004583 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-05-26 04:52:12.004594 | orchestrator | Monday 26 May 2025 04:51:07 +0000 (0:00:01.059) 0:00:02.012 ************ 2025-05-26 04:52:12.004605 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-26 04:52:12.004617 | orchestrator | 2025-05-26 04:52:12.004628 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-26 04:52:12.004639 | orchestrator | Monday 26 May 2025 04:51:09 +0000 (0:00:01.791) 0:00:03.804 ************ 2025-05-26 04:52:12.004649 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-26 04:52:12.004660 | orchestrator | changed: [testbed-node-1] => 
(item=openvswitch) 2025-05-26 04:52:12.004671 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-26 04:52:12.004682 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-26 04:52:12.004692 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-26 04:52:12.004703 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-26 04:52:12.004713 | orchestrator | 2025-05-26 04:52:12.004724 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-26 04:52:12.004735 | orchestrator | Monday 26 May 2025 04:51:12 +0000 (0:00:02.363) 0:00:06.167 ************ 2025-05-26 04:52:12.004746 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-26 04:52:12.004757 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-26 04:52:12.004767 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-26 04:52:12.004778 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-26 04:52:12.004789 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-26 04:52:12.004820 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-26 04:52:12.004834 | orchestrator | 2025-05-26 04:52:12.004846 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-26 04:52:12.004860 | orchestrator | Monday 26 May 2025 04:51:14 +0000 (0:00:02.360) 0:00:08.527 ************ 2025-05-26 04:52:12.004872 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-05-26 04:52:12.004885 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:52:12.004898 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-05-26 04:52:12.004911 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:52:12.004923 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-05-26 04:52:12.004935 | orchestrator | skipping: 
[testbed-node-2] 2025-05-26 04:52:12.004949 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-05-26 04:52:12.004961 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:52:12.004973 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-05-26 04:52:12.004986 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:52:12.004999 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-05-26 04:52:12.005011 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:52:12.005023 | orchestrator | 2025-05-26 04:52:12.005036 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-05-26 04:52:12.005049 | orchestrator | Monday 26 May 2025 04:51:16 +0000 (0:00:01.755) 0:00:10.283 ************ 2025-05-26 04:52:12.005061 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:52:12.005074 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:52:12.005086 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:52:12.005099 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:52:12.005111 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:52:12.005124 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:52:12.005137 | orchestrator | 2025-05-26 04:52:12.005149 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-05-26 04:52:12.005162 | orchestrator | Monday 26 May 2025 04:51:16 +0000 (0:00:00.696) 0:00:10.979 ************ 2025-05-26 04:52:12.005204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005252 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': 
True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005300 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005323 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005341 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005375 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005387 | orchestrator | 2025-05-26 04:52:12.005399 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-05-26 04:52:12.005410 | orchestrator | Monday 26 May 2025 04:51:18 +0000 (0:00:01.963) 0:00:12.942 ************ 2025-05-26 04:52:12.005422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005480 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': 
True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005532 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 
'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005567 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005590 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005613 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005625 | orchestrator | 2025-05-26 04:52:12.005636 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-05-26 04:52:12.005647 | orchestrator | Monday 26 May 2025 04:51:23 +0000 (0:00:04.940) 0:00:17.882 ************ 2025-05-26 04:52:12.005658 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:52:12.005669 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:52:12.005680 | 
orchestrator | skipping: [testbed-node-2] 2025-05-26 04:52:12.005697 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:52:12.005708 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:52:12.005718 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:52:12.005729 | orchestrator | 2025-05-26 04:52:12.005740 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-05-26 04:52:12.005751 | orchestrator | Monday 26 May 2025 04:51:25 +0000 (0:00:01.787) 0:00:19.670 ************ 2025-05-26 04:52:12.005762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005797 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005862 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005891 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005937 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005960 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-26 04:52:12.005993 | orchestrator | 2025-05-26 04:52:12.006004 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-26 04:52:12.006063 | orchestrator | Monday 26 May 2025 04:51:28 +0000 (0:00:03.206) 0:00:22.879 ************ 2025-05-26 04:52:12.006077 | orchestrator | 2025-05-26 04:52:12.006088 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-26 04:52:12.006132 | orchestrator | Monday 26 May 2025 04:51:28 +0000 (0:00:00.112) 0:00:22.991 ************ 2025-05-26 04:52:12.006144 | orchestrator | 
2025-05-26 04:52:12.006155 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-26 04:52:12.006166 | orchestrator | Monday 26 May 2025 04:51:28 +0000 (0:00:00.105) 0:00:23.097 ************ 2025-05-26 04:52:12.006176 | orchestrator | 2025-05-26 04:52:12.006187 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-26 04:52:12.006198 | orchestrator | Monday 26 May 2025 04:51:29 +0000 (0:00:00.131) 0:00:23.228 ************ 2025-05-26 04:52:12.006209 | orchestrator | 2025-05-26 04:52:12.006220 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-26 04:52:12.006230 | orchestrator | Monday 26 May 2025 04:51:29 +0000 (0:00:00.109) 0:00:23.338 ************ 2025-05-26 04:52:12.006241 | orchestrator | 2025-05-26 04:52:12.006252 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-26 04:52:12.006262 | orchestrator | Monday 26 May 2025 04:51:29 +0000 (0:00:00.106) 0:00:23.445 ************ 2025-05-26 04:52:12.006273 | orchestrator | 2025-05-26 04:52:12.006284 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-05-26 04:52:12.006295 | orchestrator | Monday 26 May 2025 04:51:29 +0000 (0:00:00.209) 0:00:23.654 ************ 2025-05-26 04:52:12.006306 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:52:12.006317 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:52:12.006328 | orchestrator | changed: [testbed-node-4] 2025-05-26 04:52:12.006338 | orchestrator | changed: [testbed-node-5] 2025-05-26 04:52:12.006349 | orchestrator | changed: [testbed-node-3] 2025-05-26 04:52:12.006359 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:52:12.006370 | orchestrator | 2025-05-26 04:52:12.006381 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-05-26 
04:52:12.006392 | orchestrator | Monday 26 May 2025 04:51:41 +0000 (0:00:11.874) 0:00:35.528 ************ 2025-05-26 04:52:12.006402 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:52:12.006413 | orchestrator | ok: [testbed-node-0] 2025-05-26 04:52:12.006424 | orchestrator | ok: [testbed-node-2] 2025-05-26 04:52:12.006455 | orchestrator | ok: [testbed-node-3] 2025-05-26 04:52:12.006467 | orchestrator | ok: [testbed-node-4] 2025-05-26 04:52:12.006477 | orchestrator | ok: [testbed-node-5] 2025-05-26 04:52:12.006488 | orchestrator | 2025-05-26 04:52:12.006499 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-26 04:52:12.006510 | orchestrator | Monday 26 May 2025 04:51:43 +0000 (0:00:01.669) 0:00:37.198 ************ 2025-05-26 04:52:12.006520 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:52:12.006541 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:52:12.006551 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:52:12.006562 | orchestrator | changed: [testbed-node-3] 2025-05-26 04:52:12.006572 | orchestrator | changed: [testbed-node-4] 2025-05-26 04:52:12.006583 | orchestrator | changed: [testbed-node-5] 2025-05-26 04:52:12.006594 | orchestrator | 2025-05-26 04:52:12.006604 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-05-26 04:52:12.006615 | orchestrator | Monday 26 May 2025 04:51:47 +0000 (0:00:04.803) 0:00:42.001 ************ 2025-05-26 04:52:12.006626 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-05-26 04:52:12.006637 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-05-26 04:52:12.006648 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-05-26 04:52:12.006659 | orchestrator | changed: 
[testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-05-26 04:52:12.006675 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-05-26 04:52:12.006693 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-05-26 04:52:12.006705 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-05-26 04:52:12.006716 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-05-26 04:52:12.006726 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-05-26 04:52:12.006737 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-05-26 04:52:12.006748 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-05-26 04:52:12.006758 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-05-26 04:52:12.006769 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-26 04:52:12.006780 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-26 04:52:12.006790 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-26 04:52:12.006801 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-26 04:52:12.006811 | orchestrator | ok: [testbed-node-2] => 
(item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-26 04:52:12.006822 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-26 04:52:12.006833 | orchestrator |
2025-05-26 04:52:12.006844 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-05-26 04:52:12.006855 | orchestrator | Monday 26 May 2025 04:51:55 +0000 (0:00:07.543) 0:00:49.545 ************
2025-05-26 04:52:12.006865 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-05-26 04:52:12.006876 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:52:12.006887 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-05-26 04:52:12.006898 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:52:12.006908 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-05-26 04:52:12.006919 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:52:12.006930 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-05-26 04:52:12.006941 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-05-26 04:52:12.006962 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-05-26 04:52:12.006973 | orchestrator |
2025-05-26 04:52:12.006983 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-05-26 04:52:12.006994 | orchestrator | Monday 26 May 2025 04:51:57 +0000 (0:00:02.511) 0:00:52.056 ************
2025-05-26 04:52:12.007005 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-05-26 04:52:12.007016 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:52:12.007026 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-05-26 04:52:12.007037 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:52:12.007048 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-05-26 04:52:12.007059 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:52:12.007069 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-05-26 04:52:12.007080 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-05-26 04:52:12.007090 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-05-26 04:52:12.007101 | orchestrator |
2025-05-26 04:52:12.007112 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-05-26 04:52:12.007123 | orchestrator | Monday 26 May 2025 04:52:01 +0000 (0:00:03.764) 0:00:55.821 ************
2025-05-26 04:52:12.007133 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:52:12.007144 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:52:12.007155 | orchestrator | changed: [testbed-node-3]
2025-05-26 04:52:12.007165 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:52:12.007176 | orchestrator | changed: [testbed-node-4]
2025-05-26 04:52:12.007186 | orchestrator | changed: [testbed-node-5]
2025-05-26 04:52:12.007197 | orchestrator |
2025-05-26 04:52:12.007208 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 04:52:12.007219 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-26 04:52:12.007230 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-26 04:52:12.007241 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-26 04:52:12.007252 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-26 04:52:12.007268 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-26 04:52:12.007285 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-26 04:52:12.007296 | orchestrator |
2025-05-26 04:52:12.007307 | orchestrator |
2025-05-26 04:52:12.007318 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 04:52:12.007329 | orchestrator | Monday 26 May 2025 04:52:09 +0000 (0:00:08.219) 0:01:04.040 ************
2025-05-26 04:52:12.007340 | orchestrator | ===============================================================================
2025-05-26 04:52:12.007350 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 13.02s
2025-05-26 04:52:12.007361 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.88s
2025-05-26 04:52:12.007372 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.54s
2025-05-26 04:52:12.007382 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.94s
2025-05-26 04:52:12.007393 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.76s
2025-05-26 04:52:12.007403 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.21s
2025-05-26 04:52:12.007421 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.51s
2025-05-26 04:52:12.007432 | orchestrator | module-load : Load modules ---------------------------------------------- 2.36s
2025-05-26 04:52:12.007503 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.36s
2025-05-26 04:52:12.007514 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.96s
2025-05-26 04:52:12.007525 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.79s
2025-05-26 04:52:12.007535 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.79s
2025-05-26 04:52:12.007546 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.76s
2025-05-26 04:52:12.007557 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.67s
2025-05-26 04:52:12.007567 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.06s
2025-05-26 04:52:12.007582 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.78s
2025-05-26 04:52:12.007599 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.73s
2025-05-26 04:52:12.007617 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.70s
2025-05-26 04:52:12.007636 | orchestrator | 2025-05-26 04:52:12 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:52:12.007653 | orchestrator | 2025-05-26 04:52:12 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED
2025-05-26 04:52:12.007672 | orchestrator | 2025-05-26 04:52:12 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:52:15.050201 | orchestrator | 2025-05-26 04:52:15 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED
2025-05-26 04:52:15.050904 | orchestrator | 2025-05-26 04:52:15 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:52:15.052183 | orchestrator | 2025-05-26 04:52:15 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED
2025-05-26 04:52:15.053071 | orchestrator | 2025-05-26 04:52:15 | INFO  | Task 8892126e-334c-4884-a689-b9ead4ba6db7 is in state STARTED
2025-05-26 04:52:15.054361 | orchestrator | 2025-05-26 04:52:15 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED
2025-05-26 04:52:15.054392 | orchestrator | 2025-05-26 04:52:15 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:52:54.745317 | orchestrator | 2025-05-26 04:52:54 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED
2025-05-26 04:52:54.749558 | orchestrator | 2025-05-26 04:52:54 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:52:54.749706 | orchestrator | 2025-05-26 04:52:54 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED
2025-05-26 04:52:54.753058 | orchestrator | 2025-05-26 04:52:54 | INFO  | Task
8892126e-334c-4884-a689-b9ead4ba6db7 is in state SUCCESS 2025-05-26 04:52:54.754503 | orchestrator | 2025-05-26 04:52:54.754590 | orchestrator | 2025-05-26 04:52:54.754610 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-05-26 04:52:54.754621 | orchestrator | 2025-05-26 04:52:54.754629 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-05-26 04:52:54.754638 | orchestrator | Monday 26 May 2025 04:48:22 +0000 (0:00:00.192) 0:00:00.192 ************ 2025-05-26 04:52:54.754646 | orchestrator | ok: [testbed-node-3] 2025-05-26 04:52:54.754655 | orchestrator | ok: [testbed-node-4] 2025-05-26 04:52:54.754663 | orchestrator | ok: [testbed-node-5] 2025-05-26 04:52:54.754671 | orchestrator | ok: [testbed-node-0] 2025-05-26 04:52:54.754679 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:52:54.754687 | orchestrator | ok: [testbed-node-2] 2025-05-26 04:52:54.754695 | orchestrator | 2025-05-26 04:52:54.754717 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-05-26 04:52:54.754726 | orchestrator | Monday 26 May 2025 04:48:23 +0000 (0:00:00.742) 0:00:00.934 ************ 2025-05-26 04:52:54.754734 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:52:54.754742 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:52:54.754750 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:52:54.754758 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:52:54.754766 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:52:54.754773 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:52:54.754781 | orchestrator | 2025-05-26 04:52:54.754789 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-05-26 04:52:54.754797 | orchestrator | Monday 26 May 2025 04:48:24 +0000 (0:00:00.661) 0:00:01.596 ************ 2025-05-26 04:52:54.754805 | orchestrator | 
skipping: [testbed-node-3] 2025-05-26 04:52:54.754813 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:52:54.754820 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:52:54.754828 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:52:54.754836 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:52:54.754843 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:52:54.754851 | orchestrator | 2025-05-26 04:52:54.754859 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-05-26 04:52:54.754867 | orchestrator | Monday 26 May 2025 04:48:25 +0000 (0:00:00.925) 0:00:02.522 ************ 2025-05-26 04:52:54.754875 | orchestrator | changed: [testbed-node-3] 2025-05-26 04:52:54.754882 | orchestrator | changed: [testbed-node-5] 2025-05-26 04:52:54.754890 | orchestrator | changed: [testbed-node-4] 2025-05-26 04:52:54.754898 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:52:54.754906 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:52:54.754913 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:52:54.754943 | orchestrator | 2025-05-26 04:52:54.754952 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-05-26 04:52:54.754959 | orchestrator | Monday 26 May 2025 04:48:27 +0000 (0:00:02.048) 0:00:04.570 ************ 2025-05-26 04:52:54.754967 | orchestrator | changed: [testbed-node-3] 2025-05-26 04:52:54.754975 | orchestrator | changed: [testbed-node-4] 2025-05-26 04:52:54.754982 | orchestrator | changed: [testbed-node-5] 2025-05-26 04:52:54.754990 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:52:54.754998 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:52:54.755005 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:52:54.755013 | orchestrator | 2025-05-26 04:52:54.755021 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-05-26 04:52:54.755070 | 
orchestrator | Monday 26 May 2025 04:48:28 +0000 (0:00:01.070) 0:00:05.640 ************ 2025-05-26 04:52:54.755080 | orchestrator | changed: [testbed-node-3] 2025-05-26 04:52:54.755106 | orchestrator | changed: [testbed-node-4] 2025-05-26 04:52:54.755115 | orchestrator | changed: [testbed-node-5] 2025-05-26 04:52:54.755124 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:52:54.755133 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:52:54.755142 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:52:54.755150 | orchestrator | 2025-05-26 04:52:54.755159 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-05-26 04:52:54.755168 | orchestrator | Monday 26 May 2025 04:48:29 +0000 (0:00:00.985) 0:00:06.626 ************ 2025-05-26 04:52:54.755178 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:52:54.755187 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:52:54.755195 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:52:54.755204 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:52:54.755213 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:52:54.755222 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:52:54.755231 | orchestrator | 2025-05-26 04:52:54.755240 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-05-26 04:52:54.755249 | orchestrator | Monday 26 May 2025 04:48:29 +0000 (0:00:00.665) 0:00:07.291 ************ 2025-05-26 04:52:54.755257 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:52:54.755266 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:52:54.755275 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:52:54.755284 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:52:54.755293 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:52:54.755302 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:52:54.755311 | orchestrator | 2025-05-26 
04:52:54.755320 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-05-26 04:52:54.755329 | orchestrator | Monday 26 May 2025 04:48:30 +0000 (0:00:00.496) 0:00:07.788 ************ 2025-05-26 04:52:54.755338 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-26 04:52:54.755347 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-26 04:52:54.755355 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:52:54.755365 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-26 04:52:54.755374 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-26 04:52:54.755382 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:52:54.755390 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-26 04:52:54.755398 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-26 04:52:54.755406 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:52:54.755414 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-26 04:52:54.755435 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-26 04:52:54.755491 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:52:54.755499 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-26 04:52:54.755515 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-26 04:52:54.755523 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:52:54.755531 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-26 04:52:54.755539 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  
2025-05-26 04:52:54.755546 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:52:54.755554 | orchestrator | 2025-05-26 04:52:54.755567 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-05-26 04:52:54.755576 | orchestrator | Monday 26 May 2025 04:48:31 +0000 (0:00:01.091) 0:00:08.880 ************ 2025-05-26 04:52:54.755584 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:52:54.755591 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:52:54.755599 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:52:54.755607 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:52:54.755614 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:52:54.755622 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:52:54.755630 | orchestrator | 2025-05-26 04:52:54.755638 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-05-26 04:52:54.755647 | orchestrator | Monday 26 May 2025 04:48:33 +0000 (0:00:01.439) 0:00:10.319 ************ 2025-05-26 04:52:54.755655 | orchestrator | ok: [testbed-node-3] 2025-05-26 04:52:54.755663 | orchestrator | ok: [testbed-node-4] 2025-05-26 04:52:54.755670 | orchestrator | ok: [testbed-node-5] 2025-05-26 04:52:54.755678 | orchestrator | ok: [testbed-node-0] 2025-05-26 04:52:54.755686 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:52:54.755693 | orchestrator | ok: [testbed-node-2] 2025-05-26 04:52:54.755701 | orchestrator | 2025-05-26 04:52:54.755709 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-05-26 04:52:54.755717 | orchestrator | Monday 26 May 2025 04:48:33 +0000 (0:00:00.779) 0:00:11.099 ************ 2025-05-26 04:52:54.755725 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:52:54.755733 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:52:54.755740 | orchestrator | changed: [testbed-node-4] 2025-05-26 
04:52:54.755748 | orchestrator | changed: [testbed-node-3] 2025-05-26 04:52:54.755756 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:52:54.755763 | orchestrator | changed: [testbed-node-5] 2025-05-26 04:52:54.755771 | orchestrator | 2025-05-26 04:52:54.755779 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-05-26 04:52:54.755787 | orchestrator | Monday 26 May 2025 04:48:40 +0000 (0:00:06.482) 0:00:17.581 ************ 2025-05-26 04:52:54.755794 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:52:54.755802 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:52:54.755810 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:52:54.755818 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:52:54.755825 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:52:54.755833 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:52:54.755840 | orchestrator | 2025-05-26 04:52:54.755848 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-05-26 04:52:54.755856 | orchestrator | Monday 26 May 2025 04:48:41 +0000 (0:00:00.925) 0:00:18.506 ************ 2025-05-26 04:52:54.755864 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:52:54.755871 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:52:54.755879 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:52:54.755887 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:52:54.755894 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:52:54.755902 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:52:54.755910 | orchestrator | 2025-05-26 04:52:54.755918 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-05-26 04:52:54.755927 | orchestrator | Monday 26 May 2025 04:48:42 +0000 (0:00:01.660) 0:00:20.166 ************ 2025-05-26 04:52:54.755940 | 
orchestrator | skipping: [testbed-node-3] 2025-05-26 04:52:54.755948 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:52:54.755956 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:52:54.755963 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:52:54.755971 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:52:54.755979 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:52:54.755986 | orchestrator | 2025-05-26 04:52:54.755994 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-05-26 04:52:54.756002 | orchestrator | Monday 26 May 2025 04:48:43 +0000 (0:00:00.812) 0:00:20.978 ************ 2025-05-26 04:52:54.756010 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-05-26 04:52:54.756018 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-05-26 04:52:54.756026 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:52:54.756034 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-05-26 04:52:54.756042 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-05-26 04:52:54.756049 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:52:54.756057 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-05-26 04:52:54.756065 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-05-26 04:52:54.756072 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:52:54.756080 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-05-26 04:52:54.756088 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-05-26 04:52:54.756096 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:52:54.756104 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-05-26 04:52:54.756111 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-05-26 04:52:54.756119 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:52:54.756127 | orchestrator 
| skipping: [testbed-node-2] => (item=rancher)  2025-05-26 04:52:54.756134 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-05-26 04:52:54.756142 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:52:54.756150 | orchestrator | 2025-05-26 04:52:54.756158 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-05-26 04:52:54.756171 | orchestrator | Monday 26 May 2025 04:48:44 +0000 (0:00:01.241) 0:00:22.220 ************ 2025-05-26 04:52:54.756179 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:52:54.756187 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:52:54.756194 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:52:54.756202 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:52:54.756211 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:52:54.756218 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:52:54.756226 | orchestrator | 2025-05-26 04:52:54.756234 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-05-26 04:52:54.756241 | orchestrator | 2025-05-26 04:52:54.756249 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-05-26 04:52:54.756261 | orchestrator | Monday 26 May 2025 04:48:46 +0000 (0:00:01.652) 0:00:23.873 ************ 2025-05-26 04:52:54.756269 | orchestrator | ok: [testbed-node-0] 2025-05-26 04:52:54.756276 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:52:54.756284 | orchestrator | ok: [testbed-node-2] 2025-05-26 04:52:54.756292 | orchestrator | 2025-05-26 04:52:54.756300 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-05-26 04:52:54.756307 | orchestrator | Monday 26 May 2025 04:48:48 +0000 (0:00:01.717) 0:00:25.591 ************ 2025-05-26 04:52:54.756315 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:52:54.756323 | orchestrator | ok: 
[testbed-node-0]
2025-05-26 04:52:54.756331 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:52:54.756338 | orchestrator |
2025-05-26 04:52:54.756346 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-05-26 04:52:54.756354 | orchestrator | Monday 26 May 2025 04:48:49 +0000 (0:00:01.347) 0:00:26.938 ************
2025-05-26 04:52:54.756361 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:52:54.756375 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:52:54.756382 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:52:54.756390 | orchestrator |
2025-05-26 04:52:54.756398 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-05-26 04:52:54.756406 | orchestrator | Monday 26 May 2025 04:48:50 +0000 (0:00:01.218) 0:00:28.157 ************
2025-05-26 04:52:54.756414 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:52:54.756421 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:52:54.756429 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:52:54.756453 | orchestrator |
2025-05-26 04:52:54.756462 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-05-26 04:52:54.756470 | orchestrator | Monday 26 May 2025 04:48:51 +0000 (0:00:00.951) 0:00:29.108 ************
2025-05-26 04:52:54.756478 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:52:54.756486 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:52:54.756493 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:52:54.756502 | orchestrator |
2025-05-26 04:52:54.756509 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-05-26 04:52:54.756517 | orchestrator | Monday 26 May 2025 04:48:52 +0000 (0:00:00.537) 0:00:29.646 ************
2025-05-26 04:52:54.756525 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 04:52:54.756533 | orchestrator |
2025-05-26 04:52:54.756541 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-05-26 04:52:54.756549 | orchestrator | Monday 26 May 2025 04:48:53 +0000 (0:00:00.877) 0:00:30.524 ************
2025-05-26 04:52:54.756557 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:52:54.756564 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:52:54.756572 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:52:54.756580 | orchestrator |
2025-05-26 04:52:54.756587 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-05-26 04:52:54.756595 | orchestrator | Monday 26 May 2025 04:48:56 +0000 (0:00:02.956) 0:00:33.480 ************
2025-05-26 04:52:54.756603 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:52:54.756611 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:52:54.756618 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:52:54.756626 | orchestrator |
2025-05-26 04:52:54.756634 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-05-26 04:52:54.756642 | orchestrator | Monday 26 May 2025 04:48:57 +0000 (0:00:00.877) 0:00:34.358 ************
2025-05-26 04:52:54.756650 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:52:54.756657 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:52:54.756665 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:52:54.756673 | orchestrator |
2025-05-26 04:52:54.756681 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-05-26 04:52:54.756688 | orchestrator | Monday 26 May 2025 04:48:58 +0000 (0:00:00.978) 0:00:35.337 ************
2025-05-26 04:52:54.756696 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:52:54.756704 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:52:54.756712 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:52:54.756719 | orchestrator |
2025-05-26 04:52:54.756727 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-05-26 04:52:54.756735 | orchestrator | Monday 26 May 2025 04:48:59 +0000 (0:00:01.734) 0:00:37.071 ************
2025-05-26 04:52:54.756742 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:52:54.756750 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:52:54.756758 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:52:54.756766 | orchestrator |
2025-05-26 04:52:54.756774 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-05-26 04:52:54.756781 | orchestrator | Monday 26 May 2025 04:49:00 +0000 (0:00:00.585) 0:00:37.657 ************
2025-05-26 04:52:54.756789 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:52:54.756797 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:52:54.756810 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:52:54.756829 | orchestrator |
2025-05-26 04:52:54.756841 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-05-26 04:52:54.756854 | orchestrator | Monday 26 May 2025 04:49:00 +0000 (0:00:00.396) 0:00:38.054 ************
2025-05-26 04:52:54.756867 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:52:54.756880 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:52:54.756893 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:52:54.756907 | orchestrator |
2025-05-26 04:52:54.756915 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-05-26 04:52:54.756923 | orchestrator | Monday 26 May 2025 04:49:02 +0000 (0:00:01.572) 0:00:39.626 ************
2025-05-26 04:52:54.756936 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-05-26 04:52:54.756945 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-05-26 04:52:54.756953 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-05-26 04:52:54.756961 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-05-26 04:52:54.756969 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-05-26 04:52:54.756977 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-05-26 04:52:54.756985 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-05-26 04:52:54.756993 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-05-26 04:52:54.757001 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-05-26 04:52:54.757008 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-05-26 04:52:54.757016 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-05-26 04:52:54.757024 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-05-26 04:52:54.757032 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-05-26 04:52:54.757040 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:52:54.757048 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:52:54.757130 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:52:54.757141 | orchestrator |
2025-05-26 04:52:54.757150 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-05-26 04:52:54.757158 | orchestrator | Monday 26 May 2025 04:49:57 +0000 (0:00:55.642) 0:01:35.268 ************
2025-05-26 04:52:54.757165 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:52:54.757173 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:52:54.757181 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:52:54.757189 | orchestrator |
2025-05-26 04:52:54.757197 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-05-26 04:52:54.757205 | orchestrator | Monday 26 May 2025 04:49:58 +0000 (0:00:00.279) 0:01:35.548 ************
2025-05-26 04:52:54.757212 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:52:54.757220 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:52:54.757228 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:52:54.757236 | orchestrator |
2025-05-26 04:52:54.757244 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-05-26 04:52:54.757258 | orchestrator | Monday 26 May 2025 04:49:59 +0000 (0:00:00.961) 0:01:36.509 ************
2025-05-26 04:52:54.757266 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:52:54.757274 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:52:54.757282 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:52:54.757290 | orchestrator |
2025-05-26 04:52:54.757298 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-05-26 04:52:54.757305 | orchestrator | Monday 26 May 2025 04:50:00 +0000 (0:00:01.190) 0:01:37.700 ************
2025-05-26 04:52:54.757313 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:52:54.757321 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:52:54.757329 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:52:54.757337 | orchestrator |
2025-05-26 04:52:54.757345 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-05-26 04:52:54.757352 | orchestrator | Monday 26 May 2025 04:50:16 +0000 (0:00:16.096) 0:01:53.797 ************
2025-05-26 04:52:54.757360 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:52:54.757369 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:52:54.757376 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:52:54.757384 | orchestrator |
2025-05-26 04:52:54.757392 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-05-26 04:52:54.757400 | orchestrator | Monday 26 May 2025 04:50:17 +0000 (0:00:00.725) 0:01:54.522 ************
2025-05-26 04:52:54.757408 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:52:54.757416 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:52:54.757425 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:52:54.757432 | orchestrator |
2025-05-26 04:52:54.757456 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-05-26 04:52:54.757464 | orchestrator | Monday 26 May 2025 04:50:17 +0000 (0:00:00.696) 0:01:55.219 ************
2025-05-26 04:52:54.757472 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:52:54.757480 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:52:54.757488 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:52:54.757496 | orchestrator |
2025-05-26 04:52:54.757503 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-05-26 04:52:54.757511 | orchestrator | Monday 26 May 2025 04:50:18 +0000 (0:00:00.915) 0:01:56.135 ************
2025-05-26 04:52:54.757520 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:52:54.757527 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:52:54.757535 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:52:54.757543 | orchestrator |
2025-05-26 04:52:54.757556 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-05-26 04:52:54.757565 | orchestrator | Monday 26 May 2025 04:50:20 +0000 (0:00:01.177) 0:01:57.312 ************
2025-05-26 04:52:54.757572 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:52:54.757580 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:52:54.757588 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:52:54.757596 | orchestrator |
2025-05-26 04:52:54.759931 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-05-26 04:52:54.759985 | orchestrator | Monday 26 May 2025 04:50:20 +0000 (0:00:00.365) 0:01:57.677 ************
2025-05-26 04:52:54.759993 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:52:54.760000 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:52:54.760007 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:52:54.760014 | orchestrator |
2025-05-26 04:52:54.760021 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-05-26 04:52:54.760028 | orchestrator | Monday 26 May 2025 04:50:21 +0000 (0:00:00.616) 0:01:58.350 ************
2025-05-26 04:52:54.760034 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:52:54.760041 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:52:54.760049 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:52:54.760059 | orchestrator |
2025-05-26 04:52:54.760069 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-05-26 04:52:54.760079 | orchestrator | Monday 26 May 2025 04:50:21 +0000 (0:00:00.616) 0:01:58.967 ************
2025-05-26 04:52:54.760103 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:52:54.760114 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:52:54.760124 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:52:54.760133 | orchestrator |
2025-05-26 04:52:54.760144 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-05-26 04:52:54.760155 | orchestrator | Monday 26 May 2025 04:50:22 +0000 (0:00:01.106) 0:02:00.073 ************
2025-05-26 04:52:54.760166 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:52:54.760173 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:52:54.760180 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:52:54.760186 | orchestrator |
2025-05-26 04:52:54.760193 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-05-26 04:52:54.760200 | orchestrator | Monday 26 May 2025 04:50:23 +0000 (0:00:00.887) 0:02:00.960 ************
2025-05-26 04:52:54.760207 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:52:54.760213 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:52:54.760220 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:52:54.760226 | orchestrator |
2025-05-26 04:52:54.760233 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-05-26 04:52:54.760239 | orchestrator | Monday 26 May 2025 04:50:23 +0000 (0:00:00.273) 0:02:01.234 ************
2025-05-26 04:52:54.760246 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:52:54.760252 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:52:54.760259 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:52:54.760265 | orchestrator |
2025-05-26 04:52:54.760272 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-05-26 04:52:54.760278 | orchestrator | Monday 26 May 2025 04:50:24 +0000 (0:00:00.288) 0:02:01.523 ************
2025-05-26 04:52:54.760285 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:52:54.760292 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:52:54.760298 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:52:54.760305 | orchestrator |
2025-05-26 04:52:54.760311 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-05-26 04:52:54.760318 | orchestrator | Monday 26 May 2025 04:50:25 +0000 (0:00:01.058) 0:02:02.581 ************
2025-05-26 04:52:54.760324 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:52:54.760331 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:52:54.760337 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:52:54.760344 | orchestrator |
2025-05-26 04:52:54.760351 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-05-26 04:52:54.760359 | orchestrator | Monday 26 May 2025 04:50:25 +0000 (0:00:00.592) 0:02:03.174 ************
2025-05-26 04:52:54.760366 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-05-26 04:52:54.760373 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-05-26 04:52:54.760380 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-05-26 04:52:54.760386 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-05-26 04:52:54.760393 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-05-26 04:52:54.760400 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-05-26 04:52:54.760406 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-05-26 04:52:54.760413 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-05-26 04:52:54.760419 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-05-26 04:52:54.760426 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-05-26 04:52:54.760432 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-05-26 04:52:54.760492 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-05-26 04:52:54.760501 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-05-26 04:52:54.760507 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-05-26 04:52:54.760514 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-05-26 04:52:54.760533 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-05-26 04:52:54.760540 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-05-26 04:52:54.760553 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-05-26 04:52:54.760560 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-05-26 04:52:54.760566 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-05-26 04:52:54.760573 | orchestrator |
2025-05-26 04:52:54.760580 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-05-26 04:52:54.760586 | orchestrator |
2025-05-26 04:52:54.760593 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-05-26 04:52:54.760600 | orchestrator | Monday 26 May 2025 04:50:28 +0000 (0:00:02.989) 0:02:06.163 ************
2025-05-26 04:52:54.760606 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:52:54.760613 | orchestrator | ok: [testbed-node-4]
2025-05-26 04:52:54.760619 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:52:54.760626 | orchestrator |
2025-05-26 04:52:54.760633 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-05-26 04:52:54.760639 | orchestrator | Monday 26 May 2025 04:50:29 +0000 (0:00:00.575) 0:02:06.738 ************
2025-05-26 04:52:54.760646 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:52:54.760652 | orchestrator | ok: [testbed-node-4]
2025-05-26 04:52:54.760659 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:52:54.760665 | orchestrator |
2025-05-26 04:52:54.760672 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-05-26 04:52:54.760678 | orchestrator | Monday 26 May 2025 04:50:30 +0000 (0:00:00.599) 0:02:07.338 ************
2025-05-26 04:52:54.760685 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:52:54.760691 | orchestrator | ok: [testbed-node-4]
2025-05-26 04:52:54.760698 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:52:54.760704 | orchestrator |
2025-05-26 04:52:54.760711 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-05-26 04:52:54.760718 | orchestrator | Monday 26 May 2025 04:50:30 +0000 (0:00:00.316) 0:02:07.655 ************
2025-05-26 04:52:54.760724 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-26 04:52:54.760731 | orchestrator |
2025-05-26 04:52:54.760737 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-05-26 04:52:54.760744 | orchestrator | Monday 26 May 2025 04:50:31 +0000 (0:00:00.718) 0:02:08.374 ************
2025-05-26 04:52:54.760751 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:52:54.760757 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:52:54.760763 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:52:54.760769 | orchestrator |
2025-05-26 04:52:54.760776 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-05-26 04:52:54.760782 | orchestrator | Monday 26 May 2025 04:50:31 +0000 (0:00:00.303) 0:02:08.677 ************
2025-05-26 04:52:54.760788 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:52:54.760794 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:52:54.760800 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:52:54.760806 | orchestrator |
2025-05-26 04:52:54.760812 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-05-26 04:52:54.760823 | orchestrator | Monday 26 May 2025 04:50:31 +0000 (0:00:00.294) 0:02:08.972 ************
2025-05-26 04:52:54.760829 | orchestrator | skipping: [testbed-node-3]
2025-05-26 04:52:54.760835 | orchestrator | skipping: [testbed-node-4]
2025-05-26 04:52:54.760841 | orchestrator | skipping: [testbed-node-5]
2025-05-26 04:52:54.760847 | orchestrator |
2025-05-26 04:52:54.760853 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-05-26 04:52:54.760859 | orchestrator | Monday 26 May 2025 04:50:31 +0000 (0:00:00.293) 0:02:09.265 ************
2025-05-26 04:52:54.760865 | orchestrator | changed: [testbed-node-3]
2025-05-26 04:52:54.760871 | orchestrator | changed: [testbed-node-4]
2025-05-26 04:52:54.760878 | orchestrator | changed: [testbed-node-5]
2025-05-26 04:52:54.760884 | orchestrator |
2025-05-26 04:52:54.760890 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-05-26 04:52:54.760896 | orchestrator | Monday 26 May 2025 04:50:33 +0000 (0:00:01.406) 0:02:10.671 ************
2025-05-26 04:52:54.760902 | orchestrator | changed: [testbed-node-4]
2025-05-26 04:52:54.760908 | orchestrator | changed: [testbed-node-5]
2025-05-26 04:52:54.760914 | orchestrator | changed: [testbed-node-3]
2025-05-26 04:52:54.760920 | orchestrator |
2025-05-26 04:52:54.760926 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-05-26 04:52:54.760932 | orchestrator |
2025-05-26 04:52:54.760938 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-05-26 04:52:54.760944 | orchestrator | Monday 26 May 2025 04:50:42 +0000 (0:00:09.595) 0:02:20.267 ************
2025-05-26 04:52:54.760951 | orchestrator | ok: [testbed-manager]
2025-05-26 04:52:54.760957 | orchestrator |
2025-05-26 04:52:54.760963 | orchestrator | TASK [Create .kube directory] **************************************************
2025-05-26 04:52:54.760969 | orchestrator | Monday 26 May 2025 04:50:44 +0000 (0:00:01.294) 0:02:21.561 ************
2025-05-26 04:52:54.760975 | orchestrator | changed: [testbed-manager]
2025-05-26 04:52:54.760981 | orchestrator |
2025-05-26 04:52:54.760987 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-05-26 04:52:54.760993 | orchestrator | Monday 26 May 2025 04:50:44 +0000 (0:00:00.416) 0:02:21.978 ************
2025-05-26 04:52:54.760999 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-05-26 04:52:54.761006 | orchestrator |
2025-05-26 04:52:54.761012 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-05-26 04:52:54.761018 | orchestrator | Monday 26 May 2025 04:50:45 +0000 (0:00:00.944) 0:02:22.922 ************
2025-05-26 04:52:54.761024 | orchestrator | changed: [testbed-manager]
2025-05-26 04:52:54.761030 | orchestrator |
2025-05-26 04:52:54.761036 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-05-26 04:52:54.761043 | orchestrator | Monday 26 May 2025 04:50:46 +0000 (0:00:00.845) 0:02:23.767 ************
2025-05-26 04:52:54.761054 | orchestrator | changed: [testbed-manager]
2025-05-26 04:52:54.761060 | orchestrator |
2025-05-26 04:52:54.761067 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-05-26 04:52:54.761073 | orchestrator | Monday 26 May 2025 04:50:47 +0000 (0:00:00.568) 0:02:24.335 ************
2025-05-26 04:52:54.761079 | orchestrator | changed: [testbed-manager -> localhost]
2025-05-26 04:52:54.761085 | orchestrator |
2025-05-26 04:52:54.761094 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-05-26 04:52:54.761101 | orchestrator | Monday 26 May 2025 04:50:48 +0000 (0:00:01.648) 0:02:25.984 ************
2025-05-26 04:52:54.761107 | orchestrator | changed: [testbed-manager -> localhost]
2025-05-26 04:52:54.761113 | orchestrator |
2025-05-26 04:52:54.761119 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-05-26 04:52:54.761125 | orchestrator | Monday 26 May 2025 04:50:49 +0000 (0:00:00.849) 0:02:26.833 ************
2025-05-26 04:52:54.761131 | orchestrator | changed: [testbed-manager]
2025-05-26 04:52:54.761138 | orchestrator |
2025-05-26 04:52:54.761144 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-05-26 04:52:54.761157 | orchestrator | Monday 26 May 2025 04:50:49 +0000 (0:00:00.426) 0:02:27.260 ************
2025-05-26 04:52:54.761163 | orchestrator | changed: [testbed-manager]
2025-05-26 04:52:54.761169 | orchestrator |
2025-05-26 04:52:54.761179 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-05-26 04:52:54.761189 | orchestrator |
2025-05-26 04:52:54.761198 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-05-26 04:52:54.761209 | orchestrator | Monday 26 May 2025 04:50:50 +0000 (0:00:00.479) 0:02:27.740 ************
2025-05-26 04:52:54.761218 | orchestrator | ok: [testbed-manager]
2025-05-26 04:52:54.761227 | orchestrator |
2025-05-26 04:52:54.761237 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-05-26 04:52:54.761247 | orchestrator | Monday 26 May 2025 04:50:50 +0000 (0:00:00.252) 0:02:27.992 ************
2025-05-26 04:52:54.761256 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-05-26 04:52:54.761266 | orchestrator |
2025-05-26 04:52:54.761275 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-05-26 04:52:54.761284 | orchestrator | Monday 26 May 2025 04:50:51 +0000 (0:00:00.520) 0:02:28.513 ************
2025-05-26 04:52:54.761293 | orchestrator | ok: [testbed-manager]
2025-05-26 04:52:54.761303 | orchestrator |
2025-05-26 04:52:54.761312 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2025-05-26 04:52:54.761322 | orchestrator | Monday 26 May 2025 04:50:52 +0000 (0:00:00.939) 0:02:29.452 ************
2025-05-26 04:52:54.761331 | orchestrator | ok: [testbed-manager]
2025-05-26 04:52:54.761342 | orchestrator |
2025-05-26 04:52:54.761352 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-05-26 04:52:54.761361 | orchestrator | Monday 26 May 2025 04:50:53 +0000 (0:00:01.539) 0:02:30.992 ************
2025-05-26 04:52:54.761371 | orchestrator | changed: [testbed-manager]
2025-05-26 04:52:54.761381 | orchestrator |
2025-05-26 04:52:54.761392 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-05-26 04:52:54.761402 | orchestrator | Monday 26 May 2025 04:50:54 +0000 (0:00:00.792) 0:02:31.784 ************
2025-05-26 04:52:54.761412 | orchestrator | ok: [testbed-manager]
2025-05-26 04:52:54.761422 | orchestrator |
2025-05-26 04:52:54.761432 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-05-26 04:52:54.761462 | orchestrator | Monday 26 May 2025 04:50:54 +0000 (0:00:00.450) 0:02:32.235 ************
2025-05-26 04:52:54.761473 | orchestrator | changed: [testbed-manager]
2025-05-26 04:52:54.761482 | orchestrator |
2025-05-26 04:52:54.761492 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-05-26 04:52:54.761502 | orchestrator | Monday 26 May 2025 04:51:02 +0000 (0:00:07.102) 0:02:39.337 ************
2025-05-26 04:52:54.761513 | orchestrator | changed: [testbed-manager]
2025-05-26 04:52:54.761523 | orchestrator |
2025-05-26 04:52:54.761533 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-05-26 04:52:54.761543 | orchestrator | Monday 26 May 2025 04:51:13 +0000 (0:00:11.801) 0:02:51.138 ************
2025-05-26 04:52:54.761554 | orchestrator | ok: [testbed-manager]
2025-05-26 04:52:54.761565 | orchestrator |
2025-05-26 04:52:54.761575 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-05-26 04:52:54.761585 | orchestrator |
2025-05-26 04:52:54.761595 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-05-26 04:52:54.761606 | orchestrator | Monday 26 May 2025 04:51:14 +0000 (0:00:00.482) 0:02:51.621 ************
2025-05-26 04:52:54.761616 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:52:54.761627 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:52:54.761637 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:52:54.761648 | orchestrator |
2025-05-26 04:52:54.761658 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-05-26 04:52:54.761669 | orchestrator | Monday 26 May 2025 04:51:14 +0000 (0:00:00.407) 0:02:52.029 ************
2025-05-26 04:52:54.761679 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:52:54.761700 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:52:54.761710 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:52:54.761720 | orchestrator |
2025-05-26 04:52:54.761730 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-05-26 04:52:54.761740 | orchestrator | Monday 26 May 2025 04:51:14 +0000 (0:00:00.250) 0:02:52.279 ************
2025-05-26 04:52:54.761752 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 04:52:54.761762 | orchestrator |
2025-05-26 04:52:54.761772 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-05-26 04:52:54.761782 | orchestrator | Monday 26 May 2025 04:51:15 +0000 (0:00:00.668) 0:02:52.948 ************
2025-05-26 04:52:54.761793 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-26 04:52:54.761803 | orchestrator |
2025-05-26 04:52:54.761814 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-05-26 04:52:54.761824 | orchestrator | Monday 26 May 2025 04:51:16 +0000 (0:00:00.799) 0:02:53.748 ************
2025-05-26 04:52:54.761842 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-26 04:52:54.761853 | orchestrator |
2025-05-26 04:52:54.761864 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-05-26 04:52:54.761873 | orchestrator | Monday 26 May 2025 04:51:17 +0000 (0:00:00.886) 0:02:54.635 ************
2025-05-26 04:52:54.761884 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:52:54.761894 | orchestrator |
2025-05-26 04:52:54.761910 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-05-26 04:52:54.761920 | orchestrator | Monday 26 May 2025 04:51:17 +0000 (0:00:00.554) 0:02:55.189 ************
2025-05-26 04:52:54.761931 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-26 04:52:54.761941 | orchestrator |
2025-05-26 04:52:54.761951 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-05-26 04:52:54.761961 | orchestrator | Monday 26 May 2025 04:51:18 +0000 (0:00:01.031) 0:02:56.221 ************
2025-05-26 04:52:54.761971 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:52:54.761981 | orchestrator |
2025-05-26 04:52:54.761992 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-05-26 04:52:54.762002 | orchestrator | Monday 26 May 2025 04:51:19 +0000 (0:00:00.368) 0:02:56.590 ************
2025-05-26 04:52:54.762048 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:52:54.762060 | orchestrator |
2025-05-26 04:52:54.762071 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-05-26 04:52:54.762082 | orchestrator | Monday 26 May 2025 04:51:19 +0000 (0:00:00.240) 0:02:56.830 ************
2025-05-26 04:52:54.762092 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:52:54.762103 | orchestrator |
2025-05-26 04:52:54.762113 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-05-26 04:52:54.762124 | orchestrator | Monday 26 May 2025 04:51:19 +0000 (0:00:00.209) 0:02:57.040 ************
2025-05-26 04:52:54.762134 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:52:54.762144 | orchestrator |
2025-05-26 04:52:54.762154 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-05-26 04:52:54.762164 | orchestrator | Monday 26 May 2025 04:51:19 +0000 (0:00:00.224) 0:02:57.265 ************
2025-05-26 04:52:54.762174 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-26 04:52:54.762185 | orchestrator |
2025-05-26 04:52:54.762196 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-05-26 04:52:54.762206 | orchestrator | Monday 26 May 2025 04:51:25 +0000 (0:00:05.226) 0:03:02.491 ************
2025-05-26 04:52:54.762216 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2025-05-26 04:52:54.762226 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2025-05-26 04:52:54.762236 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2025-05-26 04:52:54.762246 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2025-05-26 04:52:54.762265 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2025-05-26 04:52:54.762276 | orchestrator |
2025-05-26 04:52:54.762286 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-05-26 04:52:54.762296 | orchestrator | Monday 26 May 2025 04:52:25 +0000 (0:01:00.576) 0:04:03.068 ************
2025-05-26 04:52:54.762306 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-26 04:52:54.762316 | orchestrator |
2025-05-26 04:52:54.762327 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-05-26 04:52:54.762337 | orchestrator | Monday 26 May 2025 04:52:27 +0000 (0:00:01.419) 0:04:04.487 ************
2025-05-26 04:52:54.762347 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-26 04:52:54.762358 | orchestrator |
2025-05-26 04:52:54.762368 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-05-26 04:52:54.762378 | orchestrator | Monday 26 May 2025 04:52:28 +0000 (0:00:01.418) 0:04:05.906 ************
2025-05-26 04:52:54.762389 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-26 04:52:54.762399 | orchestrator |
2025-05-26 04:52:54.762409 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-05-26 04:52:54.762419 | orchestrator | Monday 26 May 2025 04:52:29 +0000 (0:00:01.041) 0:04:06.947 ************
2025-05-26 04:52:54.762429 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:52:54.762456 | orchestrator |
2025-05-26 04:52:54.762467 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-05-26 04:52:54.762477 | orchestrator | Monday 26 May 2025 04:52:29 +0000 (0:00:00.195) 0:04:07.143 ************
2025-05-26 04:52:54.762487 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-05-26 04:52:54.762497 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-05-26 04:52:54.762504 | orchestrator |
2025-05-26 04:52:54.762510 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-05-26 04:52:54.762516 | orchestrator | Monday 26 May 2025 04:52:32 +0000 (0:00:02.370) 0:04:09.513 ************
2025-05-26 04:52:54.762522 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:52:54.762528 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:52:54.762535 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:52:54.762541 | orchestrator |
2025-05-26 04:52:54.762547 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-05-26 04:52:54.762553 | orchestrator | Monday 26 May 2025 04:52:32 +0000 (0:00:00.420) 0:04:09.933 ************
2025-05-26 04:52:54.762559 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:52:54.762565 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:52:54.762571 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:52:54.762577 | orchestrator |
2025-05-26 04:52:54.762583 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-05-26 04:52:54.762589 | orchestrator |
2025-05-26 04:52:54.762596 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-05-26 04:52:54.762602 | orchestrator | Monday 26 May 2025 04:52:33 +0000 (0:00:00.892) 0:04:10.826 ************
2025-05-26 04:52:54.762608 | orchestrator | ok: [testbed-manager]
2025-05-26 04:52:54.762614 | orchestrator |
2025-05-26 04:52:54.762626 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-05-26 04:52:54.762632 | orchestrator | Monday 26 May 2025 04:52:33 +0000 (0:00:00.190) 0:04:11.017 ************
2025-05-26 04:52:54.762638 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-05-26 04:52:54.762644 | orchestrator |
2025-05-26 04:52:54.762655 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-05-26 04:52:54.762661 | orchestrator | Monday 26 May 2025 04:52:34 +0000 (0:00:00.410) 0:04:11.428 ************
2025-05-26 04:52:54.762668 | orchestrator | changed: [testbed-manager]
2025-05-26 04:52:54.762674 | orchestrator |
2025-05-26 04:52:54.762680 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-05-26 04:52:54.762692 | orchestrator |
2025-05-26 04:52:54.762699 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-05-26 04:52:54.762705 | orchestrator | Monday 26 May 2025 04:52:40 +0000 (0:00:05.977) 0:04:17.405 ************
2025-05-26 04:52:54.762711 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:52:54.762717 | orchestrator | ok: [testbed-node-4]
2025-05-26 04:52:54.762723 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:52:54.762729 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:52:54.762735 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:52:54.762741 |
orchestrator | ok: [testbed-node-2] 2025-05-26 04:52:54.762747 | orchestrator | 2025-05-26 04:52:54.762754 | orchestrator | TASK [Manage labels] *********************************************************** 2025-05-26 04:52:54.762760 | orchestrator | Monday 26 May 2025 04:52:40 +0000 (0:00:00.587) 0:04:17.992 ************ 2025-05-26 04:52:54.762766 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-05-26 04:52:54.762772 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-05-26 04:52:54.762778 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-05-26 04:52:54.762784 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-05-26 04:52:54.762790 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-05-26 04:52:54.762796 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-05-26 04:52:54.762802 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-05-26 04:52:54.762808 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-05-26 04:52:54.762815 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-05-26 04:52:54.762821 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-05-26 04:52:54.762827 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-05-26 04:52:54.762833 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-05-26 04:52:54.762839 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-05-26 04:52:54.762845 | orchestrator | 
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-05-26 04:52:54.762851 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-05-26 04:52:54.762857 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-05-26 04:52:54.762863 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-05-26 04:52:54.762869 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-05-26 04:52:54.762875 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-05-26 04:52:54.762881 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-05-26 04:52:54.762887 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-05-26 04:52:54.762893 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-05-26 04:52:54.762899 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-05-26 04:52:54.762905 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-05-26 04:52:54.762911 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-05-26 04:52:54.762918 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-05-26 04:52:54.762924 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-05-26 04:52:54.762934 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-05-26 04:52:54.762941 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-05-26 04:52:54.762947 | orchestrator | ok: [testbed-node-2 
-> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-05-26 04:52:54.762953 | orchestrator | 2025-05-26 04:52:54.762959 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-05-26 04:52:54.762965 | orchestrator | Monday 26 May 2025 04:52:52 +0000 (0:00:12.011) 0:04:30.003 ************ 2025-05-26 04:52:54.762971 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:52:54.762977 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:52:54.762983 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:52:54.762989 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:52:54.762999 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:52:54.763005 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:52:54.763012 | orchestrator | 2025-05-26 04:52:54.763018 | orchestrator | TASK [Manage taints] *********************************************************** 2025-05-26 04:52:54.763024 | orchestrator | Monday 26 May 2025 04:52:53 +0000 (0:00:00.436) 0:04:30.440 ************ 2025-05-26 04:52:54.763030 | orchestrator | skipping: [testbed-node-3] 2025-05-26 04:52:54.763040 | orchestrator | skipping: [testbed-node-4] 2025-05-26 04:52:54.763046 | orchestrator | skipping: [testbed-node-5] 2025-05-26 04:52:54.763052 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:52:54.763058 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:52:54.763064 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:52:54.763070 | orchestrator | 2025-05-26 04:52:54.763076 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-26 04:52:54.763082 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-26 04:52:54.763091 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-26 04:52:54.763098 | orchestrator | testbed-node-1 : ok=34  changed=14 
 unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-05-26 04:52:54.763104 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-05-26 04:52:54.763111 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-26 04:52:54.763117 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-26 04:52:54.763123 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-26 04:52:54.763129 | orchestrator | 2025-05-26 04:52:54.763135 | orchestrator | 2025-05-26 04:52:54.763141 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-26 04:52:54.763147 | orchestrator | Monday 26 May 2025 04:52:53 +0000 (0:00:00.564) 0:04:31.004 ************ 2025-05-26 04:52:54.763154 | orchestrator | =============================================================================== 2025-05-26 04:52:54.763160 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 60.58s 2025-05-26 04:52:54.763166 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.64s 2025-05-26 04:52:54.763172 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 16.10s 2025-05-26 04:52:54.763178 | orchestrator | Manage labels ---------------------------------------------------------- 12.01s 2025-05-26 04:52:54.763184 | orchestrator | kubectl : Install required packages ------------------------------------ 11.80s 2025-05-26 04:52:54.763198 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.60s 2025-05-26 04:52:54.763204 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.10s 2025-05-26 04:52:54.763210 | orchestrator | k3s_download : 
Download k3s binary x64 ---------------------------------- 6.48s 2025-05-26 04:52:54.763216 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.98s 2025-05-26 04:52:54.763222 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.23s 2025-05-26 04:52:54.763229 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.99s 2025-05-26 04:52:54.763235 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.96s 2025-05-26 04:52:54.763241 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.37s 2025-05-26 04:52:54.763247 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.05s 2025-05-26 04:52:54.763253 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.73s 2025-05-26 04:52:54.763259 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.72s 2025-05-26 04:52:54.763265 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.66s 2025-05-26 04:52:54.763271 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.65s 2025-05-26 04:52:54.763277 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.65s 2025-05-26 04:52:54.763283 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.57s 2025-05-26 04:52:54.763290 | orchestrator | 2025-05-26 04:52:54 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED 2025-05-26 04:52:54.763296 | orchestrator | 2025-05-26 04:52:54 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:52:57.814885 | orchestrator | 2025-05-26 04:52:57 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in 
state STARTED 2025-05-26 04:52:57.818254 | orchestrator | 2025-05-26 04:52:57 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:52:57.820910 | orchestrator | 2025-05-26 04:52:57 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:52:57.823009 | orchestrator | 2025-05-26 04:52:57 | INFO  | Task 8fdb143f-ac96-4162-bee6-5061f699f654 is in state STARTED 2025-05-26 04:52:57.824640 | orchestrator | 2025-05-26 04:52:57 | INFO  | Task 43c977dd-3091-4e3a-9a25-a6a8ab61d104 is in state STARTED 2025-05-26 04:52:57.825298 | orchestrator | 2025-05-26 04:52:57 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED 2025-05-26 04:52:57.825315 | orchestrator | 2025-05-26 04:52:57 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:53:00.881792 | orchestrator | 2025-05-26 04:53:00 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:53:00.883612 | orchestrator | 2025-05-26 04:53:00 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:53:00.888020 | orchestrator | 2025-05-26 04:53:00 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:53:00.891210 | orchestrator | 2025-05-26 04:53:00 | INFO  | Task 8fdb143f-ac96-4162-bee6-5061f699f654 is in state STARTED 2025-05-26 04:53:00.893974 | orchestrator | 2025-05-26 04:53:00 | INFO  | Task 43c977dd-3091-4e3a-9a25-a6a8ab61d104 is in state STARTED 2025-05-26 04:53:00.897598 | orchestrator | 2025-05-26 04:53:00 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED 2025-05-26 04:53:00.897650 | orchestrator | 2025-05-26 04:53:00 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:53:03.982177 | orchestrator | 2025-05-26 04:53:03 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:53:03.982406 | orchestrator | 2025-05-26 04:53:03 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state 
STARTED 2025-05-26 04:53:03.983416 | orchestrator | 2025-05-26 04:53:03 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:53:03.984259 | orchestrator | 2025-05-26 04:53:03 | INFO  | Task 8fdb143f-ac96-4162-bee6-5061f699f654 is in state STARTED 2025-05-26 04:53:03.986605 | orchestrator | 2025-05-26 04:53:03 | INFO  | Task 43c977dd-3091-4e3a-9a25-a6a8ab61d104 is in state SUCCESS 2025-05-26 04:53:03.987494 | orchestrator | 2025-05-26 04:53:03 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED 2025-05-26 04:53:03.987521 | orchestrator | 2025-05-26 04:53:03 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:53:07.040812 | orchestrator | 2025-05-26 04:53:07 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:53:07.042382 | orchestrator | 2025-05-26 04:53:07 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:53:07.043006 | orchestrator | 2025-05-26 04:53:07 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:53:07.044756 | orchestrator | 2025-05-26 04:53:07 | INFO  | Task 8fdb143f-ac96-4162-bee6-5061f699f654 is in state SUCCESS 2025-05-26 04:53:07.045992 | orchestrator | 2025-05-26 04:53:07 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED 2025-05-26 04:53:07.046071 | orchestrator | 2025-05-26 04:53:07 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:53:10.092036 | orchestrator | 2025-05-26 04:53:10 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:53:10.093542 | orchestrator | 2025-05-26 04:53:10 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:53:10.094742 | orchestrator | 2025-05-26 04:53:10 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:53:10.096361 | orchestrator | 2025-05-26 04:53:10 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED 
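The repeated `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines above come from a poll-until-done loop on the manager: each submitted task is re-checked once per interval until it reports SUCCESS. A minimal sketch of that pattern (the function name and the plain `fetch_state` callback are illustrative, not the actual osism client API):

```python
import time

def wait_for_tasks(fetch_state, task_ids, interval=1.0, timeout=300.0):
    """Poll task states until every task reports SUCCESS.

    fetch_state(task_id) -> str is assumed to return a state string such
    as "STARTED" or "SUCCESS"; the 1-second default interval mirrors the
    "Wait 1 second(s)" lines in the log above.
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        # sorted() copies the set, so discarding while iterating is safe
        for task_id in sorted(pending):
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

Note the tasks finish independently and in no fixed order, which is why the log shows individual tasks flipping to SUCCESS across different polling rounds.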
2025-05-26 04:53:10.096390 | orchestrator | 2025-05-26 04:53:10 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:53:13.145882 | orchestrator | 2025-05-26 04:53:13 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:53:13.147386 | orchestrator | 2025-05-26 04:53:13 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:53:13.149211 | orchestrator | 2025-05-26 04:53:13 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:53:13.151171 | orchestrator | 2025-05-26 04:53:13 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED 2025-05-26 04:53:13.151201 | orchestrator | 2025-05-26 04:53:13 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:53:16.198105 | orchestrator | 2025-05-26 04:53:16 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:53:16.198727 | orchestrator | 2025-05-26 04:53:16 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:53:16.199215 | orchestrator | 2025-05-26 04:53:16 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:53:16.200139 | orchestrator | 2025-05-26 04:53:16 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED 2025-05-26 04:53:16.200174 | orchestrator | 2025-05-26 04:53:16 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:53:19.251685 | orchestrator | 2025-05-26 04:53:19 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:53:19.252656 | orchestrator | 2025-05-26 04:53:19 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:53:19.253694 | orchestrator | 2025-05-26 04:53:19 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:53:19.254609 | orchestrator | 2025-05-26 04:53:19 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED 2025-05-26 04:53:19.254647 | 
orchestrator | 2025-05-26 04:53:19 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:53:22.297897 | orchestrator | 2025-05-26 04:53:22 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:53:22.298346 | orchestrator | 2025-05-26 04:53:22 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:53:22.300487 | orchestrator | 2025-05-26 04:53:22 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:53:22.301107 | orchestrator | 2025-05-26 04:53:22 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED 2025-05-26 04:53:22.301129 | orchestrator | 2025-05-26 04:53:22 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:53:25.332134 | orchestrator | 2025-05-26 04:53:25 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:53:25.332245 | orchestrator | 2025-05-26 04:53:25 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:53:25.332886 | orchestrator | 2025-05-26 04:53:25 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:53:25.335976 | orchestrator | 2025-05-26 04:53:25 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED 2025-05-26 04:53:25.336002 | orchestrator | 2025-05-26 04:53:25 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:53:28.380891 | orchestrator | 2025-05-26 04:53:28 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:53:28.383204 | orchestrator | 2025-05-26 04:53:28 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:53:28.388396 | orchestrator | 2025-05-26 04:53:28 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:53:28.390795 | orchestrator | 2025-05-26 04:53:28 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED 2025-05-26 04:53:28.391179 | orchestrator | 2025-05-26 
04:53:28 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:53:31.443788 | orchestrator | 2025-05-26 04:53:31 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:53:31.443890 | orchestrator | 2025-05-26 04:53:31 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:53:31.444283 | orchestrator | 2025-05-26 04:53:31 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:53:31.445295 | orchestrator | 2025-05-26 04:53:31 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED 2025-05-26 04:53:31.445469 | orchestrator | 2025-05-26 04:53:31 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:53:34.480861 | orchestrator | 2025-05-26 04:53:34 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:53:34.480994 | orchestrator | 2025-05-26 04:53:34 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:53:34.481021 | orchestrator | 2025-05-26 04:53:34 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:53:34.481038 | orchestrator | 2025-05-26 04:53:34 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED 2025-05-26 04:53:34.481089 | orchestrator | 2025-05-26 04:53:34 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:53:37.517788 | orchestrator | 2025-05-26 04:53:37 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:53:37.519927 | orchestrator | 2025-05-26 04:53:37 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:53:37.521871 | orchestrator | 2025-05-26 04:53:37 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state STARTED 2025-05-26 04:53:37.523951 | orchestrator | 2025-05-26 04:53:37 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED 2025-05-26 04:53:37.524027 | orchestrator | 2025-05-26 04:53:37 | INFO  | Wait 1 
second(s) until the next check 2025-05-26 04:53:40.567553 | orchestrator | 2025-05-26 04:53:40 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:53:40.568154 | orchestrator | 2025-05-26 04:53:40 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:53:40.569471 | orchestrator | 2025-05-26 04:53:40 | INFO  | Task d54b9213-6e32-4a10-aee0-fcb1a7d7bd91 is in state SUCCESS 2025-05-26 04:53:40.569816 | orchestrator | 2025-05-26 04:53:40.569843 | orchestrator | 2025-05-26 04:53:40.569976 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-05-26 04:53:40.569990 | orchestrator | 2025-05-26 04:53:40.570002 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-05-26 04:53:40.570014 | orchestrator | Monday 26 May 2025 04:52:58 +0000 (0:00:00.280) 0:00:00.280 ************ 2025-05-26 04:53:40.570128 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-05-26 04:53:40.570146 | orchestrator | 2025-05-26 04:53:40.570164 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-05-26 04:53:40.570181 | orchestrator | Monday 26 May 2025 04:52:59 +0000 (0:00:00.931) 0:00:01.211 ************ 2025-05-26 04:53:40.570200 | orchestrator | changed: [testbed-manager] 2025-05-26 04:53:40.570218 | orchestrator | 2025-05-26 04:53:40.570235 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-05-26 04:53:40.570253 | orchestrator | Monday 26 May 2025 04:53:01 +0000 (0:00:01.473) 0:00:02.684 ************ 2025-05-26 04:53:40.570404 | orchestrator | changed: [testbed-manager] 2025-05-26 04:53:40.570424 | orchestrator | 2025-05-26 04:53:40.570826 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-26 04:53:40.570850 | orchestrator | testbed-manager : ok=3  changed=2  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-26 04:53:40.570865 | orchestrator | 2025-05-26 04:53:40.570878 | orchestrator | 2025-05-26 04:53:40.570891 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-26 04:53:40.570903 | orchestrator | Monday 26 May 2025 04:53:01 +0000 (0:00:00.520) 0:00:03.204 ************ 2025-05-26 04:53:40.570916 | orchestrator | =============================================================================== 2025-05-26 04:53:40.570928 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.47s 2025-05-26 04:53:40.570940 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.93s 2025-05-26 04:53:40.570953 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.52s 2025-05-26 04:53:40.570966 | orchestrator | 2025-05-26 04:53:40.570977 | orchestrator | 2025-05-26 04:53:40.570988 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-05-26 04:53:40.570999 | orchestrator | 2025-05-26 04:53:40.571010 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-05-26 04:53:40.571027 | orchestrator | Monday 26 May 2025 04:52:59 +0000 (0:00:00.189) 0:00:00.189 ************ 2025-05-26 04:53:40.571044 | orchestrator | ok: [testbed-manager] 2025-05-26 04:53:40.571064 | orchestrator | 2025-05-26 04:53:40.571080 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-05-26 04:53:40.571134 | orchestrator | Monday 26 May 2025 04:52:59 +0000 (0:00:00.574) 0:00:00.764 ************ 2025-05-26 04:53:40.571155 | orchestrator | ok: [testbed-manager] 2025-05-26 04:53:40.571174 | orchestrator | 2025-05-26 04:53:40.571188 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-05-26 04:53:40.571199 | 
orchestrator | Monday 26 May 2025 04:53:00 +0000 (0:00:00.823) 0:00:01.587 ************ 2025-05-26 04:53:40.571210 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-05-26 04:53:40.571373 | orchestrator | 2025-05-26 04:53:40.571392 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-05-26 04:53:40.571403 | orchestrator | Monday 26 May 2025 04:53:01 +0000 (0:00:00.720) 0:00:02.308 ************ 2025-05-26 04:53:40.571414 | orchestrator | changed: [testbed-manager] 2025-05-26 04:53:40.571424 | orchestrator | 2025-05-26 04:53:40.571435 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-05-26 04:53:40.571446 | orchestrator | Monday 26 May 2025 04:53:02 +0000 (0:00:01.232) 0:00:03.540 ************ 2025-05-26 04:53:40.571493 | orchestrator | changed: [testbed-manager] 2025-05-26 04:53:40.571505 | orchestrator | 2025-05-26 04:53:40.571516 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-05-26 04:53:40.571526 | orchestrator | Monday 26 May 2025 04:53:03 +0000 (0:00:00.744) 0:00:04.285 ************ 2025-05-26 04:53:40.571537 | orchestrator | changed: [testbed-manager -> localhost] 2025-05-26 04:53:40.571552 | orchestrator | 2025-05-26 04:53:40.571570 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-05-26 04:53:40.571588 | orchestrator | Monday 26 May 2025 04:53:04 +0000 (0:00:01.579) 0:00:05.865 ************ 2025-05-26 04:53:40.571606 | orchestrator | changed: [testbed-manager -> localhost] 2025-05-26 04:53:40.571624 | orchestrator | 2025-05-26 04:53:40.571642 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-05-26 04:53:40.571659 | orchestrator | Monday 26 May 2025 04:53:05 +0000 (0:00:00.902) 0:00:06.767 ************ 2025-05-26 04:53:40.571678 | orchestrator | ok: [testbed-manager] 
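The two "Change server address in the kubeconfig" tasks above exist because k3s writes `server: https://127.0.0.1:6443` into the kubeconfig on the node itself; after the file is copied to the manager it must point at an address the manager can actually reach. A hedged sketch of that rewrite (a plain regex substitution; the playbook's actual implementation and the exact target address are not shown in the log):

```python
import re

def set_kubeconfig_server(kubeconfig_text, address, port=6443):
    """Rewrite the kubeconfig's server entry to a reachable address.

    Replaces whatever https endpoint the cluster section currently
    names (k3s defaults to https://127.0.0.1:6443) with the given
    address and port. Illustrative only -- the real play may use a
    lineinfile/replace task or `kubectl config set-cluster` instead.
    """
    return re.sub(
        r"server: https://\S+",
        f"server: https://{address}:{port}",
        kubeconfig_text,
    )
```

The log's `testbed-node-0(192.168.16.10)` delegation suggests the first control-plane node as the source of the file; which address the manager-service copy is pointed at is not visible here.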
2025-05-26 04:53:40.571696 | orchestrator |
2025-05-26 04:53:40.571713 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-05-26 04:53:40.571733 | orchestrator | Monday 26 May 2025 04:53:06 +0000 (0:00:00.400) 0:00:07.168 ************
2025-05-26 04:53:40.571752 | orchestrator | ok: [testbed-manager]
2025-05-26 04:53:40.571770 | orchestrator |
2025-05-26 04:53:40.571788 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 04:53:40.571826 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-26 04:53:40.571847 | orchestrator |
2025-05-26 04:53:40.571866 | orchestrator |
2025-05-26 04:53:40.571885 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 04:53:40.571898 | orchestrator | Monday 26 May 2025 04:53:06 +0000 (0:00:00.310) 0:00:07.478 ************
2025-05-26 04:53:40.571909 | orchestrator | ===============================================================================
2025-05-26 04:53:40.571920 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.58s
2025-05-26 04:53:40.571931 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.23s
2025-05-26 04:53:40.571941 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.90s
2025-05-26 04:53:40.571967 | orchestrator | Create .kube directory -------------------------------------------------- 0.82s
2025-05-26 04:53:40.571978 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.74s
2025-05-26 04:53:40.571989 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.72s
2025-05-26 04:53:40.572000 | orchestrator | Get home directory of operator user ------------------------------------- 0.57s
2025-05-26 04:53:40.572011 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.40s
2025-05-26 04:53:40.572024 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.31s
2025-05-26 04:53:40.572049 | orchestrator |
2025-05-26 04:53:40.572062 | orchestrator |
2025-05-26 04:53:40.572074 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-05-26 04:53:40.572086 | orchestrator |
2025-05-26 04:53:40.572098 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-05-26 04:53:40.572110 | orchestrator | Monday 26 May 2025 04:51:25 +0000 (0:00:00.406) 0:00:00.406 ************
2025-05-26 04:53:40.572122 | orchestrator | ok: [localhost] => {
2025-05-26 04:53:40.572135 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-05-26 04:53:40.572149 | orchestrator | }
2025-05-26 04:53:40.572161 | orchestrator |
2025-05-26 04:53:40.572171 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-05-26 04:53:40.572182 | orchestrator | Monday 26 May 2025 04:51:25 +0000 (0:00:00.062) 0:00:00.469 ************
2025-05-26 04:53:40.572194 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-05-26 04:53:40.572206 | orchestrator | ...ignoring
2025-05-26 04:53:40.572218 | orchestrator |
2025-05-26 04:53:40.572229 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-05-26 04:53:40.572240 | orchestrator | Monday 26 May 2025 04:51:29 +0000 (0:00:03.563) 0:00:04.032 ************
2025-05-26 04:53:40.572250 | orchestrator | skipping: [localhost]
2025-05-26 04:53:40.572261 | orchestrator |
2025-05-26 04:53:40.572271 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-05-26 04:53:40.572282 | orchestrator | Monday 26 May 2025 04:51:29 +0000 (0:00:00.048) 0:00:04.081 ************
2025-05-26 04:53:40.572293 | orchestrator | ok: [localhost]
2025-05-26 04:53:40.572303 | orchestrator |
2025-05-26 04:53:40.572314 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-26 04:53:40.572325 | orchestrator |
2025-05-26 04:53:40.572336 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-26 04:53:40.572346 | orchestrator | Monday 26 May 2025 04:51:29 +0000 (0:00:00.226) 0:00:04.307 ************
2025-05-26 04:53:40.572357 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:53:40.572368 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:53:40.572378 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:53:40.572389 | orchestrator |
2025-05-26 04:53:40.572400 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-26 04:53:40.572410 | orchestrator | Monday 26 May 2025 04:51:30 +0000 (0:00:00.658) 0:00:04.965 ************
2025-05-26 04:53:40.572421 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-05-26 04:53:40.572432 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-05-26 04:53:40.572443 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-05-26 04:53:40.572476 | orchestrator |
2025-05-26 04:53:40.572488 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-05-26 04:53:40.572499 | orchestrator |
2025-05-26 04:53:40.572510 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-26 04:53:40.572521 | orchestrator | Monday 26 May 2025 04:51:31 +0000 (0:00:00.970) 0:00:05.936 ************
2025-05-26 04:53:40.572531 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 04:53:40.572542 | orchestrator |
2025-05-26 04:53:40.572553 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-26 04:53:40.572564 | orchestrator | Monday 26 May 2025 04:51:31 +0000 (0:00:00.580) 0:00:06.516 ************
2025-05-26 04:53:40.572575 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:53:40.572585 | orchestrator |
2025-05-26 04:53:40.572596 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-05-26 04:53:40.572607 | orchestrator | Monday 26 May 2025 04:51:32 +0000 (0:00:00.904) 0:00:07.421 ************
2025-05-26 04:53:40.572618 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:53:40.572636 | orchestrator |
2025-05-26 04:53:40.572647 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-05-26 04:53:40.572658 | orchestrator | Monday 26 May 2025 04:51:33 +0000 (0:00:00.329) 0:00:07.750 ************
2025-05-26 04:53:40.572668 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:53:40.572679 | orchestrator |
2025-05-26 04:53:40.572690 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-05-26 04:53:40.572700 | orchestrator | Monday 26 May 2025 04:51:33 +0000 (0:00:00.319) 0:00:08.070 ************
2025-05-26 04:53:40.572711 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:53:40.572721 | orchestrator |
2025-05-26 04:53:40.572732 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-05-26 04:53:40.572748 | orchestrator | Monday 26 May 2025 04:51:33 +0000 (0:00:00.322) 0:00:08.393 ************
2025-05-26 04:53:40.572759 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:53:40.572770 | orchestrator |
2025-05-26 04:53:40.572781 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-26 04:53:40.572792 | orchestrator | Monday 26 May 2025 04:51:34 +0000 (0:00:00.870) 0:00:09.263 ************
2025-05-26 04:53:40.572803 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 04:53:40.572813 | orchestrator |
2025-05-26 04:53:40.572824 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-26 04:53:40.572845 | orchestrator | Monday 26 May 2025 04:51:35 +0000 (0:00:00.643) 0:00:09.907 ************
2025-05-26 04:53:40.572861 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:53:40.572880 | orchestrator |
2025-05-26 04:53:40.572898 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-05-26 04:53:40.572916 | orchestrator | Monday 26 May 2025 04:51:36 +0000 (0:00:00.885) 0:00:10.793 ************
2025-05-26 04:53:40.572934 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:53:40.572952 | orchestrator |
2025-05-26 04:53:40.572970 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-05-26 04:53:40.572988 | orchestrator | Monday 26 May 2025 04:51:36 +0000 (0:00:00.407) 0:00:11.200 ************
2025-05-26 04:53:40.573005 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:53:40.573023 | orchestrator |
2025-05-26 04:53:40.573040 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-05-26 04:53:40.573060 | orchestrator | Monday 26 May 2025 04:51:36 +0000 (0:00:00.394) 0:00:11.595 ************
2025-05-26 04:53:40.573085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-26 04:53:40.573114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-26 04:53:40.573158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-26 04:53:40.573174 | orchestrator |
2025-05-26 04:53:40.573185 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-05-26 04:53:40.573196 | orchestrator | Monday 26 May 2025 04:51:38 +0000 (0:00:01.218) 0:00:12.813 ************
2025-05-26 04:53:40.573220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-26 04:53:40.573233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-26 04:53:40.573253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-26 04:53:40.573264 | orchestrator |
2025-05-26 04:53:40.573275 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-05-26 04:53:40.573286 | orchestrator | Monday 26 May 2025 04:51:39 +0000 (0:00:01.614) 0:00:14.427 ************
2025-05-26 04:53:40.573297 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-26 04:53:40.573308 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-26 04:53:40.573319 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-26 04:53:40.573330 | orchestrator |
2025-05-26 04:53:40.573341 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-05-26 04:53:40.573352 | orchestrator | Monday 26 May 2025 04:51:41 +0000 (0:00:02.178) 0:00:16.606 ************
2025-05-26 04:53:40.573363 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-26 04:53:40.573374 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-26 04:53:40.573384 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-26 04:53:40.573395 | orchestrator |
2025-05-26 04:53:40.573406 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-05-26 04:53:40.573423 | orchestrator | Monday 26 May 2025 04:51:44 +0000 (0:00:02.559) 0:00:19.166 ************
2025-05-26 04:53:40.573434 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-26 04:53:40.573444 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-26 04:53:40.573488 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-26 04:53:40.573499 | orchestrator |
2025-05-26 04:53:40.573510 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-05-26 04:53:40.573521 | orchestrator | Monday 26 May 2025 04:51:46 +0000 (0:00:01.785) 0:00:20.952 ************
2025-05-26 04:53:40.573532 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-26 04:53:40.573542 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-26 04:53:40.573553 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-26 04:53:40.573564 | orchestrator |
2025-05-26 04:53:40.573575 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-05-26 04:53:40.573586 | orchestrator | Monday 26 May 2025 04:51:48 +0000 (0:00:01.664) 0:00:22.617 ************
2025-05-26 04:53:40.573596 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-26 04:53:40.573607 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-26 04:53:40.573625 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-26 04:53:40.573636 | orchestrator |
2025-05-26 04:53:40.573646 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-05-26 04:53:40.573657 | orchestrator | Monday 26 May 2025 04:51:49 +0000 (0:00:01.969) 0:00:24.586 ************
2025-05-26 04:53:40.573668 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-26 04:53:40.573679 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-26 04:53:40.573690 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-26 04:53:40.573700 | orchestrator |
2025-05-26 04:53:40.573712 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-26 04:53:40.573723 | orchestrator | Monday 26 May 2025 04:51:51 +0000 (0:00:01.313) 0:00:25.899 ************
2025-05-26 04:53:40.573733 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:53:40.573744 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:53:40.573755 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:53:40.573766 | orchestrator |
2025-05-26 04:53:40.573777 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-05-26 04:53:40.573787 | orchestrator | Monday 26 May 2025 04:51:51 +0000 (0:00:00.470) 0:00:26.370 ************
2025-05-26 04:53:40.573879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-26 04:53:40.573920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-26 04:53:40.573933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-26 04:53:40.573952 | orchestrator |
2025-05-26 04:53:40.573963 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-05-26 04:53:40.573975 | orchestrator | Monday 26 May 2025 04:51:53 +0000 (0:00:02.000) 0:00:28.370 ************
2025-05-26 04:53:40.573985 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:53:40.573996 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:53:40.574006 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:53:40.574093 | orchestrator |
2025-05-26 04:53:40.574109 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-05-26 04:53:40.574120 | orchestrator | Monday 26 May 2025 04:51:54 +0000 (0:00:00.964) 0:00:29.335 ************
2025-05-26 04:53:40.574130 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:53:40.574141 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:53:40.574152 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:53:40.574163 | orchestrator |
2025-05-26 04:53:40.574174 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-05-26 04:53:40.574184 | orchestrator | Monday 26 May 2025 04:52:03 +0000 (0:00:08.289) 0:00:37.625 ************
2025-05-26 04:53:40.574195 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:53:40.574206 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:53:40.574216 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:53:40.574227 | orchestrator |
2025-05-26 04:53:40.574238 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-26 04:53:40.574249 | orchestrator |
2025-05-26 04:53:40.574260 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-26 04:53:40.574270 | orchestrator | Monday 26 May 2025 04:52:03 +0000 (0:00:00.642) 0:00:38.268 ************
2025-05-26 04:53:40.574281 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:53:40.574292 | orchestrator |
2025-05-26 04:53:40.574303 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-26 04:53:40.574313 | orchestrator | Monday 26 May 2025 04:52:04 +0000 (0:00:00.700) 0:00:38.969 ************
2025-05-26 04:53:40.574324 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:53:40.574335 | orchestrator |
2025-05-26 04:53:40.574345 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-26 04:53:40.574356 | orchestrator | Monday 26 May 2025 04:52:04 +0000 (0:00:00.158) 0:00:39.128 ************
2025-05-26 04:53:40.574367 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:53:40.574378 | orchestrator |
2025-05-26 04:53:40.574388 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-26 04:53:40.574399 | orchestrator | Monday 26 May 2025 04:52:11 +0000 (0:00:06.749) 0:00:45.877 ************
2025-05-26 04:53:40.574410 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:53:40.574421 | orchestrator |
2025-05-26 04:53:40.574431 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-26 04:53:40.574442 | orchestrator |
2025-05-26 04:53:40.574470 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-26 04:53:40.574481 | orchestrator | Monday 26 May 2025 04:53:01 +0000 (0:00:50.167) 0:01:36.045 ************
2025-05-26 04:53:40.574492 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:53:40.574503 | orchestrator |
2025-05-26 04:53:40.574513 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-26 04:53:40.574524 | orchestrator | Monday 26 May 2025 04:53:02 +0000 (0:00:00.618) 0:01:36.663 ************
2025-05-26 04:53:40.574543 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:53:40.574553 | orchestrator |
2025-05-26 04:53:40.574564 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-26 04:53:40.574575 | orchestrator | Monday 26 May 2025 04:53:02 +0000 (0:00:00.467) 0:01:37.131 ************
2025-05-26 04:53:40.574585 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:53:40.574596 | orchestrator |
2025-05-26 04:53:40.574613 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-26 04:53:40.574624 | orchestrator | Monday 26 May 2025 04:53:04 +0000 (0:00:01.885) 0:01:39.016 ************
2025-05-26 04:53:40.574634 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:53:40.574645 | orchestrator |
2025-05-26 04:53:40.574656 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-26 04:53:40.574667 | orchestrator |
2025-05-26 04:53:40.574678 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-26 04:53:40.574688 | orchestrator | Monday 26 May 2025 04:53:18 +0000 (0:00:14.224) 0:01:53.241 ************
2025-05-26 04:53:40.574699 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:53:40.574710 | orchestrator |
2025-05-26 04:53:40.574729 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-26 04:53:40.574740 | orchestrator | Monday 26 May 2025 04:53:19 +0000 (0:00:00.721) 0:01:53.962 ************
2025-05-26 04:53:40.574751 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:53:40.574762 | orchestrator |
2025-05-26 04:53:40.574772 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-26 04:53:40.574783 | orchestrator | Monday 26 May 2025 04:53:19 +0000 (0:00:00.367) 0:01:54.329 ************
2025-05-26 04:53:40.574794 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:53:40.574804 | orchestrator |
2025-05-26 04:53:40.574815 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-26 04:53:40.574826 | orchestrator | Monday 26 May 2025 04:53:21 +0000 (0:00:02.267) 0:01:56.596 ************
2025-05-26 04:53:40.574837 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:53:40.574847 | orchestrator |
2025-05-26 04:53:40.574858 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-05-26 04:53:40.574869 | orchestrator |
2025-05-26 04:53:40.574880 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-05-26 04:53:40.574891 | orchestrator | Monday 26 May 2025 04:53:36 +0000 (0:00:14.655) 0:02:11.252 ************
2025-05-26 04:53:40.574902 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 04:53:40.574912 | orchestrator |
2025-05-26 04:53:40.574923 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-05-26 04:53:40.574934 | orchestrator | Monday 26 May 2025 04:53:37 +0000 (0:00:00.553) 0:02:11.805 ************
2025-05-26 04:53:40.574945 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-26 04:53:40.574955 | orchestrator | enable_outward_rabbitmq_True
2025-05-26 04:53:40.574966 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-26 04:53:40.574976 | orchestrator | outward_rabbitmq_restart
2025-05-26 04:53:40.574987 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:53:40.574998 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:53:40.575009 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:53:40.575019 | orchestrator |
2025-05-26 04:53:40.575030 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-05-26 04:53:40.575041 | orchestrator | skipping: no hosts matched
2025-05-26 04:53:40.575051 | orchestrator |
2025-05-26 04:53:40.575062 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-05-26 04:53:40.575073 | orchestrator | skipping: no hosts matched
2025-05-26 04:53:40.575084 | orchestrator |
2025-05-26 04:53:40.575095 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-05-26 04:53:40.575105 | orchestrator | skipping: no hosts matched
2025-05-26 04:53:40.575116 | orchestrator |
2025-05-26 04:53:40.575134 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 04:53:40.575146 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-05-26 04:53:40.575157 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-26 04:53:40.575168 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 04:53:40.575179 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-26 04:53:40.575190 | orchestrator |
2025-05-26 04:53:40.575200 | orchestrator |
2025-05-26 04:53:40.575211 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 04:53:40.575222 | orchestrator | Monday 26 May 2025 04:53:39 +0000 (0:00:02.486) 0:02:14.292 ************
2025-05-26 04:53:40.575233 | orchestrator | ===============================================================================
2025-05-26 04:53:40.575244 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.05s
2025-05-26 04:53:40.575254 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.90s
2025-05-26 04:53:40.575265 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.29s
2025-05-26 04:53:40.575275 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.56s
2025-05-26 04:53:40.575286 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.56s
2025-05-26 04:53:40.575297 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.49s
2025-05-26 04:53:40.575307 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.18s
2025-05-26 04:53:40.575324 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.04s
2025-05-26 04:53:40.575344 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.00s
2025-05-26 04:53:40.575356 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.97s
2025-05-26 04:53:40.575366 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.79s
2025-05-26 04:53:40.575382 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.66s
2025-05-26 04:53:40.575393 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.61s
2025-05-26 04:53:40.575403 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.31s
2025-05-26 04:53:40.575414 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.22s
2025-05-26 04:53:40.575425 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.99s
2025-05-26 04:53:40.575435 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.97s
2025-05-26 04:53:40.575467 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.96s
2025-05-26 04:53:40.575479 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.90s
2025-05-26 04:53:40.575489 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.89s
2025-05-26 04:53:40.575501 | orchestrator | 2025-05-26 04:53:40 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED
2025-05-26 04:53:40.575511 | orchestrator | 2025-05-26 04:53:40 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:53:43.615134 | orchestrator | 2025-05-26 04:53:43 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED
2025-05-26 04:53:43.615261 | orchestrator | 2025-05-26 04:53:43 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:53:43.618954 | orchestrator | 2025-05-26 04:53:43 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED
2025-05-26 04:53:43.619025 | orchestrator | 2025-05-26 04:53:43 | INFO  | Wait 1 second(s) until the next check
2025-05-26 04:53:46.669770 | orchestrator | 2025-05-26 04:53:46 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED
2025-05-26 04:53:46.669921 | orchestrator | 2025-05-26 04:53:46 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:53:46.672122 | orchestrator | 2025-05-26 04:53:46 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state STARTED
2025-05-26 04:53:46.672240 | orchestrator | 2025-05-26 04:53:46 | INFO  | Wait 1 second(s) until the next check
[identical poll output for the same three tasks repeats every ~3 seconds from 04:53:49 through 04:54:41; all three tasks remain in state STARTED]
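The repeated status checks above follow a plain poll-until-terminal-state pattern: query the task state, and sleep between checks until the state leaves STARTED. A minimal sketch of that pattern (the `get_task_state` helper below is a hypothetical stand-in for the OSISM API call, stubbed here so the loop terminates; the real client interface differs):

```python
import itertools
import time

# Hypothetical stand-in for the API call that reports a task's state;
# here it yields STARTED twice, then SUCCESS forever.
_states = itertools.chain(["STARTED", "STARTED"], itertools.repeat("SUCCESS"))

def get_task_state(task_id):
    return next(_states)

def wait_for_task(task_id, interval=1.0, timeout=60.0):
    """Poll a task until it leaves STARTED or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_task_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state != "STARTED":
            return state
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} still STARTED after {timeout}s")

result = wait_for_task("2243799f-0ad3-4e63-86c9-eaeb184a60c7", interval=0.01)
```

With the stub above, the loop prints two STARTED checks and then returns `"SUCCESS"`, mirroring the transition visible in the log at 04:54:44.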
2025-05-26 04:54:44.602606 | orchestrator | 2025-05-26 04:54:44 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED
2025-05-26 04:54:44.606093 | orchestrator | 2025-05-26 04:54:44 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 04:54:44.608128 | orchestrator | 2025-05-26 04:54:44 | INFO  | Task 2243799f-0ad3-4e63-86c9-eaeb184a60c7 is in state SUCCESS
2025-05-26 04:54:44.611966 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-26 04:54:44.611993 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-26 04:54:44.612004 | orchestrator | Monday 26 May 2025 04:52:15 +0000 (0:00:00.350) 0:00:00.350 ************
2025-05-26 04:54:44.612016 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:54:44.612028 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:54:44.612039 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:54:44.612051 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:54:44.612062 | orchestrator | ok: [testbed-node-4]
2025-05-26 04:54:44.612073 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:54:44.612115 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-26 04:54:44.612126 | orchestrator | Monday 26 May 2025 04:52:16 +0000 (0:00:01.034) 0:00:01.384 ************
2025-05-26 04:54:44.612138 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-05-26 04:54:44.612149 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-05-26 04:54:44.612161 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-05-26 04:54:44.612172 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-05-26 04:54:44.612183 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-05-26 04:54:44.612195 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-05-26 04:54:44.612217 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-05-26 04:54:44.612240 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-05-26 04:54:44.612251 | orchestrator | Monday 26 May 2025 04:52:17 +0000 (0:00:00.989) 0:00:02.374 ************
2025-05-26 04:54:44.612263 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-26 04:54:44.612287 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-05-26 04:54:44.612299 | orchestrator | Monday 26 May 2025 04:52:18 +0000 (0:00:01.446) 0:00:03.821 ************
2025-05-26 04:54:44.612320 | orchestrator | changed: [testbed-node-0 .. testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.612432 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-05-26 04:54:44.612432 | orchestrator | Monday 26 May 2025 04:52:20 +0000 (0:00:01.745) 0:00:05.567 ************
2025-05-26 04:54:44.612443 | orchestrator | changed: [testbed-node-0 .. testbed-node-5] => (item=ovn-controller service definition, as above)
2025-05-26 04:54:44.612551 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-05-26 04:54:44.612561 | orchestrator | Monday 26 May 2025 04:52:22 +0000 (0:00:01.727) 0:00:07.294 ************
2025-05-26 04:54:44.612572 | orchestrator | changed: [testbed-node-0 .. testbed-node-5] => (item=ovn-controller service definition, as above)
2025-05-26 04:54:44.612670 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-05-26 04:54:44.612680 | orchestrator | Monday 26 May 2025 04:52:23 +0000 (0:00:01.135) 0:00:08.430 ************
2025-05-26 04:54:44.612695 | orchestrator | changed: [testbed-node-0 .. testbed-node-5] => (item=ovn-controller service definition, as above)
2025-05-26 04:54:44.612782 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-05-26 04:54:44.612793 | orchestrator | Monday 26 May 2025 04:52:25 +0000 (0:00:02.122) 0:00:10.553 ************
2025-05-26 04:54:44.612804 | orchestrator | changed: [testbed-node-0 .. testbed-node-5] => (item=ovn-controller service definition, as above)
2025-05-26 04:54:44.612885 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-05-26 04:54:44.612901 | orchestrator | Monday 26 May 2025 04:52:27 +0000 (0:00:02.336) 0:00:12.889 ************
2025-05-26 04:54:44.612912 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:54:44.612923 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:54:44.612933 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:54:44.612944 | orchestrator | changed: [testbed-node-3]
2025-05-26 04:54:44.612954 | orchestrator | changed: [testbed-node-4]
2025-05-26 04:54:44.612965 | orchestrator | changed: [testbed-node-5]
2025-05-26 04:54:44.612986 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-05-26 04:54:44.612997 | orchestrator | Monday 26 May 2025 04:52:30 +0000 (0:00:02.703) 0:00:15.592 ************
2025-05-26 04:54:44.613007 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-05-26 04:54:44.613018 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-05-26 04:54:44.613028 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-05-26 04:54:44.613039 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-05-26 04:54:44.613049 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-05-26 04:54:44.613059 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-05-26 04:54:44.613080 | orchestrator | changed: [testbed-node-0 .. testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-26 04:54:44.613138 | orchestrator | changed: [testbed-node-0 .. testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-26 04:54:44.613203 | orchestrator | changed: [testbed-node-0 .. testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-26 04:54:44.613277 | orchestrator | changed: [testbed-node-0 .. testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-26 04:54:44.613340 | orchestrator | changed: [testbed-node-0 .. testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-26 04:54:44.613403 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-05-26 04:54:44.613414 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-05-26 04:54:44.613424 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-05-26 04:54:44.613435 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-05-26 04:54:44.613466 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-05-26 04:54:44.613485 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-05-26 04:54:44.613504 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-05-26 04:54:44.613518 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-05-26 04:54:44.613535 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-05-26 04:54:44.613547 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-05-26 04:54:44.613558 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-05-26 04:54:44.613568 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-05-26 04:54:44.613579 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-05-26 04:54:44.613589 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-05-26 04:54:44.613600 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-05-26 04:54:44.613610 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-05-26 04:54:44.613628 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-05-26 04:54:44.613639 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-05-26 04:54:44.613661 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-26 04:54:44.613671 | orchestrator | Monday 26 May 2025 04:52:48 +0000 (0:00:18.347) 0:00:33.940 ************
2025-05-26 04:54:44.613693 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-26 04:54:44.613703 | orchestrator | Monday 26 May 2025 04:52:49 +0000 (0:00:00.171) 0:00:34.111 ************
2025-05-26 04:54:44.613725 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-26 04:54:44.613735 | orchestrator | Monday 26 May 2025 04:52:49 +0000 (0:00:00.130) 0:00:34.242 ************
2025-05-26 04:54:44.613746 | orchestrator |
2025-05-26 04:54:44.613756 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-26 04:54:44.613767 | orchestrator | Monday 26 May 2025 04:52:49 +0000 (0:00:00.074) 0:00:34.316 ************
2025-05-26 04:54:44.613778 | orchestrator |
2025-05-26 04:54:44.613793 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-26 04:54:44.613804 | orchestrator | Monday 26 May 2025 04:52:49 +0000 (0:00:00.170) 0:00:34.487 ************
2025-05-26 04:54:44.613815 | orchestrator |
2025-05-26 04:54:44.613825 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-26 04:54:44.613836 | orchestrator | Monday 26 May 2025 04:52:49 +0000 (0:00:00.182) 0:00:34.669 ************
2025-05-26 04:54:44.613846 | orchestrator |
2025-05-26 04:54:44.613857 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-05-26 04:54:44.613868 | orchestrator | Monday 26 May 2025 04:52:49 +0000 (0:00:00.243) 0:00:34.912 ************
2025-05-26 04:54:44.613878 | orchestrator | ok: [testbed-node-3]
2025-05-26 04:54:44.613889 | orchestrator | ok: [testbed-node-4]
2025-05-26 04:54:44.613899 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:54:44.613910 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:54:44.613921 | orchestrator | ok: [testbed-node-5]
2025-05-26 04:54:44.613931 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:54:44.613941 | orchestrator |
2025-05-26 04:54:44.613952 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-05-26 04:54:44.613963 | orchestrator | Monday 26 May 2025 04:52:52 +0000 (0:00:02.354) 0:00:37.267 ************
2025-05-26 04:54:44.613974 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:54:44.613984 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:54:44.613995 | orchestrator | changed: [testbed-node-5]
2025-05-26 04:54:44.614005 | orchestrator | changed: [testbed-node-4]
2025-05-26 04:54:44.614082 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:54:44.614098 | orchestrator | changed: [testbed-node-3]
2025-05-26 04:54:44.614109 | orchestrator |
2025-05-26 04:54:44.614119 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-05-26 04:54:44.614130 | orchestrator |
2025-05-26 04:54:44.614141 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-05-26 04:54:44.614151 | orchestrator | Monday 26 May 2025 04:53:26 +0000 (0:00:34.239) 0:01:11.506 ************
2025-05-26 04:54:44.614162 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 04:54:44.614172 | orchestrator |
2025-05-26 04:54:44.614183 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-05-26 04:54:44.614193 | orchestrator | Monday 26 May 2025 04:53:26 +0000 (0:00:00.543) 0:01:12.050 ************
2025-05-26 04:54:44.614204 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 04:54:44.614214 | orchestrator |
2025-05-26 04:54:44.614225 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-05-26 04:54:44.614242 | orchestrator | Monday 26 May 2025 04:53:27 +0000 (0:00:00.692) 0:01:12.742 ************
2025-05-26 04:54:44.614253 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:54:44.614263 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:54:44.614274 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:54:44.614284 | orchestrator |
2025-05-26 04:54:44.614295 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-05-26 04:54:44.614306 | orchestrator | Monday 26 May 2025 04:53:28 +0000 (0:00:00.832) 0:01:13.575 ************
2025-05-26 04:54:44.614319 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:54:44.614338 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:54:44.614355 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:54:44.614372 | orchestrator |
2025-05-26 04:54:44.614402 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-05-26 04:54:44.614421 | orchestrator | Monday 26 May 2025 04:53:28 +0000 (0:00:00.373) 0:01:13.949 ************
2025-05-26 04:54:44.614442 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:54:44.614483 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:54:44.614503 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:54:44.614521 | orchestrator |
2025-05-26 04:54:44.614536 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-05-26 04:54:44.614547 | orchestrator | Monday 26 May 2025 04:53:29 +0000 (0:00:00.321) 0:01:14.270 ************
2025-05-26 04:54:44.614560 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:54:44.614579 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:54:44.614597 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:54:44.614616 | orchestrator |
2025-05-26 04:54:44.614635 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-05-26 04:54:44.614655 | orchestrator | Monday 26 May 2025 04:53:29 +0000 (0:00:00.516) 0:01:14.787 ************
2025-05-26 04:54:44.614675 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:54:44.614694 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:54:44.614712 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:54:44.614726 | orchestrator |
2025-05-26 04:54:44.614737 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-05-26 04:54:44.614748 | orchestrator | Monday 26 May 2025 04:53:30 +0000 (0:00:00.573) 0:01:15.360 ************
2025-05-26 04:54:44.614758 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.614769 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.614780 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.614790 | orchestrator |
2025-05-26 04:54:44.614801 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-05-26 04:54:44.614812 | orchestrator | Monday 26 May 2025 04:53:30 +0000 (0:00:00.528) 0:01:15.816 ************
2025-05-26 04:54:44.614822 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.614833 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.614843 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.614854 | orchestrator |
2025-05-26 04:54:44.614865 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-05-26 04:54:44.614875 | orchestrator | Monday 26 May 2025 04:53:31 +0000 (0:00:00.745) 0:01:16.344 ************
2025-05-26 04:54:44.614886 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.614897 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.614907 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.614918 | orchestrator |
2025-05-26 04:54:44.614928 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-05-26 04:54:44.614939 | orchestrator | Monday 26 May 2025 04:53:32 +0000 (0:00:00.745) 0:01:17.089 ************
2025-05-26 04:54:44.614949 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.614960 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.614970 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.614981 | orchestrator |
2025-05-26 04:54:44.615002 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-05-26 04:54:44.615013 | orchestrator | Monday 26 May 2025 04:53:32 +0000 (0:00:00.293) 0:01:17.383 ************
2025-05-26 04:54:44.615060 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.615071 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.615082 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.615092 | orchestrator |
2025-05-26 04:54:44.615103 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-05-26 04:54:44.615114 | orchestrator | Monday 26 May 2025 04:53:32 +0000 (0:00:00.325) 0:01:17.708 ************
2025-05-26 04:54:44.615124 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.615135 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.615146 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.615156 | orchestrator |
2025-05-26 04:54:44.615167 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-05-26 04:54:44.615178 | orchestrator | Monday 26 May 2025 04:53:32 +0000 (0:00:00.226) 0:01:17.935 ************
2025-05-26 04:54:44.615188 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.615199 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.615209 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.615220 | orchestrator |
2025-05-26 04:54:44.615231 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-05-26 04:54:44.615242 | orchestrator | Monday 26 May 2025 04:53:33 +0000 (0:00:00.392) 0:01:18.328 ************
2025-05-26 04:54:44.615252 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.615263 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.615273 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.615284 | orchestrator |
2025-05-26 04:54:44.615294 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-05-26 04:54:44.615305 | orchestrator | Monday 26 May 2025 04:53:33 +0000 (0:00:00.272) 0:01:18.600 ************
2025-05-26 04:54:44.615315 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.615326 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.615336 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.615347 | orchestrator |
2025-05-26 04:54:44.615358 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-05-26 04:54:44.615368 | orchestrator | Monday 26 May 2025 04:53:33 +0000 (0:00:00.251) 0:01:18.851 ************
2025-05-26 04:54:44.615379 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.615389 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.615400 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.615410 | orchestrator |
2025-05-26 04:54:44.615421 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-05-26 04:54:44.615432 | orchestrator | Monday 26 May 2025 04:53:34 +0000 (0:00:00.254) 0:01:19.106 ************
2025-05-26 04:54:44.615443 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.615485 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.615496 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.615507 | orchestrator |
2025-05-26 04:54:44.615518 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-05-26 04:54:44.615529 | orchestrator | Monday 26 May 2025 04:53:34 +0000 (0:00:00.412) 0:01:19.518 ************
2025-05-26 04:54:44.615540 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.615557 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.615588 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.615607 | orchestrator |
2025-05-26 04:54:44.615626 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-05-26 04:54:44.615645 | orchestrator | Monday 26 May 2025 04:53:34 +0000 (0:00:00.253) 0:01:19.771 ************
2025-05-26 04:54:44.615665 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 04:54:44.615686 | orchestrator |
2025-05-26 04:54:44.615706 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-05-26 04:54:44.615724 | orchestrator | Monday 26 May 2025 04:53:35 +0000 (0:00:00.471) 0:01:20.243 ************
2025-05-26 04:54:44.615743 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:54:44.615775 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:54:44.615796 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:54:44.615816 | orchestrator |
2025-05-26 04:54:44.615837 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-05-26 04:54:44.615855 | orchestrator | Monday 26 May 2025 04:53:35 +0000 (0:00:00.648) 0:01:20.891 ************
2025-05-26 04:54:44.615874 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:54:44.615885 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:54:44.615896 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:54:44.615974 | orchestrator |
2025-05-26 04:54:44.615986 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-05-26 04:54:44.615997 | orchestrator | Monday 26 May 2025 04:53:36 +0000 (0:00:00.377) 0:01:21.268 ************
2025-05-26 04:54:44.616008 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.616018 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.616029 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.616040 | orchestrator |
2025-05-26 04:54:44.616051 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-05-26 04:54:44.616062 | orchestrator | Monday 26 May 2025 04:53:36 +0000 (0:00:00.313) 0:01:21.582 ************
2025-05-26 04:54:44.616072 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.616083 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.616093 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.616104 | orchestrator |
2025-05-26 04:54:44.616115 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-05-26 04:54:44.616125 | orchestrator | Monday 26 May 2025 04:53:36 +0000 (0:00:00.283) 0:01:21.866 ************
2025-05-26 04:54:44.616136 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.616147 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.616157 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.616168 | orchestrator |
2025-05-26 04:54:44.616178 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-05-26 04:54:44.616189 | orchestrator | Monday 26 May 2025 04:53:37 +0000 (0:00:00.396) 0:01:22.262 ************
2025-05-26 04:54:44.616200 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.616216 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.616228 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.616238 | orchestrator |
2025-05-26 04:54:44.616249 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-05-26 04:54:44.616260 | orchestrator | Monday 26 May 2025 04:53:37 +0000 (0:00:00.290) 0:01:22.552 ************
2025-05-26 04:54:44.616270 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.616281 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.616292 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.616302 | orchestrator |
2025-05-26 04:54:44.616313 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-05-26 04:54:44.616324 | orchestrator | Monday 26 May 2025 04:53:37 +0000 (0:00:00.303) 0:01:22.856 ************
2025-05-26 04:54:44.616334 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.616345 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.616356 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.616366 | orchestrator |
2025-05-26 04:54:44.616377 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-05-26 04:54:44.616388 | orchestrator | Monday 26 May 2025 04:53:38 +0000 (0:00:00.321) 0:01:23.177 ************
2025-05-26 04:54:44.616399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616699 | orchestrator |
2025-05-26 04:54:44.616719 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-05-26 04:54:44.616740 | orchestrator | Monday 26 May 2025 04:53:39 +0000 (0:00:01.707) 0:01:24.885 ************
2025-05-26 04:54:44.616762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616885 | orchestrator |
2025-05-26 04:54:44.616895 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-05-26 04:54:44.616904 | orchestrator | Monday 26 May 2025 04:53:44 +0000 (0:00:04.306) 0:01:29.192 ************
2025-05-26 04:54:44.616914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.616996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.617006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.617016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.617025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-26 04:54:44.617035 | orchestrator |
2025-05-26 04:54:44.617045 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-26 04:54:44.617054 | orchestrator | Monday 26 May 2025 04:53:46 +0000 (0:00:02.187) 0:01:31.380 ************
2025-05-26 04:54:44.617064 | orchestrator |
2025-05-26 04:54:44.617073 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-26 04:54:44.617083 | orchestrator | Monday 26 May 2025 04:53:46 +0000 (0:00:00.068) 0:01:31.448 ************
2025-05-26 04:54:44.617092 | orchestrator |
2025-05-26 04:54:44.617101 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-26 04:54:44.617111 | orchestrator | Monday 26 May 2025 04:53:46 +0000 (0:00:00.066) 0:01:31.515 ************
2025-05-26 04:54:44.617120 | orchestrator |
2025-05-26 04:54:44.617129 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-05-26 04:54:44.617142 | orchestrator | Monday 26 May 2025 04:53:46 +0000 (0:00:00.068) 0:01:31.584 ************
2025-05-26 04:54:44.617157 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:54:44.617167 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:54:44.617176 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:54:44.617185 | orchestrator |
2025-05-26 04:54:44.617195 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-05-26 04:54:44.617204 | orchestrator | Monday 26 May 2025 04:53:53 +0000 (0:00:07.434) 0:01:39.018 ************
2025-05-26 04:54:44.617214 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:54:44.617223 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:54:44.617232 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:54:44.617241 | orchestrator |
2025-05-26 04:54:44.617251 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-05-26 04:54:44.617260 | orchestrator | Monday 26 May 2025 04:53:57 +0000 (0:00:03.256) 0:01:42.274 ************
2025-05-26 04:54:44.617269 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:54:44.617279 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:54:44.617288 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:54:44.617297 | orchestrator |
2025-05-26 04:54:44.617306 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-05-26 04:54:44.617316 | orchestrator | Monday 26 May 2025 04:54:04 +0000 (0:00:07.710) 0:01:49.985 ************
2025-05-26 04:54:44.617325 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:54:44.617334 | orchestrator |
2025-05-26 04:54:44.617344 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-05-26 04:54:44.617353 | orchestrator | Monday 26 May 2025 04:54:05 +0000 (0:00:00.136) 0:01:50.122 ************
2025-05-26 04:54:44.617362 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:54:44.617371 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:54:44.617381 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:54:44.617390 | orchestrator |
2025-05-26 04:54:44.617399 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-05-26 04:54:44.617409 | orchestrator | Monday 26 May 2025 04:54:06 +0000 (0:00:00.950) 0:01:51.072 ************
2025-05-26 04:54:44.617418 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.617427 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.617437 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:54:44.617471 | orchestrator |
2025-05-26 04:54:44.617482 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-05-26 04:54:44.617492 | orchestrator | Monday 26 May 2025 04:54:06 +0000 (0:00:00.923) 0:01:51.995 ************
2025-05-26 04:54:44.617501 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:54:44.617511 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:54:44.617520 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:54:44.617529 | orchestrator |
2025-05-26 04:54:44.617538 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-05-26 04:54:44.617548 | orchestrator | Monday 26 May 2025 04:54:07 +0000 (0:00:00.813) 0:01:52.809 ************
2025-05-26 04:54:44.617557 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:54:44.617567 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:54:44.617576 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:54:44.617585 | orchestrator |
2025-05-26 04:54:44.617595 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-05-26 04:54:44.617604 | orchestrator | Monday 26 May 2025 04:54:08 +0000 (0:00:00.633) 0:01:53.442 ************
2025-05-26 04:54:44.617614 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:54:44.617631 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:54:44.617646 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:54:44.617656 | orchestrator |
2025-05-26 04:54:44.617666 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-05-26 04:54:44.617675 | orchestrator | Monday 26 May 2025 04:54:09 +0000 (0:00:00.747) 0:01:54.190 ************
2025-05-26 04:54:44.617684 |
orchestrator | ok: [testbed-node-0] 2025-05-26 04:54:44.617694 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:54:44.617703 | orchestrator | ok: [testbed-node-2] 2025-05-26 04:54:44.617718 | orchestrator | 2025-05-26 04:54:44.617728 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-05-26 04:54:44.617737 | orchestrator | Monday 26 May 2025 04:54:10 +0000 (0:00:01.239) 0:01:55.430 ************ 2025-05-26 04:54:44.617747 | orchestrator | ok: [testbed-node-0] 2025-05-26 04:54:44.617756 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:54:44.617765 | orchestrator | ok: [testbed-node-2] 2025-05-26 04:54:44.617774 | orchestrator | 2025-05-26 04:54:44.617784 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-26 04:54:44.617794 | orchestrator | Monday 26 May 2025 04:54:10 +0000 (0:00:00.354) 0:01:55.784 ************ 2025-05-26 04:54:44.617803 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.617814 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.617824 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.617837 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.617849 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.617858 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.617868 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.617878 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': 
{'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.617893 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.617909 | orchestrator | 2025-05-26 04:54:44.617919 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-26 04:54:44.617928 | orchestrator | Monday 26 May 2025 04:54:12 +0000 (0:00:01.387) 0:01:57.172 ************ 2025-05-26 04:54:44.617938 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.617948 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.617958 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.617967 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.617981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.617991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.618001 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 
04:54:44.618011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.618051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.618070 | orchestrator | 2025-05-26 04:54:44.618080 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-26 04:54:44.618089 | orchestrator | Monday 26 May 2025 04:54:16 +0000 (0:00:04.079) 0:02:01.252 ************ 2025-05-26 04:54:44.618105 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.618115 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.618125 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.618134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.618144 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.618158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.618168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.618178 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.618188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-26 04:54:44.618202 | orchestrator | 2025-05-26 04:54:44.618212 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-26 04:54:44.618221 | orchestrator | Monday 26 May 2025 04:54:18 +0000 (0:00:02.719) 0:02:03.971 ************ 2025-05-26 04:54:44.618231 | orchestrator | 2025-05-26 04:54:44.618240 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-26 04:54:44.618250 | orchestrator | Monday 26 May 2025 04:54:18 +0000 (0:00:00.066) 0:02:04.038 ************ 2025-05-26 04:54:44.618259 | orchestrator | 2025-05-26 04:54:44.618269 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-26 04:54:44.618278 | orchestrator | Monday 26 May 2025 04:54:19 +0000 (0:00:00.088) 0:02:04.126 ************ 2025-05-26 04:54:44.618287 | orchestrator | 2025-05-26 04:54:44.618297 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db 
container] ************************* 2025-05-26 04:54:44.618306 | orchestrator | Monday 26 May 2025 04:54:19 +0000 (0:00:00.065) 0:02:04.191 ************ 2025-05-26 04:54:44.618315 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:54:44.618325 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:54:44.618334 | orchestrator | 2025-05-26 04:54:44.618348 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-26 04:54:44.618358 | orchestrator | Monday 26 May 2025 04:54:25 +0000 (0:00:06.121) 0:02:10.313 ************ 2025-05-26 04:54:44.618367 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:54:44.618377 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:54:44.618386 | orchestrator | 2025-05-26 04:54:44.618396 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-26 04:54:44.618406 | orchestrator | Monday 26 May 2025 04:54:31 +0000 (0:00:06.379) 0:02:16.693 ************ 2025-05-26 04:54:44.618415 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:54:44.618424 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:54:44.618433 | orchestrator | 2025-05-26 04:54:44.618443 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-26 04:54:44.618517 | orchestrator | Monday 26 May 2025 04:54:37 +0000 (0:00:06.214) 0:02:22.907 ************ 2025-05-26 04:54:44.618529 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:54:44.618538 | orchestrator | 2025-05-26 04:54:44.618548 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-26 04:54:44.618557 | orchestrator | Monday 26 May 2025 04:54:38 +0000 (0:00:00.152) 0:02:23.060 ************ 2025-05-26 04:54:44.618567 | orchestrator | ok: [testbed-node-0] 2025-05-26 04:54:44.618576 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:54:44.618585 | orchestrator | ok: [testbed-node-2] 2025-05-26 
04:54:44.618595 | orchestrator | 2025-05-26 04:54:44.618604 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-26 04:54:44.618613 | orchestrator | Monday 26 May 2025 04:54:39 +0000 (0:00:01.012) 0:02:24.072 ************ 2025-05-26 04:54:44.618623 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:54:44.618632 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:54:44.618641 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:54:44.618651 | orchestrator | 2025-05-26 04:54:44.618660 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-26 04:54:44.618670 | orchestrator | Monday 26 May 2025 04:54:39 +0000 (0:00:00.639) 0:02:24.712 ************ 2025-05-26 04:54:44.618679 | orchestrator | ok: [testbed-node-0] 2025-05-26 04:54:44.618688 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:54:44.618698 | orchestrator | ok: [testbed-node-2] 2025-05-26 04:54:44.618707 | orchestrator | 2025-05-26 04:54:44.618717 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-26 04:54:44.618726 | orchestrator | Monday 26 May 2025 04:54:40 +0000 (0:00:00.871) 0:02:25.584 ************ 2025-05-26 04:54:44.618735 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:54:44.618745 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:54:44.618764 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:54:44.618773 | orchestrator | 2025-05-26 04:54:44.618783 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-26 04:54:44.618792 | orchestrator | Monday 26 May 2025 04:54:41 +0000 (0:00:00.640) 0:02:26.224 ************ 2025-05-26 04:54:44.618806 | orchestrator | ok: [testbed-node-0] 2025-05-26 04:54:44.618816 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:54:44.618826 | orchestrator | ok: [testbed-node-2] 2025-05-26 04:54:44.618835 | orchestrator | 
2025-05-26 04:54:44.618845 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-26 04:54:44.618854 | orchestrator | Monday 26 May 2025 04:54:42 +0000 (0:00:01.118) 0:02:27.343 ************ 2025-05-26 04:54:44.618864 | orchestrator | ok: [testbed-node-0] 2025-05-26 04:54:44.618873 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:54:44.618882 | orchestrator | ok: [testbed-node-2] 2025-05-26 04:54:44.618891 | orchestrator | 2025-05-26 04:54:44.618901 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-26 04:54:44.618909 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-26 04:54:44.618917 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-26 04:54:44.618927 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-26 04:54:44.618941 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-26 04:54:44.618950 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-26 04:54:44.618958 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-26 04:54:44.618965 | orchestrator | 2025-05-26 04:54:44.618973 | orchestrator | 2025-05-26 04:54:44.618981 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-26 04:54:44.618989 | orchestrator | Monday 26 May 2025 04:54:43 +0000 (0:00:00.886) 0:02:28.229 ************ 2025-05-26 04:54:44.618996 | orchestrator | =============================================================================== 2025-05-26 04:54:44.619004 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 34.24s 2025-05-26 
04:54:44.619012 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.35s 2025-05-26 04:54:44.619019 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.92s 2025-05-26 04:54:44.619027 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.56s 2025-05-26 04:54:44.619035 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.64s 2025-05-26 04:54:44.619042 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.31s 2025-05-26 04:54:44.619050 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.08s 2025-05-26 04:54:44.619062 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.72s 2025-05-26 04:54:44.619070 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.70s 2025-05-26 04:54:44.619078 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.35s 2025-05-26 04:54:44.619086 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.34s 2025-05-26 04:54:44.619093 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.19s 2025-05-26 04:54:44.619101 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.12s 2025-05-26 04:54:44.619109 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.75s 2025-05-26 04:54:44.619122 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.73s 2025-05-26 04:54:44.619129 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.71s 2025-05-26 04:54:44.619137 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.45s 2025-05-26 04:54:44.619145 
| orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.39s 2025-05-26 04:54:44.619152 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.24s 2025-05-26 04:54:44.619160 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.14s 2025-05-26 04:54:44.619168 | orchestrator | 2025-05-26 04:54:44 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:54:47.680383 | orchestrator | 2025-05-26 04:54:47 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:54:47.684536 | orchestrator | 2025-05-26 04:54:47 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:54:47.684578 | orchestrator | 2025-05-26 04:54:47 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:54:50.737141 | orchestrator | 2025-05-26 04:54:50 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:54:50.737662 | orchestrator | 2025-05-26 04:54:50 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:54:50.737782 | orchestrator | 2025-05-26 04:54:50 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:54:53.782864 | orchestrator | 2025-05-26 04:54:53 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:54:53.787879 | orchestrator | 2025-05-26 04:54:53 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:54:53.787954 | orchestrator | 2025-05-26 04:54:53 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:54:56.829335 | orchestrator | 2025-05-26 04:54:56 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:54:56.829848 | orchestrator | 2025-05-26 04:54:56 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:54:56.829879 | orchestrator | 2025-05-26 04:54:56 | INFO  | Wait 1 second(s) until the next check 
2025-05-26 04:56:16.219628 | orchestrator | 2025-05-26 04:56:16 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:56:16.219741 | orchestrator | 2025-05-26 04:56:16 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:56:16.219762 | orchestrator | 2025-05-26 04:56:16 | INFO  | Wait 1 second(s)
until the next check 2025-05-26 04:56:19.291704 | orchestrator | 2025-05-26 04:56:19 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:56:19.292756 | orchestrator | 2025-05-26 04:56:19 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:56:19.293353 | orchestrator | 2025-05-26 04:56:19 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:56:22.339241 | orchestrator | 2025-05-26 04:56:22 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:56:22.340362 | orchestrator | 2025-05-26 04:56:22 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:56:22.341256 | orchestrator | 2025-05-26 04:56:22 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:56:25.401277 | orchestrator | 2025-05-26 04:56:25 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:56:25.402315 | orchestrator | 2025-05-26 04:56:25 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:56:25.402362 | orchestrator | 2025-05-26 04:56:25 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:56:28.459482 | orchestrator | 2025-05-26 04:56:28 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:56:28.459955 | orchestrator | 2025-05-26 04:56:28 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:56:28.459989 | orchestrator | 2025-05-26 04:56:28 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:56:31.499559 | orchestrator | 2025-05-26 04:56:31 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:56:31.500355 | orchestrator | 2025-05-26 04:56:31 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:56:31.502493 | orchestrator | 2025-05-26 04:56:31 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:56:34.553812 | orchestrator | 2025-05-26 
04:56:34 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:56:34.555346 | orchestrator | 2025-05-26 04:56:34 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:56:34.555387 | orchestrator | 2025-05-26 04:56:34 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:56:37.595630 | orchestrator | 2025-05-26 04:56:37 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:56:37.595764 | orchestrator | 2025-05-26 04:56:37 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:56:37.595779 | orchestrator | 2025-05-26 04:56:37 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:56:40.634932 | orchestrator | 2025-05-26 04:56:40 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:56:40.637365 | orchestrator | 2025-05-26 04:56:40 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:56:40.637409 | orchestrator | 2025-05-26 04:56:40 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:56:43.680696 | orchestrator | 2025-05-26 04:56:43 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:56:43.681926 | orchestrator | 2025-05-26 04:56:43 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:56:43.681986 | orchestrator | 2025-05-26 04:56:43 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:56:46.735566 | orchestrator | 2025-05-26 04:56:46 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:56:46.736158 | orchestrator | 2025-05-26 04:56:46 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:56:46.736192 | orchestrator | 2025-05-26 04:56:46 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:56:49.800162 | orchestrator | 2025-05-26 04:56:49 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state 
STARTED 2025-05-26 04:56:49.802585 | orchestrator | 2025-05-26 04:56:49 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:56:49.802628 | orchestrator | 2025-05-26 04:56:49 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:56:52.867633 | orchestrator | 2025-05-26 04:56:52 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:56:52.869325 | orchestrator | 2025-05-26 04:56:52 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:56:52.869368 | orchestrator | 2025-05-26 04:56:52 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:56:55.939199 | orchestrator | 2025-05-26 04:56:55 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:56:55.941706 | orchestrator | 2025-05-26 04:56:55 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:56:55.941756 | orchestrator | 2025-05-26 04:56:55 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:56:59.041221 | orchestrator | 2025-05-26 04:56:59 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:56:59.042866 | orchestrator | 2025-05-26 04:56:59 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:56:59.042924 | orchestrator | 2025-05-26 04:56:59 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:57:02.100631 | orchestrator | 2025-05-26 04:57:02 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:57:02.105398 | orchestrator | 2025-05-26 04:57:02 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:57:02.105438 | orchestrator | 2025-05-26 04:57:02 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:57:05.169974 | orchestrator | 2025-05-26 04:57:05 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:57:05.170194 | orchestrator | 2025-05-26 04:57:05 | INFO  
| Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:57:05.170730 | orchestrator | 2025-05-26 04:57:05 | INFO  | Task 76daabfd-572a-4002-ac8d-bea557c7f6a8 is in state STARTED 2025-05-26 04:57:05.170761 | orchestrator | 2025-05-26 04:57:05 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:57:08.215386 | orchestrator | 2025-05-26 04:57:08 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:57:08.215537 | orchestrator | 2025-05-26 04:57:08 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:57:08.219230 | orchestrator | 2025-05-26 04:57:08 | INFO  | Task 76daabfd-572a-4002-ac8d-bea557c7f6a8 is in state STARTED 2025-05-26 04:57:08.219264 | orchestrator | 2025-05-26 04:57:08 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:57:11.265866 | orchestrator | 2025-05-26 04:57:11 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:57:11.267802 | orchestrator | 2025-05-26 04:57:11 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:57:11.271870 | orchestrator | 2025-05-26 04:57:11 | INFO  | Task 76daabfd-572a-4002-ac8d-bea557c7f6a8 is in state STARTED 2025-05-26 04:57:11.271912 | orchestrator | 2025-05-26 04:57:11 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:57:14.320172 | orchestrator | 2025-05-26 04:57:14 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:57:14.322256 | orchestrator | 2025-05-26 04:57:14 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:57:14.322820 | orchestrator | 2025-05-26 04:57:14 | INFO  | Task 76daabfd-572a-4002-ac8d-bea557c7f6a8 is in state STARTED 2025-05-26 04:57:14.325428 | orchestrator | 2025-05-26 04:57:14 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:57:17.367512 | orchestrator | 2025-05-26 04:57:17 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state 
STARTED 2025-05-26 04:57:17.368142 | orchestrator | 2025-05-26 04:57:17 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:57:17.369307 | orchestrator | 2025-05-26 04:57:17 | INFO  | Task 76daabfd-572a-4002-ac8d-bea557c7f6a8 is in state STARTED 2025-05-26 04:57:17.369387 | orchestrator | 2025-05-26 04:57:17 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:57:20.435404 | orchestrator | 2025-05-26 04:57:20 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:57:20.435515 | orchestrator | 2025-05-26 04:57:20 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:57:20.436287 | orchestrator | 2025-05-26 04:57:20 | INFO  | Task 76daabfd-572a-4002-ac8d-bea557c7f6a8 is in state STARTED 2025-05-26 04:57:20.436328 | orchestrator | 2025-05-26 04:57:20 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:57:23.479117 | orchestrator | 2025-05-26 04:57:23 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:57:23.479607 | orchestrator | 2025-05-26 04:57:23 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:57:23.480470 | orchestrator | 2025-05-26 04:57:23 | INFO  | Task 76daabfd-572a-4002-ac8d-bea557c7f6a8 is in state SUCCESS 2025-05-26 04:57:23.480495 | orchestrator | 2025-05-26 04:57:23 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:57:26.539805 | orchestrator | 2025-05-26 04:57:26 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:57:26.543746 | orchestrator | 2025-05-26 04:57:26 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:57:26.543830 | orchestrator | 2025-05-26 04:57:26 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:57:29.595399 | orchestrator | 2025-05-26 04:57:29 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:57:29.596062 | orchestrator | 
2025-05-26 04:57:29 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:57:29.596322 | orchestrator | 2025-05-26 04:57:29 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:57:32.649793 | orchestrator | 2025-05-26 04:57:32 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state STARTED 2025-05-26 04:57:32.651546 | orchestrator | 2025-05-26 04:57:32 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED 2025-05-26 04:57:32.651699 | orchestrator | 2025-05-26 04:57:32 | INFO  | Wait 1 second(s) until the next check 2025-05-26 04:57:35.716171 | orchestrator | 2025-05-26 04:57:35 | INFO  | Task f3fb6195-10f2-4ffc-97d6-cddfa9db6521 is in state SUCCESS 2025-05-26 04:57:35.717957 | orchestrator | 2025-05-26 04:57:35.718120 | orchestrator | None 2025-05-26 04:57:35.718136 | orchestrator | 2025-05-26 04:57:35.718145 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-26 04:57:35.718154 | orchestrator | 2025-05-26 04:57:35.718161 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-26 04:57:35.718169 | orchestrator | Monday 26 May 2025 04:51:06 +0000 (0:00:00.530) 0:00:00.530 ************ 2025-05-26 04:57:35.718177 | orchestrator | ok: [testbed-node-0] 2025-05-26 04:57:35.718186 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:57:35.718193 | orchestrator | ok: [testbed-node-2] 2025-05-26 04:57:35.718201 | orchestrator | 2025-05-26 04:57:35.718208 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-26 04:57:35.718215 | orchestrator | Monday 26 May 2025 04:51:06 +0000 (0:00:00.298) 0:00:00.828 ************ 2025-05-26 04:57:35.718223 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-05-26 04:57:35.718231 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-05-26 04:57:35.718238 | orchestrator | 
ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-05-26 04:57:35.718245 | orchestrator | 2025-05-26 04:57:35.718252 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-05-26 04:57:35.718259 | orchestrator | 2025-05-26 04:57:35.718266 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-26 04:57:35.718273 | orchestrator | Monday 26 May 2025 04:51:07 +0000 (0:00:00.733) 0:00:01.562 ************ 2025-05-26 04:57:35.718280 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 04:57:35.718287 | orchestrator | 2025-05-26 04:57:35.718295 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-05-26 04:57:35.718302 | orchestrator | Monday 26 May 2025 04:51:08 +0000 (0:00:00.999) 0:00:02.561 ************ 2025-05-26 04:57:35.718309 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:57:35.718316 | orchestrator | ok: [testbed-node-0] 2025-05-26 04:57:35.718323 | orchestrator | ok: [testbed-node-2] 2025-05-26 04:57:35.718330 | orchestrator | 2025-05-26 04:57:35.718337 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-26 04:57:35.718344 | orchestrator | Monday 26 May 2025 04:51:09 +0000 (0:00:00.868) 0:00:03.430 ************ 2025-05-26 04:57:35.718353 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 04:57:35.718360 | orchestrator | 2025-05-26 04:57:35.718367 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-05-26 04:57:35.718374 | orchestrator | Monday 26 May 2025 04:51:11 +0000 (0:00:01.598) 0:00:05.028 ************ 2025-05-26 04:57:35.718381 | orchestrator | ok: [testbed-node-0] 2025-05-26 04:57:35.718388 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:57:35.718395 | orchestrator | 
ok: [testbed-node-2] 2025-05-26 04:57:35.718402 | orchestrator | 2025-05-26 04:57:35.718423 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-05-26 04:57:35.718431 | orchestrator | Monday 26 May 2025 04:51:11 +0000 (0:00:00.776) 0:00:05.805 ************ 2025-05-26 04:57:35.718438 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-26 04:57:35.718445 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-26 04:57:35.718471 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-26 04:57:35.718479 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-26 04:57:35.718788 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-26 04:57:35.718799 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-26 04:57:35.718808 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-26 04:57:35.718815 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-26 04:57:35.718822 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-26 04:57:35.718830 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-26 04:57:35.718837 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-26 04:57:35.718844 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-26 04:57:35.718851 | orchestrator | 2025-05-26 04:57:35.718858 | orchestrator | TASK [module-load : Load modules] 
********************************************** 2025-05-26 04:57:35.718865 | orchestrator | Monday 26 May 2025 04:51:16 +0000 (0:00:04.154) 0:00:09.959 ************ 2025-05-26 04:57:35.718873 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-26 04:57:35.718880 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-26 04:57:35.718887 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-26 04:57:35.718894 | orchestrator | 2025-05-26 04:57:35.718901 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-26 04:57:35.718909 | orchestrator | Monday 26 May 2025 04:51:16 +0000 (0:00:00.696) 0:00:10.655 ************ 2025-05-26 04:57:35.718916 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-26 04:57:35.718923 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-26 04:57:35.718930 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-26 04:57:35.718937 | orchestrator | 2025-05-26 04:57:35.718945 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-26 04:57:35.718952 | orchestrator | Monday 26 May 2025 04:51:18 +0000 (0:00:01.450) 0:00:12.105 ************ 2025-05-26 04:57:35.718959 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-05-26 04:57:35.718966 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.719048 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-05-26 04:57:35.719065 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.719076 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-05-26 04:57:35.719087 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.719097 | orchestrator | 2025-05-26 04:57:35.719108 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-05-26 04:57:35.719119 | orchestrator | Monday 26 May 2025 04:51:18 +0000 (0:00:00.755) 
0:00:12.860 ************ 2025-05-26 04:57:35.719217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-26 04:57:35.719236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-26 04:57:35.719269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 
'timeout': '30'}}}) 2025-05-26 04:57:35.719282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-26 04:57:35.719294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-26 04:57:35.719317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 
6032'], 'timeout': '30'}}}) 2025-05-26 04:57:35.719330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-26 04:57:35.719342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-26 04:57:35.719361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-26 04:57:35.719373 | orchestrator | 2025-05-26 04:57:35.719386 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-05-26 04:57:35.719399 | orchestrator | Monday 26 May 2025 04:51:22 +0000 (0:00:03.580) 0:00:16.441 ************ 2025-05-26 04:57:35.719413 | orchestrator | changed: 
[testbed-node-0] 2025-05-26 04:57:35.719432 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.719443 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.719455 | orchestrator | 2025-05-26 04:57:35.719473 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-05-26 04:57:35.719486 | orchestrator | Monday 26 May 2025 04:51:23 +0000 (0:00:01.157) 0:00:17.599 ************ 2025-05-26 04:57:35.719499 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-05-26 04:57:35.719512 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-05-26 04:57:35.719525 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-05-26 04:57:35.719538 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-05-26 04:57:35.719549 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-05-26 04:57:35.719558 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-05-26 04:57:35.719566 | orchestrator | 2025-05-26 04:57:35.719574 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-05-26 04:57:35.719582 | orchestrator | Monday 26 May 2025 04:51:27 +0000 (0:00:03.423) 0:00:21.022 ************ 2025-05-26 04:57:35.719877 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.719892 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.719899 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.719906 | orchestrator | 2025-05-26 04:57:35.719913 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-05-26 04:57:35.719920 | orchestrator | Monday 26 May 2025 04:51:28 +0000 (0:00:01.165) 0:00:22.187 ************ 2025-05-26 04:57:35.719927 | orchestrator | ok: [testbed-node-0] 2025-05-26 04:57:35.719935 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:57:35.719942 | orchestrator | ok: [testbed-node-2] 2025-05-26 04:57:35.719949 | orchestrator | 2025-05-26 
04:57:35.719957 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-05-26 04:57:35.719964 | orchestrator | Monday 26 May 2025 04:51:29 +0000 (0:00:01.203) 0:00:23.390 ************ 2025-05-26 04:57:35.719972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-26 04:57:35.720042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-26 04:57:35.720062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-26 04:57:35.720071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb82d29380ebb4f55cbc6d60263d7245339cc228', '__omit_place_holder__bb82d29380ebb4f55cbc6d60263d7245339cc228'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-26 04:57:35.720079 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.720092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-26 04:57:35.720100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-26 04:57:35.720108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-26 04:57:35.720115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-26 04:57:35.720143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-26 04:57:35.720157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb82d29380ebb4f55cbc6d60263d7245339cc228', '__omit_place_holder__bb82d29380ebb4f55cbc6d60263d7245339cc228'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-26 04:57:35.720164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-26 04:57:35.720172 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.720183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__bb82d29380ebb4f55cbc6d60263d7245339cc228', '__omit_place_holder__bb82d29380ebb4f55cbc6d60263d7245339cc228'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-26 04:57:35.720191 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.720198 | orchestrator | 2025-05-26 04:57:35.720205 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-26 04:57:35.720213 | orchestrator | Monday 26 May 2025 04:51:30 +0000 (0:00:01.020) 0:00:24.410 ************ 2025-05-26 04:57:35.720220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-26 04:57:35.720228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-26 
04:57:35.720259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-26 04:57:35.720268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-26 04:57:35.720311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-26 04:57:35.720320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 
'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb82d29380ebb4f55cbc6d60263d7245339cc228', '__omit_place_holder__bb82d29380ebb4f55cbc6d60263d7245339cc228'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-26 04:57:35.720328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-26 04:57:35.720336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-26 04:57:35.720344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 
'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb82d29380ebb4f55cbc6d60263d7245339cc228', '__omit_place_holder__bb82d29380ebb4f55cbc6d60263d7245339cc228'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-26 04:57:35.720405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-26 04:57:35.720416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-26 04:57:35.720423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 
'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb82d29380ebb4f55cbc6d60263d7245339cc228', '__omit_place_holder__bb82d29380ebb4f55cbc6d60263d7245339cc228'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-26 04:57:35.720431 | orchestrator | 2025-05-26 04:57:35.720438 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-05-26 04:57:35.720449 | orchestrator | Monday 26 May 2025 04:51:33 +0000 (0:00:03.255) 0:00:27.666 ************ 2025-05-26 04:57:35.720457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-26 04:57:35.720465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-26 04:57:35.720473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-26 04:57:35.722320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-26 04:57:35.722349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-26 04:57:35.722357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-26 04:57:35.722372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-26 04:57:35.722380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-26 04:57:35.722388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-26 04:57:35.722403 | orchestrator | 2025-05-26 04:57:35.722411 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-05-26 04:57:35.722420 | orchestrator | Monday 26 May 2025 04:51:37 +0000 (0:00:03.374) 0:00:31.040 ************ 2025-05-26 04:57:35.722427 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-26 04:57:35.722435 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-26 04:57:35.722443 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-26 04:57:35.722450 | orchestrator | 2025-05-26 04:57:35.722457 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-05-26 04:57:35.722464 | orchestrator | Monday 26 May 2025 04:51:39 +0000 (0:00:01.893) 0:00:32.933 ************ 2025-05-26 04:57:35.722472 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-26 04:57:35.722479 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-26 04:57:35.722503 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-26 04:57:35.722511 | orchestrator | 2025-05-26 04:57:35.722518 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 
2025-05-26 04:57:35.722525 | orchestrator | Monday 26 May 2025 04:51:43 +0000 (0:00:04.524) 0:00:37.458 ************ 2025-05-26 04:57:35.722533 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.722540 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.722547 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.722554 | orchestrator | 2025-05-26 04:57:35.722561 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-05-26 04:57:35.722568 | orchestrator | Monday 26 May 2025 04:51:45 +0000 (0:00:01.515) 0:00:38.973 ************ 2025-05-26 04:57:35.722576 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-26 04:57:35.722585 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-26 04:57:35.722592 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-26 04:57:35.722599 | orchestrator | 2025-05-26 04:57:35.722606 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-05-26 04:57:35.722613 | orchestrator | Monday 26 May 2025 04:51:47 +0000 (0:00:02.536) 0:00:41.510 ************ 2025-05-26 04:57:35.722620 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-26 04:57:35.722628 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-26 04:57:35.722635 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-26 04:57:35.722642 | orchestrator | 2025-05-26 04:57:35.722649 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] 
********************************* 2025-05-26 04:57:35.722656 | orchestrator | Monday 26 May 2025 04:51:50 +0000 (0:00:02.462) 0:00:43.972 ************ 2025-05-26 04:57:35.722663 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-05-26 04:57:35.722671 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-05-26 04:57:35.722678 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-05-26 04:57:35.722685 | orchestrator | 2025-05-26 04:57:35.722692 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-05-26 04:57:35.722702 | orchestrator | Monday 26 May 2025 04:51:51 +0000 (0:00:01.451) 0:00:45.424 ************ 2025-05-26 04:57:35.722715 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-05-26 04:57:35.722722 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-05-26 04:57:35.722730 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-05-26 04:57:35.722737 | orchestrator | 2025-05-26 04:57:35.722744 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-26 04:57:35.722751 | orchestrator | Monday 26 May 2025 04:51:53 +0000 (0:00:01.969) 0:00:47.393 ************ 2025-05-26 04:57:35.722758 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 04:57:35.722765 | orchestrator | 2025-05-26 04:57:35.722772 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-05-26 04:57:35.722780 | orchestrator | Monday 26 May 2025 04:51:54 +0000 (0:00:00.770) 0:00:48.163 ************ 2025-05-26 04:57:35.722787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-26 04:57:35.722795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-26 04:57:35.722818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-26 04:57:35.722827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-26 04:57:35.723153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-26 04:57:35.723180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-26 04:57:35.723189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-26 04:57:35.723198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-26 04:57:35.723206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-26 04:57:35.723215 | orchestrator | 2025-05-26 04:57:35.723223 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-26 04:57:35.723231 | orchestrator | Monday 26 May 2025 04:51:57 +0000 (0:00:03.340) 0:00:51.504 ************ 2025-05-26 04:57:35.723261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.723269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.723277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.723289 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.723300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.723308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.723315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.723323 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.723330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.723354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.723363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.723375 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.723382 | orchestrator |
2025-05-26 04:57:35.723389 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2025-05-26 04:57:35.723436 | orchestrator | Monday 26 May 2025 04:51:58 +0000 (0:00:00.678) 0:00:52.183 ************
2025-05-26 04:57:35.723449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.723457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.723465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.723472 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.723480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.723505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.723513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.723526 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.723534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.723545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.723553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.723561 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.723568 | orchestrator |
2025-05-26 04:57:35.723575 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-05-26 04:57:35.723583 | orchestrator | Monday 26 May 2025 04:51:59 +0000 (0:00:01.414) 0:00:53.597 ************
2025-05-26 04:57:35.723590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.723613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.723621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.723634 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.723641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.723901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.723917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.723925 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.723933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.723941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.724076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.724098 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.724106 | orchestrator |
2025-05-26 04:57:35.724113 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-05-26 04:57:35.724121 | orchestrator | Monday 26 May 2025 04:52:00 +0000 (0:00:00.690) 0:00:54.288 ************
2025-05-26 04:57:35.724128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.724136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.724148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.724156 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.724163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.724170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.724178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.724190 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.724239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.724248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.724256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.724263 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.724270 | orchestrator |
2025-05-26 04:57:35.724282 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-05-26 04:57:35.724293 | orchestrator | Monday 26 May 2025 04:52:01 +0000 (0:00:00.706) 0:00:54.995 ************
2025-05-26 04:57:35.724310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.724322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.724335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.724355 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.724388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.724397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.724405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.724412 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.724424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.724432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.724439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.724446 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.724454 | orchestrator |
2025-05-26 04:57:35.724466 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2025-05-26 04:57:35.724473 | orchestrator | Monday 26 May 2025 04:52:03 +0000 (0:00:02.412) 0:00:57.408 ************
2025-05-26 04:57:35.724481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.724505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.724514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.724521 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.724528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.724540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.724548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.724555 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.724584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.724614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.724624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.724633 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.724644 | orchestrator |
2025-05-26 04:57:35.724654 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2025-05-26 04:57:35.724665 | orchestrator | Monday 26 May 2025 04:52:04 +0000 (0:00:00.838) 0:00:58.246 ************
2025-05-26 04:57:35.724675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.724691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.724701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.724717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.724727 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.725275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.725364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.725378 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.725387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-26 04:57:35.725396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.725410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-26 04:57:35.725418 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.725427 | orchestrator | 2025-05-26 04:57:35.725435 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-05-26 04:57:35.725443 | orchestrator | Monday 26 May 2025 04:52:05 +0000 (0:00:00.742) 0:00:58.988 ************ 2025-05-26 04:57:35.725460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-26 04:57:35.725468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-26 04:57:35.725476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-26 04:57:35.725485 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.725513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-26 04:57:35.725522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-26 04:57:35.725530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-26 04:57:35.725538 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.725550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-26 04:57:35.725564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-26 04:57:35.725573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-26 04:57:35.725581 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.725589 | orchestrator |
2025-05-26 04:57:35.725597 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-05-26 04:57:35.725605 | orchestrator | Monday 26 May 2025 04:52:06 +0000 (0:00:00.886) 0:00:59.874 ************
2025-05-26 04:57:35.725613 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-26 04:57:35.725621 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-26 04:57:35.725984 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-05-26 04:57:35.726062 | orchestrator |
2025-05-26 04:57:35.726072 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-05-26 04:57:35.726080 | orchestrator | Monday 26 May 2025 04:52:07 +0000 (0:00:01.324) 0:01:01.199 ************
2025-05-26 04:57:35.726088 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-26 04:57:35.726096 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-26 04:57:35.726103 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-05-26 04:57:35.726111 | orchestrator |
2025-05-26 04:57:35.726119 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-05-26 04:57:35.726127 | orchestrator | Monday 26 May 2025 04:52:08 +0000 (0:00:01.405) 0:01:02.605 ************
2025-05-26 04:57:35.726134 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-26 04:57:35.726142 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-26 04:57:35.726150 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-05-26 04:57:35.726158 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-26 04:57:35.726166 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.726174 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-26 04:57:35.726192 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.726200 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-26 04:57:35.726208 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.726216 | orchestrator |
2025-05-26 04:57:35.726224 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-05-26 04:57:35.726232 | orchestrator | Monday 26 May 2025 04:52:09 +0000 (0:00:00.919) 0:01:03.525 ************
2025-05-26 04:57:35.726246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True,
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-26 04:57:35.726256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-26 04:57:35.726264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-26 04:57:35.726352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-26 04:57:35.726364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-26 04:57:35.726372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-26 04:57:35.726387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-26 04:57:35.726400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-26 04:57:35.726409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-26 04:57:35.726417 | orchestrator | 2025-05-26 04:57:35.726425 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-05-26 04:57:35.726433 | orchestrator | Monday 26 May 2025 04:52:12 +0000 (0:00:02.732) 0:01:06.257 ************ 2025-05-26 04:57:35.726441 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 04:57:35.726449 | orchestrator | 2025-05-26 04:57:35.726456 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-05-26 04:57:35.726464 | orchestrator | Monday 26 
May 2025 04:52:13 +0000 (0:00:00.806) 0:01:07.063 ************ 2025-05-26 04:57:35.726507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-26 04:57:35.726856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-26 04:57:35.726890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.726905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.726920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-26 04:57:35.726933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-26 04:57:35.726947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.727101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.727125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-26 04:57:35.727180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-26 04:57:35.727202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.727217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.727229 | orchestrator | 2025-05-26 04:57:35.727243 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-05-26 04:57:35.727256 | orchestrator | Monday 26 May 2025 04:52:17 +0000 (0:00:04.135) 0:01:11.198 ************ 2025-05-26 04:57:35.727270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-26 04:57:35.727371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}})
2025-05-26 04:57:35.727404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.727419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.727432 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.728321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-26 04:57:35.728374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-26 04:57:35.728383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-26 04:57:35.728459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-26 04:57:35.728479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.728486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.728498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.728505 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.728512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.728519 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.728526 | orchestrator |
2025-05-26 04:57:35.728533 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-05-26 04:57:35.728541 | orchestrator | Monday 26 May 2025 04:52:18 +0000 (0:00:00.832) 0:01:12.031 ************
2025-05-26 04:57:35.728548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-26 04:57:35.728557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-26 04:57:35.728564 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.729372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-26 04:57:35.729399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-26 04:57:35.729407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-26 04:57:35.729422 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.729429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-26 04:57:35.729436 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.729443 | orchestrator |
2025-05-26 04:57:35.729508 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-05-26 04:57:35.729518 | orchestrator | Monday 26 May 2025 04:52:19 +0000 (0:00:01.532) 0:01:13.564 ************
2025-05-26 04:57:35.729525 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:57:35.729531 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:57:35.729538 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:57:35.729544 | orchestrator |
2025-05-26 04:57:35.729551 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-05-26 04:57:35.729558 | orchestrator | Monday 26 May 2025 04:52:21 +0000 (0:00:01.517) 0:01:15.081 ************
2025-05-26 04:57:35.729564 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:57:35.729571 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:57:35.729782 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:57:35.729791 | orchestrator |
2025-05-26 04:57:35.729798 | orchestrator | TASK [include_role : barbican] *************************************************
2025-05-26 04:57:35.729805 | orchestrator | Monday 26 May 2025 04:52:23 +0000 (0:00:02.297) 0:01:17.379 ************
2025-05-26 04:57:35.729811 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 04:57:35.729818 | orchestrator |
2025-05-26 04:57:35.729825 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-05-26 04:57:35.729831 | orchestrator | Monday 26 May 2025 04:52:24 +0000 (0:00:01.149) 0:01:18.528 ************
2025-05-26 04:57:35.729846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-26 04:57:35.729855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.729863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.729877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-26 04:57:35.729938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.729948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.729959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-26 04:57:35.730103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.730120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.730127 | orchestrator |
2025-05-26 04:57:35.730134 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2025-05-26 04:57:35.730142 | orchestrator | Monday 26 May 2025 04:52:29 +0000 (0:00:04.916) 0:01:23.444 ************
2025-05-26 04:57:35.730193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-26 04:57:35.730203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.730210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.730217 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.730229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-26 04:57:35.730243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.730250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.730257 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.734235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-26 04:57:35.734321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout':
'30'}}})
2025-05-26 04:57:35.734358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.734371 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.734382 | orchestrator |
2025-05-26 04:57:35.734393 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-05-26 04:57:35.734404 | orchestrator | Monday 26 May 2025 04:52:30 +0000 (0:00:00.577) 0:01:24.021 ************
2025-05-26 04:57:35.734414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-05-26 04:57:35.734446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-05-26 04:57:35.734457 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.734467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-05-26 04:57:35.734476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-05-26 04:57:35.734486 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.734496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-05-26 04:57:35.734539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-05-26 04:57:35.734550 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.734560 | orchestrator |
2025-05-26 04:57:35.734570 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-05-26 04:57:35.734580 | orchestrator | Monday 26 May 2025 04:52:31 +0000 (0:00:01.709) 0:01:25.731 ************
2025-05-26 04:57:35.734605 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:57:35.734615 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:57:35.734637 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:57:35.734657 | orchestrator |
2025-05-26 04:57:35.734706 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-05-26 04:57:35.734725 | orchestrator | Monday 26 May 2025 04:52:34 +0000 (0:00:02.451) 0:01:28.182 ************
2025-05-26 04:57:35.734741 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:57:35.734758 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:57:35.734769 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:57:35.734778 | orchestrator |
2025-05-26 04:57:35.734804 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-05-26 04:57:35.734814 | orchestrator | Monday 26 May 2025 04:52:36 +0000 (0:00:02.091) 0:01:30.274 ************
2025-05-26 04:57:35.734823 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.734833 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.734842 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.734852 | orchestrator |
2025-05-26 04:57:35.734861 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2025-05-26 04:57:35.734871 | orchestrator | Monday 26 May 2025 04:52:36 +0000 (0:00:00.317) 0:01:30.592 ************
2025-05-26 04:57:35.734880 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 04:57:35.734890 | orchestrator |
2025-05-26 04:57:35.734899 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2025-05-26 04:57:35.734909 | orchestrator | Monday 26 May 2025 04:52:37 +0000 (0:00:00.672) 0:01:31.264 ************
2025-05-26 04:57:35.734919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-05-26 04:57:35.734950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http',
'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-05-26 04:57:35.734961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-05-26 04:57:35.734971 | orchestrator |
2025-05-26 04:57:35.734981 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2025-05-26 04:57:35.734991 | orchestrator | Monday 26 May 2025 04:52:40 +0000 (0:00:03.368) 0:01:34.633 ************
2025-05-26 04:57:35.735041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-05-26 04:57:35.735053 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.735063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-05-26 04:57:35.735111 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.735142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-05-26 04:57:35.735153 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.735163 | orchestrator |
2025-05-26 04:57:35.735172 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2025-05-26 04:57:35.735182 | orchestrator | Monday 26 May 2025 04:52:44 +0000 (0:00:03.379) 0:01:38.013 ************
2025-05-26 04:57:35.735226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-05-26 04:57:35.735251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-05-26 04:57:35.735269 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.735286 | orchestrator | skipping:
[testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-05-26 04:57:35.735303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-05-26 04:57:35.735372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-05-26 04:57:35.735390 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.735407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-05-26 04:57:35.735460 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.735494 | orchestrator |
2025-05-26 04:57:35.735505 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2025-05-26 04:57:35.735514 | orchestrator | Monday 26 May 2025 04:52:46 +0000 (0:00:02.148) 0:01:40.161 ************
2025-05-26 04:57:35.735524 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.735533 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.735543 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.735569 | orchestrator |
2025-05-26 04:57:35.735579 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-05-26 04:57:35.735589 | orchestrator | Monday 26 May 2025 04:52:47 +0000 (0:00:00.744) 0:01:40.906 ************
2025-05-26 04:57:35.735598 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.735608 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.735618 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.735627 | orchestrator |
2025-05-26 04:57:35.735643 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-05-26 04:57:35.735666 | orchestrator | Monday 26 May 2025 04:52:47 +0000 (0:00:00.775) 0:01:41.697 ************
2025-05-26 04:57:35.735685 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 04:57:35.735701 | orchestrator |
2025-05-26 04:57:35.735719 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-05-26 04:57:35.735735 | orchestrator | Monday 26 May 2025 04:52:48 +0000 (0:00:00.775) 0:01:42.473 ************
2025-05-26 04:57:35.735780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-26 04:57:35.735794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.735804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'],
'timeout': '30'}}})  2025-05-26 04:57:35.735824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.735843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-26 04:57:35.735858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.735868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.735878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-26 04:57:35.735897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.735914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.735924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.735939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.735949 | orchestrator | 2025-05-26 04:57:35.735959 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-26 04:57:35.735969 | orchestrator | Monday 26 May 2025 04:52:53 +0000 (0:00:05.240) 0:01:47.713 ************ 2025-05-26 04:57:35.735979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-26 04:57:35.735990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.736044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.736064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.736081 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.736102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-26 04:57:35.736113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.736123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.736211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.736231 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.736246 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-26 04:57:35.736267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.736309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.736325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.736349 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.736365 | orchestrator | 2025-05-26 04:57:35.736380 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-05-26 04:57:35.736395 | orchestrator | Monday 26 May 2025 04:52:55 +0000 (0:00:01.300) 0:01:49.013 ************ 2025-05-26 04:57:35.736411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-26 04:57:35.736449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-26 
04:57:35.736466 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.736481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-26 04:57:35.736495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-26 04:57:35.736510 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.736547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-26 04:57:35.736562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-26 04:57:35.736577 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.736592 | orchestrator | 2025-05-26 04:57:35.736607 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-05-26 04:57:35.736620 | orchestrator | Monday 26 May 2025 04:52:56 +0000 (0:00:01.243) 0:01:50.257 ************ 2025-05-26 04:57:35.736633 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.736648 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.736664 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.736680 | orchestrator | 2025-05-26 04:57:35.736694 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-05-26 04:57:35.736710 | orchestrator | Monday 26 May 2025 04:52:57 +0000 (0:00:01.484) 0:01:51.741 ************ 
2025-05-26 04:57:35.736725 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.736739 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.736754 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.736770 | orchestrator | 2025-05-26 04:57:35.736787 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-05-26 04:57:35.736803 | orchestrator | Monday 26 May 2025 04:53:00 +0000 (0:00:02.517) 0:01:54.259 ************ 2025-05-26 04:57:35.736828 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.736839 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.736848 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.736858 | orchestrator | 2025-05-26 04:57:35.736867 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-05-26 04:57:35.736876 | orchestrator | Monday 26 May 2025 04:53:01 +0000 (0:00:00.702) 0:01:54.961 ************ 2025-05-26 04:57:35.736886 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.736895 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.736914 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.736923 | orchestrator | 2025-05-26 04:57:35.736933 | orchestrator | TASK [include_role : designate] ************************************************ 2025-05-26 04:57:35.736942 | orchestrator | Monday 26 May 2025 04:53:01 +0000 (0:00:00.428) 0:01:55.389 ************ 2025-05-26 04:57:35.736952 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 04:57:35.736961 | orchestrator | 2025-05-26 04:57:35.736971 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-05-26 04:57:35.736980 | orchestrator | Monday 26 May 2025 04:53:02 +0000 (0:00:00.856) 0:01:56.246 ************ 2025-05-26 04:57:35.736991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-26 04:57:35.737152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-26 04:57:35.737175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.737190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-26 04:57:35.737215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.737279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.737297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-26 04:57:35.737315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.737346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.737358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.737368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.737391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.737402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.737432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.737458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-26 04:57:35.737469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-26 04:57:35.737479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.737494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.737512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.737522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.737532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.737560 | orchestrator | 2025-05-26 04:57:35.737570 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-26 04:57:35.737580 | orchestrator | Monday 26 May 2025 04:53:06 +0000 (0:00:04.425) 0:02:00.671 ************ 2025-05-26 04:57:35.737606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-26 04:57:35.737617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-26 04:57:35.739105 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.739151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.739162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.739172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-26 04:57:35.739208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.739219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-26 04:57:35.739241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.739252 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.739269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.739279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-26 
04:57:35.739289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.739342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.739353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.739363 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.739373 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-26 04:57:35.739395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-26 04:57:35.739423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.739434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.739471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.739499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.739510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.739526 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.739536 | orchestrator | 2025-05-26 04:57:35.739546 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-05-26 04:57:35.739556 | orchestrator | Monday 26 May 2025 04:53:07 +0000 (0:00:00.830) 0:02:01.501 ************ 2025-05-26 04:57:35.739566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-26 04:57:35.739594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-26 04:57:35.739606 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.739621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-26 04:57:35.739638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}})  2025-05-26 04:57:35.739653 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.739669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-26 04:57:35.739685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-26 04:57:35.739702 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.739718 | orchestrator | 2025-05-26 04:57:35.739740 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-05-26 04:57:35.739759 | orchestrator | Monday 26 May 2025 04:53:08 +0000 (0:00:00.992) 0:02:02.494 ************ 2025-05-26 04:57:35.739780 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.739800 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.739815 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.739830 | orchestrator | 2025-05-26 04:57:35.739847 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-05-26 04:57:35.739862 | orchestrator | Monday 26 May 2025 04:53:10 +0000 (0:00:01.975) 0:02:04.469 ************ 2025-05-26 04:57:35.739877 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.739893 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.739909 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.739925 | orchestrator | 2025-05-26 04:57:35.739941 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-05-26 04:57:35.739955 | orchestrator | Monday 26 May 2025 04:53:12 +0000 (0:00:01.958) 0:02:06.427 ************ 2025-05-26 04:57:35.739972 | orchestrator | skipping: [testbed-node-0] 
2025-05-26 04:57:35.739988 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.740035 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.740051 | orchestrator | 2025-05-26 04:57:35.740068 | orchestrator | TASK [include_role : glance] *************************************************** 2025-05-26 04:57:35.740078 | orchestrator | Monday 26 May 2025 04:53:12 +0000 (0:00:00.300) 0:02:06.728 ************ 2025-05-26 04:57:35.740088 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 04:57:35.740097 | orchestrator | 2025-05-26 04:57:35.740118 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-05-26 04:57:35.740127 | orchestrator | Monday 26 May 2025 04:53:13 +0000 (0:00:00.854) 0:02:07.583 ************ 2025-05-26 04:57:35.740163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-26 04:57:35.740183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-26 04:57:35.740202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-26 04:57:35.740282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-26 04:57:35.740330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-26 04:57:35.740354 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-26 04:57:35.740365 | orchestrator | 2025-05-26 04:57:35.740376 | orchestrator | TASK [haproxy-config : Add configuration for glance when using 
single external frontend] *** 2025-05-26 04:57:35.740402 | orchestrator | Monday 26 May 2025 04:53:18 +0000 (0:00:04.346) 0:02:11.929 ************ 2025-05-26 04:57:35.740429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-26 04:57:35.740451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 
'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-26 04:57:35.740463 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.740473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-26 04:57:35.740525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-26 04:57:35.740537 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.740554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-26 04:57:35.740638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-26 04:57:35.740665 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.740682 | orchestrator | 2025-05-26 04:57:35.740699 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-26 04:57:35.740710 | orchestrator | Monday 26 May 2025 04:53:21 +0000 (0:00:03.701) 0:02:15.630 ************ 2025-05-26 04:57:35.740740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-26 04:57:35.740784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-26 04:57:35.740796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-26 04:57:35.740807 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.740824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-26 
04:57:35.740851 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.740867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-26 04:57:35.740908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-26 04:57:35.740927 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.740943 | orchestrator | 2025-05-26 04:57:35.740960 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-26 04:57:35.741026 | orchestrator | Monday 26 May 2025 04:53:24 +0000 (0:00:03.029) 0:02:18.660 ************ 2025-05-26 04:57:35.741044 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.741061 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.741077 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.741097 | orchestrator | 2025-05-26 04:57:35.741119 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-26 04:57:35.741135 | orchestrator | Monday 26 May 2025 04:53:26 +0000 (0:00:01.519) 
0:02:20.179 ************ 2025-05-26 04:57:35.741151 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.741167 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.741183 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.741199 | orchestrator | 2025-05-26 04:57:35.741214 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-26 04:57:35.741224 | orchestrator | Monday 26 May 2025 04:53:28 +0000 (0:00:02.063) 0:02:22.242 ************ 2025-05-26 04:57:35.741234 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.741243 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.741252 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.741262 | orchestrator | 2025-05-26 04:57:35.741271 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-26 04:57:35.741281 | orchestrator | Monday 26 May 2025 04:53:28 +0000 (0:00:00.321) 0:02:22.564 ************ 2025-05-26 04:57:35.741290 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 04:57:35.741300 | orchestrator | 2025-05-26 04:57:35.741309 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-26 04:57:35.741319 | orchestrator | Monday 26 May 2025 04:53:29 +0000 (0:00:00.944) 0:02:23.509 ************ 2025-05-26 04:57:35.741336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-26 04:57:35.741356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-26 04:57:35.741366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-26 04:57:35.741376 | orchestrator | 2025-05-26 04:57:35.741386 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-26 04:57:35.741396 | orchestrator | Monday 26 May 2025 04:53:33 +0000 (0:00:03.790) 0:02:27.299 ************ 2025-05-26 04:57:35.741427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-26 04:57:35.741438 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.741448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-26 04:57:35.741458 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.741468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-26 04:57:35.741485 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.741514 | orchestrator | 2025-05-26 04:57:35.741530 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-26 04:57:35.741540 | orchestrator | Monday 26 May 2025 04:53:33 +0000 (0:00:00.365) 0:02:27.664 ************ 2025-05-26 04:57:35.741549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-26 04:57:35.741559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-26 04:57:35.741569 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.741579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-26 04:57:35.741589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-26 04:57:35.741598 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.741608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-26 04:57:35.741617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  
2025-05-26 04:57:35.741627 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.741637 | orchestrator | 2025-05-26 04:57:35.741649 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-26 04:57:35.741666 | orchestrator | Monday 26 May 2025 04:53:34 +0000 (0:00:00.520) 0:02:28.185 ************ 2025-05-26 04:57:35.741681 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.741705 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.741722 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.741738 | orchestrator | 2025-05-26 04:57:35.741754 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-26 04:57:35.741770 | orchestrator | Monday 26 May 2025 04:53:35 +0000 (0:00:01.351) 0:02:29.536 ************ 2025-05-26 04:57:35.741787 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.741804 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.741820 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.741834 | orchestrator | 2025-05-26 04:57:35.741844 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-26 04:57:35.741854 | orchestrator | Monday 26 May 2025 04:53:37 +0000 (0:00:01.802) 0:02:31.339 ************ 2025-05-26 04:57:35.741863 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.741872 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.741902 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.741912 | orchestrator | 2025-05-26 04:57:35.741922 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-26 04:57:35.741931 | orchestrator | Monday 26 May 2025 04:53:37 +0000 (0:00:00.267) 0:02:31.607 ************ 2025-05-26 04:57:35.741941 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 04:57:35.741951 | 
orchestrator | 2025-05-26 04:57:35.741960 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-26 04:57:35.741970 | orchestrator | Monday 26 May 2025 04:53:38 +0000 (0:00:00.874) 0:02:32.481 ************ 2025-05-26 04:57:35.743437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-26 04:57:35.743536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-26 04:57:35.743572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-26 04:57:35.743587 | orchestrator | 2025-05-26 04:57:35.743600 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-05-26 04:57:35.743609 | orchestrator | Monday 26 May 2025 04:53:42 +0000 (0:00:04.184) 0:02:36.666 ************ 2025-05-26 04:57:35.743696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-26 04:57:35.743745 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.743766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 
'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-26 04:57:35.743792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-26 04:57:35.743808 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.743816 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.743823 | orchestrator | 2025-05-26 
04:57:35.743831 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-05-26 04:57:35.743839 | orchestrator | Monday 26 May 2025 04:53:43 +0000 (0:00:00.864) 0:02:37.531 ************ 2025-05-26 04:57:35.743852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-26 04:57:35.743863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-26 04:57:35.743889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-26 04:57:35.743898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-26 04:57:35.743907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-26 04:57:35.743916 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.743924 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-26 04:57:35.743932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-26 04:57:35.743952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-26 04:57:35.743966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-26 04:57:35.743974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-26 04:57:35.743982 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.743990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-26 04:57:35.744019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-26 04:57:35.744029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-26 04:57:35.744060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-26 04:57:35.744070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-26 04:57:35.744077 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.744085 | orchestrator | 2025-05-26 04:57:35.744093 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-26 04:57:35.744101 | orchestrator | Monday 26 May 2025 04:53:44 +0000 (0:00:00.989) 0:02:38.520 ************ 2025-05-26 04:57:35.744109 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.744117 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.744125 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.744132 | 
orchestrator | 2025-05-26 04:57:35.744140 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-26 04:57:35.744148 | orchestrator | Monday 26 May 2025 04:53:46 +0000 (0:00:01.684) 0:02:40.205 ************ 2025-05-26 04:57:35.744156 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.744164 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.744171 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.744179 | orchestrator | 2025-05-26 04:57:35.744187 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-26 04:57:35.744195 | orchestrator | Monday 26 May 2025 04:53:48 +0000 (0:00:02.021) 0:02:42.226 ************ 2025-05-26 04:57:35.744203 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.744210 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.744218 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.744226 | orchestrator | 2025-05-26 04:57:35.744233 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-26 04:57:35.744241 | orchestrator | Monday 26 May 2025 04:53:48 +0000 (0:00:00.324) 0:02:42.550 ************ 2025-05-26 04:57:35.744249 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.744262 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.744269 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.744277 | orchestrator | 2025-05-26 04:57:35.744287 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-26 04:57:35.744301 | orchestrator | Monday 26 May 2025 04:53:48 +0000 (0:00:00.309) 0:02:42.859 ************ 2025-05-26 04:57:35.744311 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 04:57:35.744319 | orchestrator | 2025-05-26 04:57:35.744326 | orchestrator | TASK [haproxy-config : Copying over keystone 
haproxy config] ******************* 2025-05-26 04:57:35.744334 | orchestrator | Monday 26 May 2025 04:53:50 +0000 (0:00:01.161) 0:02:44.021 ************ 2025-05-26 04:57:35.744360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-26 04:57:35.744370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-26 04:57:35.744380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-26 04:57:35.744393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-26 04:57:35.744410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-26 04:57:35.744418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-26 04:57:35.744449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-26 04:57:35.744459 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-26 04:57:35.744473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-26 04:57:35.744481 | orchestrator | 2025-05-26 04:57:35.744489 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-05-26 04:57:35.744497 | orchestrator | Monday 26 May 2025 04:53:53 +0000 (0:00:03.826) 0:02:47.847 ************ 2025-05-26 04:57:35.744506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-26 04:57:35.744535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-26 04:57:35.744557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-26 04:57:35.744566 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.744574 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-26 04:57:35.744587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-26 04:57:35.744596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-26 04:57:35.744609 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.744618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-26 04:57:35.744653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-26 04:57:35.744662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-26 04:57:35.744670 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.744678 | orchestrator | 2025-05-26 04:57:35.744686 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-26 04:57:35.744694 | orchestrator | Monday 26 May 2025 04:53:54 +0000 (0:00:00.701) 0:02:48.548 ************ 2025-05-26 04:57:35.744703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-26 04:57:35.744721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-26 04:57:35.744739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-26 04:57:35.744747 | orchestrator | skipping: 
[testbed-node-0] 2025-05-26 04:57:35.744764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-26 04:57:35.744772 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.744780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-26 04:57:35.744788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-26 04:57:35.744796 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.744804 | orchestrator | 2025-05-26 04:57:35.744812 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-26 04:57:35.744819 | orchestrator | Monday 26 May 2025 04:53:55 +0000 (0:00:01.225) 0:02:49.774 ************ 2025-05-26 04:57:35.744827 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.744835 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.744843 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.744850 | orchestrator | 2025-05-26 04:57:35.744858 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-26 04:57:35.744866 | orchestrator | Monday 26 May 2025 04:53:57 +0000 (0:00:01.337) 0:02:51.111 ************ 2025-05-26 04:57:35.744874 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.744882 | orchestrator | changed: 
[testbed-node-1] 2025-05-26 04:57:35.744889 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.744897 | orchestrator | 2025-05-26 04:57:35.744905 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-26 04:57:35.744913 | orchestrator | Monday 26 May 2025 04:53:59 +0000 (0:00:02.118) 0:02:53.230 ************ 2025-05-26 04:57:35.744920 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.744928 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.744936 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.744944 | orchestrator | 2025-05-26 04:57:35.744952 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-26 04:57:35.744960 | orchestrator | Monday 26 May 2025 04:53:59 +0000 (0:00:00.319) 0:02:53.550 ************ 2025-05-26 04:57:35.744968 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 04:57:35.744975 | orchestrator | 2025-05-26 04:57:35.744983 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-05-26 04:57:35.744991 | orchestrator | Monday 26 May 2025 04:54:00 +0000 (0:00:01.285) 0:02:54.835 ************ 2025-05-26 04:57:35.745066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-26 04:57:35.745079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-26 04:57:35.745109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-26 04:57:35.745132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745140 | orchestrator | 2025-05-26 04:57:35.745148 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-26 04:57:35.745168 | orchestrator | Monday 26 May 2025 04:54:04 +0000 (0:00:03.236) 0:02:58.071 ************ 2025-05-26 04:57:35.745182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-26 04:57:35.745207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745225 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.745238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-26 04:57:35.745292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745308 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.745322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-26 04:57:35.745338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-26 
04:57:35.745369 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.745378 | orchestrator | 2025-05-26 04:57:35.745387 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-26 04:57:35.745394 | orchestrator | Monday 26 May 2025 04:54:04 +0000 (0:00:00.656) 0:02:58.727 ************ 2025-05-26 04:57:35.745403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-26 04:57:35.745412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-26 04:57:35.745420 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.745428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-26 04:57:35.745437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-26 04:57:35.745445 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.745452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-26 04:57:35.745460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-26 04:57:35.745468 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.745476 | orchestrator | 
2025-05-26 04:57:35.745484 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-26 04:57:35.745491 | orchestrator | Monday 26 May 2025 04:54:06 +0000 (0:00:01.452) 0:03:00.180 ************ 2025-05-26 04:57:35.745499 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.745507 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.745514 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.745522 | orchestrator | 2025-05-26 04:57:35.745530 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-26 04:57:35.745538 | orchestrator | Monday 26 May 2025 04:54:07 +0000 (0:00:01.305) 0:03:01.486 ************ 2025-05-26 04:57:35.745545 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.745553 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.745561 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.745568 | orchestrator | 2025-05-26 04:57:35.745580 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-26 04:57:35.745587 | orchestrator | Monday 26 May 2025 04:54:09 +0000 (0:00:02.211) 0:03:03.698 ************ 2025-05-26 04:57:35.745606 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 04:57:35.745614 | orchestrator | 2025-05-26 04:57:35.745620 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-26 04:57:35.745627 | orchestrator | Monday 26 May 2025 04:54:10 +0000 (0:00:01.069) 0:03:04.768 ************ 2025-05-26 04:57:35.745634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-26 04:57:35.745641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 
'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-26 04:57:35.745684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-26 
04:57:35.745691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 
'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-26 04:57:35.745713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745797 | orchestrator | 2025-05-26 04:57:35.745822 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-26 04:57:35.745829 | orchestrator | Monday 26 May 2025 04:54:14 +0000 (0:00:04.032) 0:03:08.800 ************ 2025-05-26 04:57:35.745837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-26 04:57:35.745844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745854 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745868 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.745889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-26 04:57:35.745914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745935 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.745945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-26 04:57:35.745952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.745991 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.746046 | orchestrator | 2025-05-26 04:57:35.746054 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-26 04:57:35.746061 | orchestrator | Monday 26 May 2025 04:54:15 +0000 (0:00:00.703) 0:03:09.503 ************ 2025-05-26 04:57:35.746068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-26 04:57:35.746074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-26 04:57:35.746081 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.746088 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-26 04:57:35.746094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-26 04:57:35.746101 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.746108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-26 04:57:35.746128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-26 04:57:35.746135 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.746142 | orchestrator | 2025-05-26 04:57:35.746149 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-26 04:57:35.746155 | orchestrator | Monday 26 May 2025 04:54:16 +0000 (0:00:00.853) 0:03:10.357 ************ 2025-05-26 04:57:35.746166 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.746172 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.746179 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.746185 | orchestrator | 2025-05-26 04:57:35.746192 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-26 04:57:35.746198 | orchestrator | Monday 26 May 2025 04:54:18 +0000 (0:00:01.528) 0:03:11.885 ************ 2025-05-26 04:57:35.746205 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.746212 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.746224 | 
orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.746231 | orchestrator | 2025-05-26 04:57:35.746238 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-26 04:57:35.746244 | orchestrator | Monday 26 May 2025 04:54:19 +0000 (0:00:01.940) 0:03:13.826 ************ 2025-05-26 04:57:35.746251 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 04:57:35.746257 | orchestrator | 2025-05-26 04:57:35.746264 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-26 04:57:35.746271 | orchestrator | Monday 26 May 2025 04:54:21 +0000 (0:00:01.096) 0:03:14.922 ************ 2025-05-26 04:57:35.746277 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-26 04:57:35.746284 | orchestrator | 2025-05-26 04:57:35.746291 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-26 04:57:35.746297 | orchestrator | Monday 26 May 2025 04:54:23 +0000 (0:00:02.818) 0:03:17.741 ************ 2025-05-26 04:57:35.746318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-26 04:57:35.746330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-26 04:57:35.746342 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.746358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-26 04:57:35.746378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-26 04:57:35.746389 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.746407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-26 04:57:35.746425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-26 04:57:35.746442 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.746449 | orchestrator | 2025-05-26 04:57:35.746455 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-26 04:57:35.746462 | orchestrator | Monday 26 May 2025 04:54:26 +0000 (0:00:02.544) 0:03:20.285 ************ 2025-05-26 04:57:35.746469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-26 04:57:35.746481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-26 04:57:35.746489 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.746499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-26 04:57:35.746528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-26 04:57:35.746535 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.746565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-26 04:57:35.746578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-26 04:57:35.746597 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.746609 | orchestrator | 2025-05-26 04:57:35.746620 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-26 04:57:35.746630 | orchestrator | Monday 26 May 2025 04:54:28 +0000 (0:00:02.239) 0:03:22.525 ************ 2025-05-26 04:57:35.746641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-26 04:57:35.746663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-26 04:57:35.746670 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.746676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-26 04:57:35.746683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-26 04:57:35.746690 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.746714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-26 04:57:35.746722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-26 04:57:35.746734 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.746740 | orchestrator | 2025-05-26 04:57:35.746747 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-26 04:57:35.746754 | orchestrator | Monday 26 May 2025 04:54:31 +0000 (0:00:02.559) 0:03:25.085 ************ 2025-05-26 04:57:35.746760 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.746767 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.746773 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.746780 | orchestrator | 2025-05-26 04:57:35.746786 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-05-26 04:57:35.746793 | orchestrator | Monday 26 May 2025 04:54:33 +0000 (0:00:02.189) 0:03:27.275 ************ 2025-05-26 04:57:35.746799 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.746806 | orchestrator | skipping: [testbed-node-1] 
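For context on the `custom_member_list` items shown in the mariadb haproxy-config tasks above: kolla-ansible's haproxy-config role templates such a service definition into a `listen` stanza. A rough sketch of the rendered result is below; the bind address is an assumption (the internal VIP is not shown in this log), and the real output is produced by kolla-ansible's own template, so treat this only as an illustration of what the dict encodes.

```
# Sketch of the HAProxy stanza the 'mariadb' service dict above would render to.
# The bind address (internal VIP) is assumed -- it does not appear in this log.
listen mariadb
    mode tcp
    option clitcpka
    timeout client 3600s
    option srvtcpka
    timeout server 3600s
    server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5
    server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup
    server testbed-node-2 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup
```

The `backup` keyword on nodes 1 and 2 makes MariaDB active/passive behind HAProxy: traffic goes to testbed-node-0 while its `check port 3306` health check (every 2000 ms, 2 successes to rise, 5 failures to fall) passes, and fails over otherwise.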
2025-05-26 04:57:35.746812 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.746819 | orchestrator | 2025-05-26 04:57:35.746825 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-05-26 04:57:35.746832 | orchestrator | Monday 26 May 2025 04:54:34 +0000 (0:00:01.456) 0:03:28.731 ************ 2025-05-26 04:57:35.746842 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.746848 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.746868 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.746875 | orchestrator | 2025-05-26 04:57:35.746881 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-05-26 04:57:35.746888 | orchestrator | Monday 26 May 2025 04:54:35 +0000 (0:00:00.313) 0:03:29.045 ************ 2025-05-26 04:57:35.746895 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 04:57:35.746901 | orchestrator | 2025-05-26 04:57:35.746908 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-05-26 04:57:35.746919 | orchestrator | Monday 26 May 2025 04:54:36 +0000 (0:00:01.094) 0:03:30.139 ************ 2025-05-26 04:57:35.746929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'active_passive': True}}}}) 2025-05-26 04:57:35.746937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-26 04:57:35.746955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-26 04:57:35.746967 | orchestrator | 2025-05-26 04:57:35.746974 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-26 04:57:35.746981 | orchestrator | Monday 26 May 2025 04:54:38 +0000 (0:00:01.782) 0:03:31.922 ************ 2025-05-26 04:57:35.746987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 
'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-26 04:57:35.747017 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.747030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-26 04:57:35.747050 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.747058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-26 04:57:35.747065 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.747071 | orchestrator | 2025-05-26 04:57:35.747078 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-26 04:57:35.747085 | orchestrator | Monday 26 May 2025 04:54:38 +0000 (0:00:00.398) 0:03:32.320 ************ 2025-05-26 04:57:35.747092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-26 04:57:35.747100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-26 04:57:35.747111 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.747118 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.747137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-26 04:57:35.747144 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.747151 | orchestrator | 2025-05-26 04:57:35.747157 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-05-26 04:57:35.747164 | orchestrator | Monday 26 May 2025 04:54:39 +0000 (0:00:00.609) 0:03:32.929 ************ 2025-05-26 04:57:35.747171 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.747177 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.747184 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.747190 | orchestrator | 2025-05-26 04:57:35.747197 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-05-26 04:57:35.747204 | orchestrator | Monday 26 May 2025 04:54:39 +0000 (0:00:00.727) 0:03:33.657 ************ 2025-05-26 04:57:35.747210 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.747217 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.747223 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.747229 | orchestrator | 2025-05-26 04:57:35.747236 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-05-26 04:57:35.747243 | orchestrator | Monday 26 May 2025 04:54:41 +0000 (0:00:01.225) 0:03:34.883 ************ 2025-05-26 04:57:35.747249 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.747256 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.747263 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.747269 | orchestrator | 2025-05-26 04:57:35.747276 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-05-26 04:57:35.747291 | orchestrator | Monday 26 May 2025 04:54:41 +0000 (0:00:00.323) 0:03:35.206 ************ 2025-05-26 04:57:35.747298 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 04:57:35.747304 | orchestrator | 2025-05-26 04:57:35.747311 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2025-05-26 04:57:35.747318 | orchestrator | Monday 26 May 2025 04:54:42 +0000 (0:00:01.458) 0:03:36.665 ************ 2025-05-26 04:57:35.747328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-26 04:57:35.747335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.747347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.747371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.747379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-26 04:57:35.747386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.747397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-26 04:57:35.747405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-26 04:57:35.747417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': 
{'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.747450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-26 04:57:35.747464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.747477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-26 04:57:35.747511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-26 04:57:35.747524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.748894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-26 04:57:35.748937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.748945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': 
False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.748952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.748966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 
'timeout': '30'}}})  2025-05-26 04:57:35.748974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-26 04:57:35.749025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-26 04:57:35.749053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-26 04:57:35.749060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-26 04:57:35.749092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-26 04:57:35.749118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-26 04:57:35.749151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-26 04:57:35.749158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-26 04:57:35.749179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-26 04:57:35.749186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-26 04:57:35.749204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-26 04:57:35.749302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-26 04:57:35.749317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-26 04:57:35.749327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-26 04:57:35.749367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-26 04:57:35.749392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-26 04:57:35.749399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-26 04:57:35.749432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 
'timeout': '30'}}})  2025-05-26 04:57:35.749439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749445 | orchestrator | 2025-05-26 04:57:35.749452 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-05-26 04:57:35.749458 | orchestrator | Monday 26 May 2025 04:54:47 +0000 (0:00:04.722) 0:03:41.387 ************ 2025-05-26 04:57:35.749476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-05-26 04:57:35.749497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-26 04:57:35.749531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 
'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-26 04:57:35.749555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-26 04:57:35.749561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-26 04:57:35.749607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-26 04:57:35.749622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-26 04:57:35.749651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-26 04:57:35.749672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-26 04:57:35.749679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-26 04:57:35.749685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749716 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.749741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-26 04:57:35.749764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 
'timeout': '30'}}})  2025-05-26 04:57:35.749855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-26 04:57:35.749880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-26 04:57:35.749897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-26 04:57:35.749927 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-26 04:57:35.749934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-26 04:57:35.749951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-26 04:57:35.749957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.749993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.750059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-26 04:57:35.750082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-26 04:57:35.750098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.750107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-26 04:57:35.750114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-26 04:57:35.750120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-26 04:57:35.750154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.750167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.750174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-26 
04:57:35.750181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-26 04:57:35.750188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-26 04:57:35.750265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-26 04:57:35.750313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.750320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.750326 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.750333 | orchestrator | skipping: 
[testbed-node-1]
2025-05-26 04:57:35.750339 | orchestrator |
2025-05-26 04:57:35.750345 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-05-26 04:57:35.750352 | orchestrator | Monday 26 May 2025 04:54:49 +0000 (0:00:01.666) 0:03:43.054 ************
2025-05-26 04:57:35.750359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-05-26 04:57:35.750369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-05-26 04:57:35.750375 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.750382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-05-26 04:57:35.750388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-05-26 04:57:35.750394 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.750400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-05-26 04:57:35.750406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-05-26 04:57:35.750412 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.750418 | orchestrator |
2025-05-26 04:57:35.750425 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-05-26 04:57:35.750431 | orchestrator | Monday 26 May 2025 04:54:51 +0000 (0:00:02.526) 0:03:45.581 ************
2025-05-26 04:57:35.750437 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:57:35.750443 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:57:35.750449 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:57:35.750455 | orchestrator |
2025-05-26 04:57:35.750461 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-05-26 04:57:35.750472 | orchestrator | Monday 26 May 2025 04:54:53 +0000 (0:00:01.306) 0:03:46.888 ************
2025-05-26 04:57:35.750478 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:57:35.750484 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:57:35.750490 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:57:35.750496 | orchestrator |
2025-05-26 04:57:35.750502 | orchestrator | TASK [include_role : placement] ************************************************
2025-05-26 04:57:35.750508 | orchestrator | Monday 26 May 2025 04:54:55 +0000 (0:00:02.039) 0:03:48.927 ************
2025-05-26 04:57:35.750514 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 04:57:35.750520 | orchestrator |
2025-05-26 04:57:35.750526 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-05-26 04:57:35.750532 | orchestrator | Monday 26 May 2025 04:54:56 +0000 (0:00:01.199) 0:03:50.126 ************
2025-05-26 04:57:35.750550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-26 04:57:35.750558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-26 04:57:35.750568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-26 04:57:35.750575 | orchestrator | 2025-05-26 04:57:35.750581 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-26 04:57:35.750588 | orchestrator | Monday 26 May 2025 04:54:59 +0000 (0:00:03.674) 0:03:53.800 ************ 2025-05-26 04:57:35.750594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-26 04:57:35.750605 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.750622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-26 04:57:35.750633 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.750644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-26 04:57:35.750655 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.750665 | orchestrator | 2025-05-26 04:57:35.750676 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-26 04:57:35.750692 | orchestrator | Monday 26 May 2025 04:55:00 +0000 
(0:00:00.503) 0:03:54.303 ************
2025-05-26 04:57:35.750703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-26 04:57:35.750725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-26 04:57:35.750737 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.750748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-26 04:57:35.750759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-26 04:57:35.750779 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.750790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-26 04:57:35.750796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-05-26 04:57:35.750803 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.750809 | orchestrator |
2025-05-26 04:57:35.750815 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-05-26 04:57:35.750821 | orchestrator | Monday 26 May 2025 04:55:01 +0000 (0:00:00.733) 0:03:55.037 ************
2025-05-26 04:57:35.750827 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:57:35.750837 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:57:35.750847 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:57:35.750857 | orchestrator |
2025-05-26 04:57:35.750867 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-05-26 04:57:35.750883 | orchestrator | Monday 26 May 2025 04:55:02 +0000 (0:00:01.690) 0:03:56.728 ************
2025-05-26 04:57:35.750894 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:57:35.750904 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:57:35.750914 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:57:35.750925 | orchestrator |
2025-05-26 04:57:35.750935 | orchestrator | TASK [include_role : nova] *****************************************************
2025-05-26 04:57:35.750945 | orchestrator | Monday 26 May 2025 04:55:04 +0000 (0:00:02.066) 0:03:58.795 ************
2025-05-26 04:57:35.750955 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 04:57:35.750963 | orchestrator |
2025-05-26 04:57:35.750969 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-05-26 04:57:35.750975 | orchestrator | Monday 26 May 2025 04:55:06 +0000 (0:00:01.236) 0:04:00.032 ************
2025-05-26 04:57:35.751013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-26 04:57:35.751022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.751040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2025-05-26 04:57:35.751047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-26 04:57:35.751065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.751073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.751083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-26 04:57:35.751099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.751111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.751121 | orchestrator | 2025-05-26 04:57:35.751138 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-26 04:57:35.751149 | orchestrator | Monday 26 May 2025 04:55:10 +0000 (0:00:04.195) 0:04:04.227 ************ 2025-05-26 04:57:35.751178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-26 04:57:35.751191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.751202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.751221 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.751237 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-26 04:57:35.751244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.751251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 
'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.751257 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.751278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-26 04:57:35.751309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 
'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.751319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.751326 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.751332 | orchestrator | 2025-05-26 04:57:35.751339 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-26 04:57:35.751345 | orchestrator | Monday 26 May 2025 04:55:11 +0000 (0:00:00.959) 0:04:05.187 ************ 2025-05-26 04:57:35.751352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-26 04:57:35.751359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-26 04:57:35.751365 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-26 04:57:35.751372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-26 04:57:35.751378 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.751384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-26 04:57:35.751390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-26 04:57:35.751407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-26 04:57:35.751414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-26 04:57:35.751420 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.751427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-26 04:57:35.751433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-26 04:57:35.751444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-26 04:57:35.751450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-26 04:57:35.751456 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.751462 | orchestrator | 2025-05-26 04:57:35.751469 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-26 04:57:35.751475 | orchestrator | Monday 26 May 2025 04:55:12 +0000 (0:00:00.847) 0:04:06.035 ************ 2025-05-26 04:57:35.751481 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.751488 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.751494 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.751500 | orchestrator | 2025-05-26 04:57:35.751506 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-05-26 04:57:35.751512 | orchestrator | Monday 26 May 2025 04:55:13 +0000 (0:00:01.616) 0:04:07.651 ************ 2025-05-26 04:57:35.751518 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.751525 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.751531 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.751537 | orchestrator | 2025-05-26 04:57:35.751546 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-26 04:57:35.751553 | orchestrator | Monday 26 May 2025 04:55:15 +0000 (0:00:02.095) 0:04:09.746 ************ 2025-05-26 04:57:35.751559 | 
orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 04:57:35.751565 | orchestrator | 2025-05-26 04:57:35.751571 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-26 04:57:35.751577 | orchestrator | Monday 26 May 2025 04:55:17 +0000 (0:00:01.569) 0:04:11.316 ************ 2025-05-26 04:57:35.751584 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-26 04:57:35.751590 | orchestrator | 2025-05-26 04:57:35.751596 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-26 04:57:35.751603 | orchestrator | Monday 26 May 2025 04:55:18 +0000 (0:00:01.086) 0:04:12.402 ************ 2025-05-26 04:57:35.751609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-26 04:57:35.751616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-26 04:57:35.751622 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-26 04:57:35.751633 | orchestrator | 2025-05-26 04:57:35.751655 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-26 04:57:35.751662 | orchestrator | Monday 26 May 2025 04:55:22 +0000 (0:00:03.847) 0:04:16.250 ************ 2025-05-26 04:57:35.751669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-26 04:57:35.751675 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.751681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-26 04:57:35.751688 | orchestrator | skipping: 
[testbed-node-1] 2025-05-26 04:57:35.751694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-26 04:57:35.751701 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.751709 | orchestrator | 2025-05-26 04:57:35.751723 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-26 04:57:35.751734 | orchestrator | Monday 26 May 2025 04:55:23 +0000 (0:00:01.317) 0:04:17.567 ************ 2025-05-26 04:57:35.751748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-26 04:57:35.751761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-26 04:57:35.751773 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.751783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-26 04:57:35.751794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-26 04:57:35.751806 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.751817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-26 04:57:35.751828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-26 04:57:35.751842 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.751848 | orchestrator | 2025-05-26 04:57:35.751854 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-26 04:57:35.751860 | orchestrator | Monday 26 May 2025 04:55:25 +0000 (0:00:01.883) 0:04:19.451 ************ 2025-05-26 04:57:35.751866 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.751872 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.751878 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.751884 | orchestrator | 2025-05-26 04:57:35.751891 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-26 04:57:35.751897 | orchestrator | Monday 26 May 2025 04:55:28 +0000 (0:00:02.435) 0:04:21.886 ************ 2025-05-26 04:57:35.751903 | orchestrator | changed: [testbed-node-0] 2025-05-26 04:57:35.751909 | orchestrator | changed: [testbed-node-1] 2025-05-26 04:57:35.751915 | orchestrator | changed: [testbed-node-2] 2025-05-26 04:57:35.751922 | orchestrator | 2025-05-26 04:57:35.751944 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-26 
04:57:35.751956 | orchestrator | Monday 26 May 2025 04:55:31 +0000 (0:00:03.051) 0:04:24.937 ************ 2025-05-26 04:57:35.751967 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-26 04:57:35.751982 | orchestrator | 2025-05-26 04:57:35.752041 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-26 04:57:35.752055 | orchestrator | Monday 26 May 2025 04:55:31 +0000 (0:00:00.820) 0:04:25.757 ************ 2025-05-26 04:57:35.752066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-26 04:57:35.752078 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.752084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-26 04:57:35.752090 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.752102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 
'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-26 04:57:35.752108 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.752114 | orchestrator | 2025-05-26 04:57:35.752121 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-26 04:57:35.752127 | orchestrator | Monday 26 May 2025 04:55:33 +0000 (0:00:01.422) 0:04:27.180 ************ 2025-05-26 04:57:35.752133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-26 04:57:35.752145 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.752151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': 
['timeout tunnel 1h']}}}})  2025-05-26 04:57:35.752158 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.752164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-26 04:57:35.752170 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.752177 | orchestrator | 2025-05-26 04:57:35.752196 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-26 04:57:35.752203 | orchestrator | Monday 26 May 2025 04:55:35 +0000 (0:00:01.768) 0:04:28.949 ************ 2025-05-26 04:57:35.752209 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.752218 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.752229 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.752239 | orchestrator | 2025-05-26 04:57:35.752249 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-26 04:57:35.752265 | orchestrator | Monday 26 May 2025 04:55:36 +0000 (0:00:01.217) 0:04:30.166 ************ 2025-05-26 04:57:35.752278 | orchestrator | ok: [testbed-node-0] 2025-05-26 04:57:35.752288 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:57:35.752299 | orchestrator | ok: [testbed-node-2] 2025-05-26 04:57:35.752310 | orchestrator | 2025-05-26 04:57:35.752320 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-26 04:57:35.752331 | orchestrator | Monday 26 May 2025 04:55:38 +0000 (0:00:02.425) 0:04:32.591 
************ 2025-05-26 04:57:35.752337 | orchestrator | ok: [testbed-node-0] 2025-05-26 04:57:35.752343 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:57:35.752349 | orchestrator | ok: [testbed-node-2] 2025-05-26 04:57:35.752355 | orchestrator | 2025-05-26 04:57:35.752361 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-26 04:57:35.752368 | orchestrator | Monday 26 May 2025 04:55:42 +0000 (0:00:03.435) 0:04:36.027 ************ 2025-05-26 04:57:35.752374 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-05-26 04:57:35.752380 | orchestrator | 2025-05-26 04:57:35.752386 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-05-26 04:57:35.752392 | orchestrator | Monday 26 May 2025 04:55:43 +0000 (0:00:01.050) 0:04:37.077 ************ 2025-05-26 04:57:35.752399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-26 04:57:35.752431 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.752438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-26 04:57:35.752444 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.752450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-26 04:57:35.752457 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.752463 | orchestrator | 2025-05-26 04:57:35.752469 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-05-26 04:57:35.752475 | orchestrator | Monday 26 May 2025 04:55:44 +0000 (0:00:01.032) 0:04:38.110 ************ 2025-05-26 04:57:35.752482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-26 04:57:35.752488 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.752508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 
'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-26 04:57:35.752515 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.752522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-26 04:57:35.752528 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.752534 | orchestrator | 2025-05-26 04:57:35.752541 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-05-26 04:57:35.752547 | orchestrator | Monday 26 May 2025 04:55:45 +0000 (0:00:01.365) 0:04:39.475 ************ 2025-05-26 04:57:35.752553 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.752562 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.752569 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.752575 | orchestrator | 2025-05-26 04:57:35.752581 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-26 04:57:35.752587 | orchestrator | Monday 26 May 2025 04:55:47 +0000 (0:00:01.750) 0:04:41.226 ************ 2025-05-26 04:57:35.752592 | orchestrator | ok: [testbed-node-1] 2025-05-26 
04:57:35.752598 | orchestrator | ok: [testbed-node-0] 2025-05-26 04:57:35.752603 | orchestrator | ok: [testbed-node-2] 2025-05-26 04:57:35.752608 | orchestrator | 2025-05-26 04:57:35.752614 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-26 04:57:35.752619 | orchestrator | Monday 26 May 2025 04:55:49 +0000 (0:00:02.403) 0:04:43.630 ************ 2025-05-26 04:57:35.752625 | orchestrator | ok: [testbed-node-0] 2025-05-26 04:57:35.752630 | orchestrator | ok: [testbed-node-1] 2025-05-26 04:57:35.752635 | orchestrator | ok: [testbed-node-2] 2025-05-26 04:57:35.752641 | orchestrator | 2025-05-26 04:57:35.752646 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-05-26 04:57:35.752652 | orchestrator | Monday 26 May 2025 04:55:53 +0000 (0:00:03.309) 0:04:46.940 ************ 2025-05-26 04:57:35.752661 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 04:57:35.752666 | orchestrator | 2025-05-26 04:57:35.752672 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-05-26 04:57:35.752677 | orchestrator | Monday 26 May 2025 04:55:54 +0000 (0:00:01.382) 0:04:48.323 ************ 2025-05-26 04:57:35.752683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-26 04:57:35.752689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-26 04:57:35.752695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-26 04:57:35.752712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-26 04:57:35.752722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.752731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-26 04:57:35.752737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-26 04:57:35.752743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-26 04:57:35.752749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-26 04:57:35.752769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.752795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-26 04:57:35.752804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-26 04:57:35.752817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-26 04:57:35.752827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-26 04:57:35.752836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.752844 | orchestrator | 2025-05-26 04:57:35.752853 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-26 04:57:35.752861 | orchestrator | Monday 26 May 2025 04:55:58 +0000 (0:00:03.740) 0:04:52.063 ************ 2025-05-26 04:57:35.752886 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-26 04:57:35.752904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-26 04:57:35.752914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-26 04:57:35.752927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-26 04:57:35.752933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.752939 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.752945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-26 04:57:35.752966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-26 04:57:35.752972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-26 04:57:35.752981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-26 04:57:35.752987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-26 04:57:35.752993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-26 04:57:35.753011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-26 04:57:35.753030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-26 04:57:35.753037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-26 04:57:35.753042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-26 04:57:35.753048 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.753053 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.753059 | orchestrator |
2025-05-26 04:57:35.753064 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2025-05-26 04:57:35.753070 | orchestrator | Monday 26 May 2025 04:55:58 +0000 (0:00:00.619) 0:04:52.682 ************
2025-05-26 04:57:35.753075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-26 04:57:35.753096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-26 04:57:35.753102 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.753108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-26 04:57:35.753113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-26 04:57:35.753118 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.753124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-26 04:57:35.753129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-26 04:57:35.753135 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.753140 | orchestrator |
2025-05-26 04:57:35.753145 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-05-26 04:57:35.753165 | orchestrator | Monday 26 May 2025 04:55:59 +0000 (0:00:00.813) 0:04:53.496 ************
2025-05-26 04:57:35.753171 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:57:35.753176 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:57:35.753181 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:57:35.753186 | orchestrator |
2025-05-26 04:57:35.753192 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2025-05-26 04:57:35.753197 | orchestrator | Monday 26 May 2025 04:56:01 +0000 (0:00:01.585) 0:04:55.081 ************
2025-05-26 04:57:35.753203 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:57:35.753208 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:57:35.753213 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:57:35.753219 | orchestrator |
2025-05-26 04:57:35.753224 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-05-26 04:57:35.753230 | orchestrator | Monday 26 May 2025 04:56:03 +0000 (0:00:02.109) 0:04:57.190 ************
2025-05-26 04:57:35.753235 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 04:57:35.753240 | orchestrator |
2025-05-26 04:57:35.753246 |
orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-05-26 04:57:35.753251 | orchestrator | Monday 26 May 2025 04:56:04 +0000 (0:00:01.397) 0:04:58.588 ************ 2025-05-26 04:57:35.753267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-26 04:57:35.753273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-26 04:57:35.753282 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-26 04:57:35.753289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-26 04:57:35.753311 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-26 04:57:35.753319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-26 04:57:35.753325 | orchestrator | 2025-05-26 04:57:35.753330 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-26 04:57:35.753336 | orchestrator | Monday 26 May 2025 04:56:10 +0000 (0:00:05.475) 0:05:04.064 ************ 2025-05-26 04:57:35.753345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-26 04:57:35.753355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-26 04:57:35.753360 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.753376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-26 04:57:35.753383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-26 04:57:35.753389 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.753397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-26 04:57:35.753407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-26 04:57:35.753413 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.753418 | orchestrator | 2025-05-26 04:57:35.753423 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-05-26 04:57:35.753429 | orchestrator | Monday 26 May 2025 04:56:11 +0000 (0:00:01.252) 0:05:05.317 ************ 2025-05-26 04:57:35.753434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-26 04:57:35.753440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-26 04:57:35.753460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-26 04:57:35.753466 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.753472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-26 04:57:35.753477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-26 04:57:35.753483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-26 04:57:35.753488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-05-26 04:57:35.753494 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.753499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-26 04:57:35.753505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-26 04:57:35.753510 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.753516 | orchestrator |
2025-05-26 04:57:35.753527 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2025-05-26 04:57:35.753533 | orchestrator | Monday 26 May 2025 04:56:12 +0000 (0:00:00.924) 0:05:06.241 ************
2025-05-26 04:57:35.753538 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.753543 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.753549 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.753554 | orchestrator |
2025-05-26 04:57:35.753563 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-05-26 04:57:35.753568 | orchestrator | Monday 26 May 2025 04:56:12 +0000 (0:00:00.445) 0:05:06.687 ************
2025-05-26 04:57:35.753573 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.753579 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.753584 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.753589 | orchestrator |
2025-05-26 04:57:35.753595 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-05-26 04:57:35.753600 | orchestrator | Monday 26 May 2025 04:56:14 +0000 (0:00:01.496) 0:05:08.183 ************
2025-05-26 04:57:35.753605 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-26 04:57:35.753611 | orchestrator |
2025-05-26 04:57:35.753616 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-05-26 04:57:35.753622 | orchestrator | Monday 26 May 2025 04:56:16 +0000 (0:00:01.733) 0:05:09.917 ************
2025-05-26 04:57:35.753627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-26 04:57:35.753633 | orchestrator | skipping: [testbed-node-0]
=> (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-26 04:57:35.753649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.753655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-26 04:57:35.753665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.753673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-26 04:57:35.753679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-26 04:57:35.753685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-05-26 04:57:35.753690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.753696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-26 04:57:35.753711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-26 04:57:35.753721 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-26 04:57:35.753727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.753735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.753741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-26 04:57:35.753747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-26 04:57:35.753756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-05-26 04:57:35.755025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-26 04:57:35.755051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.755057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-26 04:57:35.755062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.755067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.755081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-26 04:57:35.755086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.755096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-26 04:57:35.755104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-26 04:57:35.755109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-26 04:57:35.755114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.755122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.755127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-26 04:57:35.755136 | orchestrator | 2025-05-26 04:57:35.755141 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-26 04:57:35.755146 | orchestrator | Monday 26 May 2025 04:56:20 +0000 (0:00:04.287) 0:05:14.205 ************ 2025-05-26 04:57:35.755151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-26 04:57:35.755160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-26 04:57:35.755165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.755170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.755176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-26 04:57:35.755185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-26 04:57:35.755194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-26 04:57:35.755199 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.755208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.755213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-26 04:57:35.755218 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.755223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-26 04:57:35.755231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-26 04:57:35.755240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.755245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.755250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-26 04:57:35.755260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-26 04:57:35.755265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-26 04:57:35.755270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.755282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.755287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-26 04:57:35.755292 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.755297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-26 04:57:35.755302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-26 04:57:35.755307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.755312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.755317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-26 04:57:35.755332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-26 04:57:35.756550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-26 04:57:35.756592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.756598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-26 04:57:35.756603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-26 04:57:35.756608 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.756613 | orchestrator | 2025-05-26 04:57:35.756619 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-26 04:57:35.756624 | orchestrator | Monday 26 May 2025 04:56:21 +0000 (0:00:01.570) 0:05:15.775 ************ 2025-05-26 04:57:35.756637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-26 04:57:35.756643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-26 04:57:35.756649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}})  2025-05-26 04:57:35.756663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-26 04:57:35.756670 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.756675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-26 04:57:35.756680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-26 04:57:35.756685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-26 04:57:35.756690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-26 04:57:35.756695 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.756699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-26 04:57:35.756704 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-26 04:57:35.756712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-26 04:57:35.756717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-26 04:57:35.756722 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.756727 | orchestrator | 2025-05-26 04:57:35.756731 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-26 04:57:35.756736 | orchestrator | Monday 26 May 2025 04:56:22 +0000 (0:00:01.002) 0:05:16.778 ************ 2025-05-26 04:57:35.756741 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.756746 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.756750 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.756759 | orchestrator | 2025-05-26 04:57:35.756764 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-26 04:57:35.756768 | orchestrator | Monday 26 May 2025 04:56:23 +0000 (0:00:00.468) 0:05:17.247 ************ 2025-05-26 04:57:35.756773 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.756778 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.756782 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.756787 | orchestrator 
| 2025-05-26 04:57:35.756792 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-05-26 04:57:35.756797 | orchestrator | Monday 26 May 2025 04:56:25 +0000 (0:00:01.678) 0:05:18.925 ************ 2025-05-26 04:57:35.756801 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 04:57:35.756806 | orchestrator | 2025-05-26 04:57:35.756811 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-05-26 04:57:35.756815 | orchestrator | Monday 26 May 2025 04:56:26 +0000 (0:00:01.745) 0:05:20.670 ************ 2025-05-26 04:57:35.756828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-26 04:57:35.756835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-26 04:57:35.756843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-26 04:57:35.756853 | orchestrator | 2025-05-26 04:57:35.756858 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-26 04:57:35.756862 | orchestrator | Monday 26 May 2025 04:56:29 +0000 (0:00:02.453) 0:05:23.124 ************ 2025-05-26 04:57:35.756867 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-26 04:57:35.756873 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.756880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 
'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-26 04:57:35.756885 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.756890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-26 04:57:35.756895 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.756900 | orchestrator | 2025-05-26 04:57:35.756905 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-05-26 04:57:35.756910 | orchestrator | Monday 26 May 2025 04:56:29 +0000 (0:00:00.383) 0:05:23.507 ************ 2025-05-26 04:57:35.756915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-26 04:57:35.756920 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.756925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-26 04:57:35.756933 | 
orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.756941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-26 04:57:35.756946 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.756950 | orchestrator | 2025-05-26 04:57:35.756955 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-05-26 04:57:35.756960 | orchestrator | Monday 26 May 2025 04:56:30 +0000 (0:00:01.078) 0:05:24.586 ************ 2025-05-26 04:57:35.756965 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.756969 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.756974 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.756979 | orchestrator | 2025-05-26 04:57:35.756984 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-05-26 04:57:35.756988 | orchestrator | Monday 26 May 2025 04:56:31 +0000 (0:00:00.432) 0:05:25.018 ************ 2025-05-26 04:57:35.756993 | orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.757013 | orchestrator | skipping: [testbed-node-1] 2025-05-26 04:57:35.757018 | orchestrator | skipping: [testbed-node-2] 2025-05-26 04:57:35.757023 | orchestrator | 2025-05-26 04:57:35.757028 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-05-26 04:57:35.757032 | orchestrator | Monday 26 May 2025 04:56:32 +0000 (0:00:01.358) 0:05:26.377 ************ 2025-05-26 04:57:35.757037 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-26 04:57:35.757042 | orchestrator | 2025-05-26 04:57:35.757047 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-05-26 04:57:35.757052 | orchestrator | Monday 26 May 2025 04:56:34 +0000 (0:00:01.800) 0:05:28.177 ************ 
2025-05-26 04:57:35.757057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-26 04:57:35.757065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-26 04:57:35.757071 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-26 04:57:35.757082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-26 04:57:35.757088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 
'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-26 04:57:35.757097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-26 04:57:35.757102 | orchestrator | 2025-05-26 04:57:35.757107 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-05-26 04:57:35.757112 | orchestrator | Monday 26 May 2025 04:56:40 
+0000 (0:00:06.361) 0:05:34.539 ************ 2025-05-26 04:57:35.757117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-26 04:57:35.758387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-26 04:57:35.758418 | 
orchestrator | skipping: [testbed-node-0] 2025-05-26 04:57:35.758424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-26 04:57:35.758438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-26 04:57:35.758443 | orchestrator 
| skipping: [testbed-node-1]
2025-05-26 04:57:35.758448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-26 04:57:35.758466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-26 04:57:35.758471 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.758475 | orchestrator |
2025-05-26 04:57:35.758480 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-05-26 04:57:35.758485 | orchestrator | Monday 26 May 2025 04:56:41 +0000 (0:00:00.633) 0:05:35.173 ************
2025-05-26 04:57:35.758490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-26 04:57:35.758495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-26 04:57:35.758500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-26 04:57:35.758505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-26 04:57:35.758510 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.758515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-26 04:57:35.758519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-26 04:57:35.758524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-26 04:57:35.758529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-26 04:57:35.758533 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.758541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-26 04:57:35.758550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-26 04:57:35.758555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-26 04:57:35.758559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-26 04:57:35.758564 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.758568 | orchestrator |
2025-05-26 04:57:35.758573 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-05-26 04:57:35.758577 | orchestrator | Monday 26 May 2025 04:56:43 +0000 (0:00:01.730) 0:05:36.903 ************
2025-05-26 04:57:35.758582 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:57:35.758586 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:57:35.758591 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:57:35.758595 | orchestrator |
2025-05-26 04:57:35.758600 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-05-26 04:57:35.758604 | orchestrator | Monday 26 May 2025 04:56:44 +0000 (0:00:01.357) 0:05:38.260 ************
2025-05-26 04:57:35.758609 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:57:35.758613 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:57:35.758617 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:57:35.758622 | orchestrator |
2025-05-26 04:57:35.758626 | orchestrator | TASK [include_role : swift] ****************************************************
2025-05-26 04:57:35.758631 | orchestrator | Monday 26 May 2025 04:56:46 +0000 (0:00:02.320) 0:05:40.581 ************
2025-05-26 04:57:35.758635 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.758640 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.758644 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.758649 | orchestrator |
2025-05-26 04:57:35.758653 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-05-26 04:57:35.758661 | orchestrator | Monday 26 May 2025 04:56:47 +0000 (0:00:00.334) 0:05:40.915 ************
2025-05-26 04:57:35.758665 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.758670 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.758674 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.758679 | orchestrator |
2025-05-26 04:57:35.758683 | orchestrator | TASK [include_role : trove] ****************************************************
2025-05-26 04:57:35.758688 | orchestrator | Monday 26 May 2025 04:56:47 +0000 (0:00:00.317) 0:05:41.232 ************
2025-05-26 04:57:35.758692 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.758697 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.758701 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.758706 | orchestrator |
2025-05-26 04:57:35.758710 | orchestrator | TASK [include_role : venus] ****************************************************
2025-05-26 04:57:35.758715 | orchestrator | Monday 26 May 2025 04:56:48 +0000 (0:00:00.788) 0:05:42.021 ************
2025-05-26 04:57:35.758719 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.758724 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.758728 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.758733 | orchestrator |
2025-05-26 04:57:35.758737 | orchestrator | TASK [include_role : watcher] **************************************************
2025-05-26 04:57:35.758742 | orchestrator | Monday 26 May 2025 04:56:48 +0000 (0:00:00.336) 0:05:42.358 ************
2025-05-26 04:57:35.758746 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.758751 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.758755 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.758764 | orchestrator |
2025-05-26 04:57:35.758768 | orchestrator | TASK [include_role : zun] ******************************************************
2025-05-26 04:57:35.758773 | orchestrator | Monday 26 May 2025 04:56:48 +0000 (0:00:00.369) 0:05:42.728 ************
2025-05-26 04:57:35.758777 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.758782 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.758786 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.758791 | orchestrator |
2025-05-26 04:57:35.758795 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-05-26 04:57:35.758800 | orchestrator | Monday 26 May 2025 04:56:49 +0000 (0:00:01.038) 0:05:43.766 ************
2025-05-26 04:57:35.758804 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:57:35.758809 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:57:35.758813 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:57:35.758818 | orchestrator |
2025-05-26 04:57:35.758822 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-05-26 04:57:35.758827 | orchestrator | Monday 26 May 2025 04:56:50 +0000 (0:00:00.740) 0:05:44.507 ************
2025-05-26 04:57:35.758831 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:57:35.758836 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:57:35.758840 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:57:35.758845 | orchestrator |
2025-05-26 04:57:35.758849 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-05-26 04:57:35.758854 | orchestrator | Monday 26 May 2025 04:56:51 +0000 (0:00:00.382) 0:05:44.890 ************
2025-05-26 04:57:35.758858 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:57:35.758863 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:57:35.758867 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:57:35.758872 | orchestrator |
2025-05-26 04:57:35.758876 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-05-26 04:57:35.758881 | orchestrator | Monday 26 May 2025 04:56:51 +0000 (0:00:00.891) 0:05:45.781 ************
2025-05-26 04:57:35.758885 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:57:35.758890 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:57:35.758897 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:57:35.758902 | orchestrator |
2025-05-26 04:57:35.758907 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-05-26 04:57:35.758911 | orchestrator | Monday 26 May 2025 04:56:53 +0000 (0:00:01.354) 0:05:47.136 ************
2025-05-26 04:57:35.758916 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:57:35.758920 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:57:35.758924 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:57:35.758929 | orchestrator |
2025-05-26 04:57:35.758933 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-05-26 04:57:35.758938 | orchestrator | Monday 26 May 2025 04:56:54 +0000 (0:00:00.932) 0:05:48.068 ************
2025-05-26 04:57:35.758942 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:57:35.758947 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:57:35.758951 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:57:35.758956 | orchestrator |
2025-05-26 04:57:35.758960 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-05-26 04:57:35.758965 | orchestrator | Monday 26 May 2025 04:57:03 +0000 (0:00:09.605) 0:05:57.673 ************
2025-05-26 04:57:35.758969 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:57:35.758974 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:57:35.758978 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:57:35.758983 | orchestrator |
2025-05-26 04:57:35.758987 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-05-26 04:57:35.758992 | orchestrator | Monday 26 May 2025 04:57:04 +0000 (0:00:00.755) 0:05:58.429 ************
2025-05-26 04:57:35.759010 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:57:35.759015 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:57:35.759019 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:57:35.759024 | orchestrator |
2025-05-26 04:57:35.759028 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-05-26 04:57:35.759037 | orchestrator | Monday 26 May 2025 04:57:14 +0000 (0:00:09.764) 0:06:08.193 ************
2025-05-26 04:57:35.759041 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:57:35.759046 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:57:35.759050 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:57:35.759055 | orchestrator |
2025-05-26 04:57:35.759059 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-05-26 04:57:35.759064 | orchestrator | Monday 26 May 2025 04:57:18 +0000 (0:00:03.876) 0:06:12.069 ************
2025-05-26 04:57:35.759068 | orchestrator | changed: [testbed-node-0]
2025-05-26 04:57:35.759073 | orchestrator | changed: [testbed-node-1]
2025-05-26 04:57:35.759077 | orchestrator | changed: [testbed-node-2]
2025-05-26 04:57:35.759082 | orchestrator |
2025-05-26 04:57:35.759086 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-05-26 04:57:35.759091 | orchestrator | Monday 26 May 2025 04:57:23 +0000 (0:00:05.666) 0:06:17.736 ************
2025-05-26 04:57:35.759095 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.759102 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.759107 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.759112 | orchestrator |
2025-05-26 04:57:35.759116 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-05-26 04:57:35.759121 | orchestrator | Monday 26 May 2025 04:57:24 +0000 (0:00:00.382) 0:06:18.118 ************
2025-05-26 04:57:35.759125 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.759130 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.759134 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.759139 | orchestrator |
2025-05-26 04:57:35.759143 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-05-26 04:57:35.759148 | orchestrator | Monday 26 May 2025 04:57:24 +0000 (0:00:00.676) 0:06:18.795 ************
2025-05-26 04:57:35.759152 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.759157 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.759161 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.759166 | orchestrator |
2025-05-26 04:57:35.759170 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-05-26 04:57:35.759175 | orchestrator | Monday 26 May 2025 04:57:25 +0000 (0:00:00.321) 0:06:19.117 ************
2025-05-26 04:57:35.759179 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.759184 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.759188 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.759192 | orchestrator |
2025-05-26 04:57:35.759197 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-05-26 04:57:35.759202 | orchestrator | Monday 26 May 2025 04:57:25 +0000 (0:00:00.347) 0:06:19.464 ************
2025-05-26 04:57:35.759206 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.759211 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.759215 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.759219 | orchestrator |
2025-05-26 04:57:35.759224 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-05-26 04:57:35.759228 | orchestrator | Monday 26 May 2025 04:57:25 +0000 (0:00:00.328) 0:06:19.793 ************
2025-05-26 04:57:35.759233 | orchestrator | skipping: [testbed-node-0]
2025-05-26 04:57:35.759237 | orchestrator | skipping: [testbed-node-1]
2025-05-26 04:57:35.759242 | orchestrator | skipping: [testbed-node-2]
2025-05-26 04:57:35.759246 | orchestrator |
2025-05-26 04:57:35.759251 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-05-26 04:57:35.759255 | orchestrator | Monday 26 May 2025 04:57:26 +0000 (0:00:00.696) 0:06:20.490 ************
2025-05-26 04:57:35.759260 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:57:35.759264 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:57:35.759269 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:57:35.759273 | orchestrator |
2025-05-26 04:57:35.759278 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-05-26 04:57:35.759282 | orchestrator | Monday 26 May 2025 04:57:31 +0000 (0:00:04.740) 0:06:25.231 ************
2025-05-26 04:57:35.759290 | orchestrator | ok: [testbed-node-0]
2025-05-26 04:57:35.759295 | orchestrator | ok: [testbed-node-1]
2025-05-26 04:57:35.759361 | orchestrator | ok: [testbed-node-2]
2025-05-26 04:57:35.759367 | orchestrator |
2025-05-26 04:57:35.759371 | orchestrator | PLAY RECAP *********************************************************************
2025-05-26 04:57:35.759376 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-05-26 04:57:35.759384 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-05-26 04:57:35.759389 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-05-26 04:57:35.759394 | orchestrator |
2025-05-26 04:57:35.759398 | orchestrator |
2025-05-26 04:57:35.759403 | orchestrator | TASKS RECAP ********************************************************************
2025-05-26 04:57:35.759407 | orchestrator | Monday 26 May 2025 04:57:32 +0000 (0:00:00.819) 0:06:26.051 ************
2025-05-26 04:57:35.759412 | orchestrator | ===============================================================================
2025-05-26 04:57:35.759416 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.76s
2025-05-26 04:57:35.759421 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.61s
2025-05-26 04:57:35.759425 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.36s
2025-05-26 04:57:35.759430 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 5.67s
2025-05-26 04:57:35.759434 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.48s
2025-05-26 04:57:35.759439 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.24s
2025-05-26 04:57:35.759443 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.92s
2025-05-26 04:57:35.759448 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.74s
2025-05-26 04:57:35.759452 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.72s
2025-05-26 04:57:35.759457 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.52s
2025-05-26 04:57:35.759461 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.43s
2025-05-26 04:57:35.759466 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.35s
2025-05-26 04:57:35.759470 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.29s
2025-05-26 04:57:35.759474 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.20s
2025-05-26 04:57:35.759479 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.18s
2025-05-26 04:57:35.759483 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.15s
2025-05-26 04:57:35.759491 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.14s
2025-05-26 04:57:35.759496 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.03s
2025-05-26 04:57:35.759500 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.88s
2025-05-26 04:57:35.759505 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.85s
2025-05-26 04:57:35.759509 | orchestrator | 2025-05-26 04:57:35 | INFO  | Task defb43e6-5997-4ed7-8a35-c1ce898db019 is in state STARTED
2025-05-26 07:01:00.340831 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-05-26 07:01:00.342616 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-26 07:01:01.123907 |
2025-05-26 07:01:01.124082 | PLAY [Post output play]
2025-05-26 07:01:01.141296 |
2025-05-26 07:01:01.141497 | LOOP [stage-output : Register sources]
2025-05-26 07:01:01.205339 |
2025-05-26 07:01:01.205680 | TASK [stage-output : Check sudo]
2025-05-26 07:01:02.243599 | orchestrator | sudo: a password is required
2025-05-26 07:01:02.748037 | orchestrator | ok: Runtime: 0:00:00.167193
2025-05-26 07:01:02.763615 |
2025-05-26 07:01:02.763805 | LOOP [stage-output : Set source and destination for files and folders]
2025-05-26 07:01:02.806027 |
2025-05-26 07:01:02.806336 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-05-26 07:01:02.885775 | orchestrator | ok
2025-05-26 07:01:02.901546 |
2025-05-26 07:01:02.901952 | LOOP [stage-output : Ensure target folders exist]
2025-05-26 07:01:03.382403 | orchestrator | ok: "docs"
2025-05-26 07:01:03.382812 |
2025-05-26 07:01:03.641712 | orchestrator | ok: "artifacts"
2025-05-26 07:01:03.890495 | orchestrator | ok: "logs"
2025-05-26 07:01:03.903148 |
2025-05-26 07:01:03.903307 | LOOP [stage-output : Copy files and folders to staging folder]
2025-05-26 07:01:03.939748 |
2025-05-26 07:01:03.940010 | TASK [stage-output : Make all log files readable]
2025-05-26 07:01:04.265146 | orchestrator | ok
2025-05-26 07:01:04.275951 |
2025-05-26 07:01:04.276103 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-05-26 07:01:04.310975 | orchestrator | skipping: Conditional result was False
2025-05-26 07:01:04.326534 |
2025-05-26 07:01:04.326694 | TASK [stage-output : Discover log files for compression]
2025-05-26 07:01:04.351662 | orchestrator | skipping: Conditional result was False
2025-05-26 07:01:04.368037 |
2025-05-26 07:01:04.368192 | LOOP [stage-output : Archive everything from logs]
2025-05-26 07:01:04.415285 |
2025-05-26 07:01:04.415515 | PLAY [Post cleanup play]
2025-05-26 07:01:04.424061 |
2025-05-26 07:01:04.424170 | TASK [Set cloud fact (Zuul deployment)]
2025-05-26 07:01:04.482828 | orchestrator | ok
2025-05-26 07:01:04.494944 |
2025-05-26 07:01:04.495088 | TASK [Set cloud fact (local deployment)]
2025-05-26 07:01:04.519115 | orchestrator | skipping: Conditional result was False
2025-05-26 07:01:04.529419 |
2025-05-26 07:01:04.529562 | TASK [Clean the cloud environment]
2025-05-26 07:01:07.131014 | orchestrator | 2025-05-26 07:01:07 - clean up servers
2025-05-26 07:01:07.896973 | orchestrator | 2025-05-26 07:01:07 - testbed-manager
2025-05-26 07:01:07.978311 | orchestrator | 2025-05-26 07:01:07 - testbed-node-5
2025-05-26 07:01:08.061361 | orchestrator | 2025-05-26 07:01:08 - testbed-node-3
2025-05-26 07:01:08.155164 | orchestrator | 2025-05-26 07:01:08 - testbed-node-0
2025-05-26 07:01:08.240758 | orchestrator | 2025-05-26 07:01:08 - testbed-node-4
2025-05-26 07:01:08.333152 | orchestrator | 2025-05-26 07:01:08 - testbed-node-1
2025-05-26 07:01:08.419244 | orchestrator | 2025-05-26 07:01:08 - testbed-node-2
2025-05-26 07:01:08.517036 | orchestrator | 2025-05-26 07:01:08 - clean up keypairs
2025-05-26 07:01:08.535294 | orchestrator | 2025-05-26 07:01:08 - testbed
2025-05-26 07:01:08.561362 | orchestrator | 2025-05-26 07:01:08 - wait for servers to be gone
2025-05-26 07:01:21.595956 | orchestrator | 2025-05-26 07:01:21 - clean up ports
2025-05-26 07:01:21.794936 | orchestrator | 2025-05-26 07:01:21 - 40adfb47-bcbe-419e-905d-88868e72c211
2025-05-26 07:01:22.027298 | orchestrator | 2025-05-26 07:01:22 - 6dc42444-4903-4524-a5ef-27412fb8d4ff
2025-05-26 07:01:22.267206 | orchestrator | 2025-05-26 07:01:22 - 8206a756-e71e-4efc-8782-1fa95da42dd8
2025-05-26 07:01:22.467959 | orchestrator | 2025-05-26 07:01:22 - 949929a6-29e4-45b2-9258-aa7109a063ee
2025-05-26 07:01:22.676918 | orchestrator | 2025-05-26 07:01:22 - b0f36336-735b-4f2c-84c1-f7da79536853
2025-05-26 07:01:23.082851 | orchestrator | 2025-05-26 07:01:23 - d81cd463-9f68-4db7-a5be-167414a57780
2025-05-26 07:01:23.294893 | orchestrator | 2025-05-26 07:01:23 - e69becee-ae83-4b9f-9cba-4b283765889f
2025-05-26 07:01:23.572563 | orchestrator | 2025-05-26 07:01:23 - clean up volumes
2025-05-26 07:01:23.702281 | orchestrator | 2025-05-26 07:01:23 - testbed-volume-manager-base
2025-05-26 07:01:23.745835 | orchestrator | 2025-05-26 07:01:23 - testbed-volume-1-node-base
2025-05-26 07:01:23.787900 | orchestrator | 2025-05-26 07:01:23 - testbed-volume-2-node-base
2025-05-26 07:01:23.831188 | orchestrator | 2025-05-26 07:01:23 - testbed-volume-4-node-base
2025-05-26 07:01:23.871009 | orchestrator | 2025-05-26 07:01:23 - testbed-volume-5-node-base
2025-05-26 07:01:23.910384 | orchestrator | 2025-05-26 07:01:23 - testbed-volume-3-node-base
2025-05-26 07:01:23.952260 | orchestrator | 2025-05-26 07:01:23 - testbed-volume-0-node-base
2025-05-26 07:01:23.996130 | orchestrator | 2025-05-26 07:01:23 - testbed-volume-6-node-3
2025-05-26 07:01:24.041785 | orchestrator | 2025-05-26 07:01:24 - testbed-volume-0-node-3
2025-05-26 07:01:24.081568 | orchestrator | 2025-05-26 07:01:24 - testbed-volume-3-node-3
2025-05-26 07:01:24.126932 | orchestrator | 2025-05-26 07:01:24 - testbed-volume-1-node-4
2025-05-26 07:01:24.167335 | orchestrator | 2025-05-26 07:01:24 - testbed-volume-8-node-5
2025-05-26 07:01:24.211119 | orchestrator | 2025-05-26 07:01:24 - testbed-volume-7-node-4
2025-05-26 07:01:24.249487 | orchestrator | 2025-05-26 07:01:24 - testbed-volume-2-node-5
2025-05-26 07:01:24.292468 | orchestrator | 2025-05-26 07:01:24 - testbed-volume-4-node-4
2025-05-26 07:01:24.334403 | orchestrator | 2025-05-26 07:01:24 - testbed-volume-5-node-5
2025-05-26 07:01:24.375607 | orchestrator | 2025-05-26 07:01:24 - disconnect routers
2025-05-26 07:01:24.531937 | orchestrator | 2025-05-26 07:01:24 - testbed
2025-05-26 07:01:25.570530 | orchestrator | 2025-05-26 07:01:25 - clean up subnets
2025-05-26 07:01:25.627435 | orchestrator | 2025-05-26 07:01:25 - subnet-testbed-management
2025-05-26 07:01:25.825876 | orchestrator | 2025-05-26 07:01:25 - clean up networks
2025-05-26 07:01:26.003236 | orchestrator | 2025-05-26 07:01:26 - net-testbed-management
2025-05-26 07:01:26.290762 | orchestrator | 2025-05-26 07:01:26 - clean up security groups
2025-05-26 07:01:26.332909 | orchestrator | 2025-05-26 07:01:26 - testbed-management
2025-05-26 07:01:26.459242 | orchestrator | 2025-05-26 07:01:26 - testbed-node
2025-05-26 07:01:26.580420 | orchestrator | 2025-05-26 07:01:26 - clean up floating ips
2025-05-26 07:01:26.614466 | orchestrator | 2025-05-26 07:01:26 - 81.163.192.90
2025-05-26 07:01:26.968470 | orchestrator | 2025-05-26 07:01:26 - clean up routers
2025-05-26 07:01:27.080420 | orchestrator | 2025-05-26 07:01:27 - testbed
2025-05-26 07:01:28.085490 | orchestrator | ok: Runtime: 0:00:23.045646
2025-05-26 07:01:28.090486 |
2025-05-26 07:01:28.090690 | PLAY RECAP
2025-05-26 07:01:28.090906 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-05-26 07:01:28.090996 |
2025-05-26 07:01:28.243732 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-26 07:01:28.244748 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-26 07:01:28.966510 |
2025-05-26 07:01:28.966678 | PLAY [Cleanup play]
2025-05-26 07:01:28.982467 |
2025-05-26 07:01:28.982604 | TASK [Set cloud fact (Zuul deployment)]
2025-05-26 07:01:29.033510 | orchestrator | ok
2025-05-26 07:01:29.040567 |
2025-05-26 07:01:29.040719 | TASK [Set cloud fact (local deployment)]
2025-05-26 07:01:29.075616 | orchestrator | skipping: Conditional result was False
2025-05-26 07:01:29.095041 |
2025-05-26 07:01:29.095268 | TASK [Clean the cloud environment]
2025-05-26 07:01:30.242369 | orchestrator | 2025-05-26 07:01:30 - clean up servers
2025-05-26 07:01:30.716371 | orchestrator | 2025-05-26 07:01:30 - clean up keypairs
2025-05-26 07:01:30.735343 | orchestrator | 2025-05-26 07:01:30 - wait for servers to be gone
2025-05-26 07:01:30.778991 | orchestrator | 2025-05-26 07:01:30 - clean up ports
2025-05-26 07:01:30.870684 | orchestrator | 2025-05-26 07:01:30 - clean up volumes
2025-05-26 07:01:30.941522 | orchestrator | 2025-05-26 07:01:30 - disconnect routers
2025-05-26 07:01:30.963086 | orchestrator | 2025-05-26 07:01:30 - clean up subnets
2025-05-26 07:01:30.981052 | orchestrator | 2025-05-26 07:01:30 - clean up networks
2025-05-26 07:01:31.573755 | orchestrator | 2025-05-26 07:01:31 - clean up security groups
2025-05-26 07:01:31.609408 | orchestrator | 2025-05-26 07:01:31 - clean up floating ips
2025-05-26 07:01:31.632618 | orchestrator | 2025-05-26 07:01:31 - clean up routers
2025-05-26 07:01:32.138938 | orchestrator | ok: Runtime: 0:00:01.791667
2025-05-26 07:01:32.144003 |
2025-05-26 07:01:32.144179 | PLAY RECAP
2025-05-26 07:01:32.144343 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-05-26 07:01:32.144446 |
2025-05-26 07:01:32.320122 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-26 07:01:32.322803 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-26 07:01:33.092103 |
2025-05-26 07:01:33.092275 | PLAY [Base post-fetch]
2025-05-26 07:01:33.107969 |
2025-05-26 07:01:33.108106 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-26 07:01:33.163295 | orchestrator | skipping: Conditional result was False
2025-05-26 07:01:33.175296 |
2025-05-26 07:01:33.175524 | TASK [fetch-output : Set log path for single node]
2025-05-26 07:01:33.235003 | orchestrator | ok
2025-05-26 07:01:33.243985 |
2025-05-26 07:01:33.244145 | LOOP [fetch-output : Ensure local output dirs]
2025-05-26 07:01:33.732909 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/83c4ae87e4be4185b05ca966758d4263/work/logs"
2025-05-26 07:01:34.004133 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/83c4ae87e4be4185b05ca966758d4263/work/artifacts"
2025-05-26 07:01:34.275949 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/83c4ae87e4be4185b05ca966758d4263/work/docs"
2025-05-26 07:01:34.291581 |
2025-05-26 07:01:34.291805 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-05-26 07:01:35.347352 | orchestrator | changed: .d..t...... ./
2025-05-26 07:01:35.347675 | orchestrator | changed: All items complete
2025-05-26 07:01:35.347723 |
2025-05-26 07:01:36.150435 | orchestrator | changed: .d..t...... ./
2025-05-26 07:01:36.895969 | orchestrator | changed: .d..t...... ./
2025-05-26 07:01:36.927590 |
2025-05-26 07:01:36.927765 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-05-26 07:01:36.965414 | orchestrator | skipping: Conditional result was False
2025-05-26 07:01:36.968090 | orchestrator | skipping: Conditional result was False
2025-05-26 07:01:36.985426 |
2025-05-26 07:01:36.985890 | PLAY RECAP
2025-05-26 07:01:36.985951 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-05-26 07:01:36.985979 |
2025-05-26 07:01:37.129114 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-26 07:01:37.130756 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-26 07:01:37.960953 |
2025-05-26 07:01:37.961212 | PLAY [Base post]
2025-05-26 07:01:37.977051 |
2025-05-26 07:01:37.977218 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-05-26 07:01:39.607617 | orchestrator | changed
2025-05-26 07:01:39.616412 |
2025-05-26 07:01:39.616534 | PLAY RECAP
2025-05-26 07:01:39.616642 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-05-26 07:01:39.616714 |
2025-05-26 07:01:39.751830 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-26 07:01:39.752859 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-05-26 07:01:40.622600 |
2025-05-26 07:01:40.622791 | PLAY [Base post-logs]
2025-05-26 07:01:40.633770 |
2025-05-26 07:01:40.633922 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-05-26 07:01:41.129191 | localhost | changed
2025-05-26 07:01:41.143514 |
2025-05-26 07:01:41.143822 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-05-26 07:01:41.172369 | localhost | ok
2025-05-26 07:01:41.176005 |
2025-05-26 07:01:41.176115 | TASK [Set zuul-log-path fact]
2025-05-26 07:01:41.191372 | localhost | ok
2025-05-26 07:01:41.200093 |
2025-05-26 07:01:41.200224 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-26 07:01:41.225430 | localhost | ok
2025-05-26 07:01:41.228579 |
2025-05-26 07:01:41.228712 | TASK [upload-logs : Create log directories]
2025-05-26 07:01:41.769440 | localhost | changed
2025-05-26 07:01:41.773383 |
2025-05-26 07:01:41.773521 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-05-26 07:01:42.287152 | localhost -> localhost | ok: Runtime: 0:00:00.006907
2025-05-26 07:01:42.291518 |
2025-05-26 07:01:42.291646 | TASK [upload-logs : Upload logs to log server]
2025-05-26 07:01:42.940479 | localhost | Output suppressed because no_log was given
2025-05-26 07:01:42.944740 |
2025-05-26 07:01:42.944927 | LOOP [upload-logs : Compress console log and json output]
2025-05-26 07:01:42.995114 | localhost | skipping: Conditional result was False
2025-05-26 07:01:43.002418 | localhost | skipping: Conditional result was False
2025-05-26 07:01:43.010828 |
2025-05-26 07:01:43.011072 | LOOP [upload-logs : Upload compressed console log and json output]
2025-05-26 07:01:43.069580 | localhost | skipping: Conditional result was False
2025-05-26 07:01:43.069885 |
2025-05-26 07:01:43.074957 | localhost | skipping: Conditional result was False
2025-05-26 07:01:43.084463 |
2025-05-26 07:01:43.084619 | LOOP [upload-logs : Upload console log and json output]