2025-04-09 08:27:54.890864 | Job console starting...
2025-04-09 08:27:54.905020 | Updating repositories
2025-04-09 08:27:54.987222 | Preparing job workspace
2025-04-09 08:27:56.699395 | Running Ansible setup...
2025-04-09 08:28:01.365242 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-04-09 08:28:01.997400 |
2025-04-09 08:28:01.997524 | PLAY [Base pre]
2025-04-09 08:28:02.027158 |
2025-04-09 08:28:02.027286 | TASK [Setup log path fact]
2025-04-09 08:28:02.065351 | orchestrator | ok
2025-04-09 08:28:02.084091 |
2025-04-09 08:28:02.084202 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-04-09 08:28:02.124859 | orchestrator | ok
2025-04-09 08:28:02.140094 |
2025-04-09 08:28:02.140189 | TASK [emit-job-header : Print job information]
2025-04-09 08:28:02.191536 | # Job Information
2025-04-09 08:28:02.191718 | Ansible Version: 2.15.3
2025-04-09 08:28:02.191753 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-04-09 08:28:02.191783 | Pipeline: post
2025-04-09 08:28:02.191804 | Executor: 7d211f194f6a
2025-04-09 08:28:02.191824 | Triggered by: https://github.com/osism/testbed/commit/079295ce6bb0d8d3f7dcff627d424084f67733d5
2025-04-09 08:28:02.191843 | Event ID: 84f3b1f8-151c-11f0-84e1-5a82368dbdb3
2025-04-09 08:28:02.199040 |
2025-04-09 08:28:02.199146 | LOOP [emit-job-header : Print node information]
2025-04-09 08:28:02.341767 | orchestrator | ok:
2025-04-09 08:28:02.341952 | orchestrator | # Node Information
2025-04-09 08:28:02.341986 | orchestrator | Inventory Hostname: orchestrator
2025-04-09 08:28:02.342010 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-04-09 08:28:02.342031 | orchestrator | Username: zuul-testbed05
2025-04-09 08:28:02.342050 | orchestrator | Distro: Debian 12.10
2025-04-09 08:28:02.342073 | orchestrator | Provider: static-testbed
2025-04-09 08:28:02.342093 | orchestrator | Label: testbed-orchestrator
2025-04-09 08:28:02.342112 | orchestrator | Product Name: OpenStack Nova
2025-04-09 08:28:02.342132 | orchestrator | Interface IP: 81.163.193.140
2025-04-09 08:28:02.370601 |
2025-04-09 08:28:02.370760 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-04-09 08:28:02.840268 | orchestrator -> localhost | changed
2025-04-09 08:28:02.856557 |
2025-04-09 08:28:02.856731 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-04-09 08:28:03.880837 | orchestrator -> localhost | changed
2025-04-09 08:28:03.897424 |
2025-04-09 08:28:03.897554 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-04-09 08:28:04.174113 | orchestrator -> localhost | ok
2025-04-09 08:28:04.189108 |
2025-04-09 08:28:04.189270 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-04-09 08:28:04.241279 | orchestrator | ok
2025-04-09 08:28:04.260558 | orchestrator | included: /var/lib/zuul/builds/0c6e366059cb4f5fa89c10fa6f1e315d/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-04-09 08:28:04.269365 |
2025-04-09 08:28:04.269465 | TASK [add-build-sshkey : Create Temp SSH key]
2025-04-09 08:28:04.863063 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-04-09 08:28:04.863510 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/0c6e366059cb4f5fa89c10fa6f1e315d/work/0c6e366059cb4f5fa89c10fa6f1e315d_id_rsa
2025-04-09 08:28:04.863612 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/0c6e366059cb4f5fa89c10fa6f1e315d/work/0c6e366059cb4f5fa89c10fa6f1e315d_id_rsa.pub
2025-04-09 08:28:04.863711 | orchestrator -> localhost | The key fingerprint is:
2025-04-09 08:28:04.863780 | orchestrator -> localhost | SHA256:pLIB60ZRalty6G7Yo3DbhxFExHg4DMyrFh9KlLgIIzc zuul-build-sshkey
2025-04-09 08:28:04.863841 | orchestrator -> localhost | The key's randomart image is:
2025-04-09 08:28:04.863902 | orchestrator -> localhost | +---[RSA 3072]----+
2025-04-09 08:28:04.863962 | orchestrator -> localhost | |=o.*+ |
2025-04-09 08:28:04.864020 | orchestrator -> localhost | |==E++ |
2025-04-09 08:28:04.864099 | orchestrator -> localhost | |=+O=o . |
2025-04-09 08:28:04.864161 | orchestrator -> localhost | |o* X. o |
2025-04-09 08:28:04.864218 | orchestrator -> localhost | |o O +.. S |
2025-04-09 08:28:04.864276 | orchestrator -> localhost | |.O ..+ |
2025-04-09 08:28:04.864348 | orchestrator -> localhost | |+ O .o |
2025-04-09 08:28:04.864444 | orchestrator -> localhost | |.= +. . |
2025-04-09 08:28:04.864504 | orchestrator -> localhost | |. . .. |
2025-04-09 08:28:04.864560 | orchestrator -> localhost | +----[SHA256]-----+
2025-04-09 08:28:04.864717 | orchestrator -> localhost | ok: Runtime: 0:00:00.098467
2025-04-09 08:28:04.881647 |
2025-04-09 08:28:04.881849 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-04-09 08:28:04.928942 | orchestrator | ok
2025-04-09 08:28:04.946581 | orchestrator | included: /var/lib/zuul/builds/0c6e366059cb4f5fa89c10fa6f1e315d/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-04-09 08:28:04.962275 |
2025-04-09 08:28:04.962388 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-04-09 08:28:04.988212 | orchestrator | skipping: Conditional result was False
2025-04-09 08:28:04.997417 |
2025-04-09 08:28:04.997525 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-04-09 08:28:05.597284 | orchestrator | changed
2025-04-09 08:28:05.607687 |
2025-04-09 08:28:05.607800 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-04-09 08:28:05.870336 | orchestrator | ok
2025-04-09 08:28:05.879954 |
2025-04-09 08:28:05.880048 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-04-09 08:28:06.276560 | orchestrator | ok
2025-04-09 08:28:06.285644 |
2025-04-09 08:28:06.285748 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-04-09 08:28:06.661460 | orchestrator | ok
2025-04-09 08:28:06.671258 |
2025-04-09 08:28:06.671351 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-04-09 08:28:06.695256 | orchestrator | skipping: Conditional result was False
2025-04-09 08:28:06.706291 |
2025-04-09 08:28:06.706394 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-04-09 08:28:07.135026 | orchestrator -> localhost | changed
2025-04-09 08:28:07.159475 |
2025-04-09 08:28:07.159610 | TASK [add-build-sshkey : Add back temp key]
2025-04-09 08:28:07.500079 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/0c6e366059cb4f5fa89c10fa6f1e315d/work/0c6e366059cb4f5fa89c10fa6f1e315d_id_rsa (zuul-build-sshkey)
2025-04-09 08:28:07.500306 | orchestrator -> localhost | ok: Runtime: 0:00:00.010587
2025-04-09 08:28:07.509088 |
2025-04-09 08:28:07.509202 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-04-09 08:28:07.845867 | orchestrator | ok
2025-04-09 08:28:07.853409 |
2025-04-09 08:28:07.853519 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-04-09 08:28:07.887766 | orchestrator | skipping: Conditional result was False
2025-04-09 08:28:07.902541 |
2025-04-09 08:28:07.902644 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-04-09 08:28:08.271843 | orchestrator | ok
2025-04-09 08:28:08.289799 |
2025-04-09 08:28:08.289977 | TASK [validate-host : Define zuul_info_dir fact]
2025-04-09 08:28:08.335782 | orchestrator | ok
2025-04-09 08:28:08.345198 |
2025-04-09 08:28:08.345312 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-04-09 08:28:08.645852 | orchestrator -> localhost | ok
2025-04-09 08:28:08.663532 |
2025-04-09 08:28:08.663750 | TASK [validate-host : Collect information about the host]
2025-04-09 08:28:09.759259 | orchestrator | ok
2025-04-09 08:28:09.776008 |
2025-04-09 08:28:09.776125 | TASK [validate-host : Sanitize hostname]
2025-04-09 08:28:09.855674 | orchestrator | ok
2025-04-09 08:28:09.865587 |
2025-04-09 08:28:09.865750 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-04-09 08:28:10.452346 | orchestrator -> localhost | changed
2025-04-09 08:28:10.466409 |
2025-04-09 08:28:10.466561 | TASK [validate-host : Collect information about zuul worker]
2025-04-09 08:28:10.979425 | orchestrator | ok
2025-04-09 08:28:10.989441 |
2025-04-09 08:28:10.989580 | TASK [validate-host : Write out all zuul information for each host]
2025-04-09 08:28:11.535052 | orchestrator -> localhost | changed
2025-04-09 08:28:11.549157 |
2025-04-09 08:28:11.549274 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-04-09 08:28:11.817045 | orchestrator | ok
2025-04-09 08:28:11.825650 |
2025-04-09 08:28:11.825782 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-04-09 08:28:55.030483 | orchestrator | changed:
2025-04-09 08:28:55.030791 | orchestrator | .d..t...... src/
2025-04-09 08:28:55.031434 | orchestrator | .d..t...... src/github.com/
2025-04-09 08:28:55.031476 | orchestrator | .d..t...... src/github.com/osism/
2025-04-09 08:28:55.031504 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-04-09 08:28:55.031530 | orchestrator | RedHat.yml
2025-04-09 08:28:55.046976 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-04-09 08:28:55.046993 | orchestrator | RedHat.yml
2025-04-09 08:28:55.047045 | orchestrator | = 2.2.0"...
2025-04-09 08:29:07.223387 | orchestrator | 08:29:07.222 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-04-09 08:29:07.298978 | orchestrator | 08:29:07.298 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-04-09 08:29:08.062617 | orchestrator | 08:29:08.062 STDOUT terraform: - Installing hashicorp/null v3.2.3...
2025-04-09 08:29:08.930673 | orchestrator | 08:29:08.930 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80)
2025-04-09 08:29:09.552573 | orchestrator | 08:29:09.552 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-04-09 08:29:10.613455 | orchestrator | 08:29:10.613 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-04-09 08:29:11.592089 | orchestrator | 08:29:11.591 STDOUT terraform: - Installing hashicorp/local v2.5.2...
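Stepping back to the build SSH key generated earlier in this job: the `add-build-sshkey` output (RSA 3072, comment `zuul-build-sshkey`, no passphrase) corresponds roughly to the ssh-keygen invocation below. This is a hedged sketch with an illustrative output path, not the real Zuul build workspace path.

```shell
# Sketch of the per-build key generation seen in the "Create Temp SSH key" task.
# -t rsa -b 3072 matches the "RSA 3072" banner in the randomart output;
# -N '' creates the key without a passphrase, -C sets the comment seen in the log.
# KEY is an illustrative path, not the build workspace.
KEY=/tmp/demo-build_id_rsa
rm -f "$KEY" "$KEY.pub"
ssh-keygen -t rsa -b 3072 -N '' -C zuul-build-sshkey -f "$KEY"
ls "$KEY" "$KEY.pub"
```

Zuul then removes the executor's master key from the local ssh-agent and adds this per-build key, so the rest of the job can only reach nodes that were explicitly granted the build key.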
2025-04-09 08:29:12.437239 | orchestrator | 08:29:12.437 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-04-09 08:29:12.437300 | orchestrator | 08:29:12.437 STDOUT terraform: Providers are signed by their developers.
2025-04-09 08:29:12.437309 | orchestrator | 08:29:12.437 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-04-09 08:29:12.437346 | orchestrator | 08:29:12.437 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-04-09 08:29:12.437355 | orchestrator | 08:29:12.437 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-04-09 08:29:12.437426 | orchestrator | 08:29:12.437 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-04-09 08:29:12.437482 | orchestrator | 08:29:12.437 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-04-09 08:29:12.437506 | orchestrator | 08:29:12.437 STDOUT terraform: you run "tofu init" in the future.
2025-04-09 08:29:12.437708 | orchestrator | 08:29:12.437 STDOUT terraform: OpenTofu has been successfully initialized!
2025-04-09 08:29:12.437721 | orchestrator | 08:29:12.437 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-04-09 08:29:12.437804 | orchestrator | 08:29:12.437 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-04-09 08:29:12.437854 | orchestrator | 08:29:12.437 STDOUT terraform: should now work.
2025-04-09 08:29:12.437867 | orchestrator | 08:29:12.437 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-04-09 08:29:12.437909 | orchestrator | 08:29:12.437 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-04-09 08:29:12.437958 | orchestrator | 08:29:12.437 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-04-09 08:29:12.615684 | orchestrator | 08:29:12.615 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-04-09 08:29:12.786159 | orchestrator | 08:29:12.785 STDOUT terraform: Created and switched to workspace "ci"!
2025-04-09 08:29:12.786219 | orchestrator | 08:29:12.786 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-04-09 08:29:12.786229 | orchestrator | 08:29:12.786 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-04-09 08:29:12.786238 | orchestrator | 08:29:12.786 STDOUT terraform: for this configuration.
2025-04-09 08:29:13.003958 | orchestrator | 08:29:13.003 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-04-09 08:29:13.109729 | orchestrator | 08:29:13.109 STDOUT terraform: ci.auto.tfvars
2025-04-09 08:29:14.100784 | orchestrator | 08:29:14.100 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-04-09 08:29:14.902480 | orchestrator | 08:29:14.901 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-04-09 08:29:15.426854 | orchestrator | 08:29:15.426 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-04-09 08:29:15.667234 | orchestrator | 08:29:15.667 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-04-09 08:29:15.667315 | orchestrator | 08:29:15.667 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-04-09 08:29:15.667325 | orchestrator | 08:29:15.667 STDOUT terraform:   + create
2025-04-09 08:29:15.667340 | orchestrator | 08:29:15.667 STDOUT terraform:  <= read (data resources)
2025-04-09 08:29:15.670083 | orchestrator | 08:29:15.667 STDOUT terraform: OpenTofu will perform the following actions:
2025-04-09 08:29:15.670125 | orchestrator | 08:29:15.667 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-04-09 08:29:15.670132 | orchestrator | 08:29:15.667 STDOUT terraform:   # (config refers to values not yet known)
2025-04-09 08:29:15.670138 | orchestrator | 08:29:15.667 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-04-09 08:29:15.670144 | orchestrator | 08:29:15.667 STDOUT terraform:   + checksum = (known after apply)
2025-04-09 08:29:15.670149 | orchestrator | 08:29:15.667 STDOUT terraform:   + created_at = (known after apply)
2025-04-09 08:29:15.670155 | orchestrator | 08:29:15.667 STDOUT terraform:   + file = (known after apply)
2025-04-09 08:29:15.670160 | orchestrator | 08:29:15.667 STDOUT terraform:   + id = (known after apply)
2025-04-09 08:29:15.670165 | orchestrator | 08:29:15.667 STDOUT terraform:   + metadata = (known after apply)
2025-04-09 08:29:15.670169 | orchestrator | 08:29:15.667 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-04-09 08:29:15.670174 | orchestrator | 08:29:15.667 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-04-09 08:29:15.670186 | orchestrator | 08:29:15.667 STDOUT terraform:   + most_recent = true
2025-04-09 08:29:15.670191 | orchestrator | 08:29:15.667 STDOUT terraform:   + name = (known after apply)
2025-04-09 08:29:15.670212 | orchestrator | 08:29:15.668 STDOUT terraform:   + protected = (known after apply)
2025-04-09 08:29:15.670217 | orchestrator | 08:29:15.668 STDOUT terraform:   + region = (known after apply)
2025-04-09 08:29:15.670222 | orchestrator | 08:29:15.668 STDOUT terraform:   + schema = (known after apply)
2025-04-09 08:29:15.670227 | orchestrator | 08:29:15.668 STDOUT terraform:   + size_bytes = (known after apply)
2025-04-09 08:29:15.670234 | orchestrator | 08:29:15.668 STDOUT terraform:   + tags = (known after apply)
2025-04-09 08:29:15.670244 | orchestrator | 08:29:15.668 STDOUT terraform:   + updated_at = (known after apply)
2025-04-09 08:29:15.670250 | orchestrator | 08:29:15.668 STDOUT terraform:   }
2025-04-09 08:29:15.670257 | orchestrator | 08:29:15.668 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-04-09 08:29:15.670262 | orchestrator | 08:29:15.668 STDOUT terraform:   # (config refers to values not yet known)
2025-04-09 08:29:15.670267 | orchestrator | 08:29:15.668 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-04-09 08:29:15.670275 | orchestrator | 08:29:15.668 STDOUT terraform:   + checksum = (known after apply)
2025-04-09 08:29:15.670280 | orchestrator | 08:29:15.668 STDOUT terraform:   + created_at = (known after apply)
2025-04-09 08:29:15.670285 | orchestrator | 08:29:15.668 STDOUT terraform:   + file = (known after apply)
2025-04-09 08:29:15.670290 | orchestrator | 08:29:15.668 STDOUT terraform:   + id = (known after apply)
2025-04-09 08:29:15.670295 | orchestrator | 08:29:15.668 STDOUT terraform:   + metadata = (known after apply)
2025-04-09 08:29:15.670299 | orchestrator | 08:29:15.668 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-04-09 08:29:15.670304 | orchestrator | 08:29:15.668 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-04-09 08:29:15.670309 | orchestrator | 08:29:15.668 STDOUT terraform:   + most_recent = true
2025-04-09 08:29:15.670315 | orchestrator | 08:29:15.668 STDOUT terraform:   + name = (known after apply)
2025-04-09 08:29:15.670319 | orchestrator | 08:29:15.668 STDOUT terraform:   + protected = (known after apply)
2025-04-09 08:29:15.670324 | orchestrator | 08:29:15.668 STDOUT terraform:   + region = (known after apply)
2025-04-09 08:29:15.670329 | orchestrator | 08:29:15.669 STDOUT terraform:   + schema = (known after apply)
2025-04-09 08:29:15.670335 | orchestrator | 08:29:15.669 STDOUT terraform:   + size_bytes = (known after apply)
2025-04-09 08:29:15.670340 | orchestrator | 08:29:15.669 STDOUT terraform:   + tags = (known after apply)
2025-04-09 08:29:15.670345 | orchestrator | 08:29:15.669 STDOUT terraform:   + updated_at = (known after apply)
2025-04-09 08:29:15.670356 | orchestrator | 08:29:15.669 STDOUT terraform:   }
2025-04-09 08:29:15.671345 | orchestrator | 08:29:15.669 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-04-09 08:29:15.671365 | orchestrator | 08:29:15.669 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-04-09 08:29:15.671371 | orchestrator | 08:29:15.669 STDOUT terraform:   + content = (known after apply)
2025-04-09 08:29:15.671377 | orchestrator | 08:29:15.669 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-04-09 08:29:15.671392 | orchestrator | 08:29:15.669 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-04-09 08:29:15.671398 | orchestrator | 08:29:15.669 STDOUT terraform:   + content_md5 = (known after apply)
2025-04-09 08:29:15.671406 | orchestrator | 08:29:15.669 STDOUT terraform:   + content_sha1 = (known after apply)
2025-04-09 08:29:15.671412 | orchestrator | 08:29:15.669 STDOUT terraform:   + content_sha256 = (known after apply)
2025-04-09 08:29:15.671417 | orchestrator | 08:29:15.669 STDOUT terraform:   + content_sha512 = (known after apply)
2025-04-09 08:29:15.671422 | orchestrator | 08:29:15.669 STDOUT terraform:   + directory_permission = "0777"
2025-04-09 08:29:15.671427 | orchestrator | 08:29:15.669 STDOUT terraform:   + file_permission = "0644"
2025-04-09 08:29:15.671432 | orchestrator | 08:29:15.669 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-04-09 08:29:15.671438 | orchestrator | 08:29:15.669 STDOUT terraform:   + id = (known after apply)
2025-04-09 08:29:15.671444 | orchestrator | 08:29:15.669 STDOUT terraform:   }
2025-04-09 08:29:15.671449 | orchestrator | 08:29:15.669 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-04-09 08:29:15.671454 | orchestrator | 08:29:15.670 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-04-09 08:29:15.671459 | orchestrator | 08:29:15.670 STDOUT terraform:   + content = (known after apply)
2025-04-09 08:29:15.671464 | orchestrator | 08:29:15.670 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-04-09 08:29:15.671469 | orchestrator | 08:29:15.670 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-04-09 08:29:15.671474 | orchestrator | 08:29:15.670 STDOUT terraform:   + content_md5 = (known after apply)
2025-04-09 08:29:15.671484 | orchestrator | 08:29:15.670 STDOUT terraform:   + content_sha1 = (known after apply)
2025-04-09 08:29:15.672418 | orchestrator | 08:29:15.670 STDOUT terraform:   + content_sha256 = (known after apply)
2025-04-09 08:29:15.672430 | orchestrator | 08:29:15.670 STDOUT terraform:   + content_sha512 = (known after apply)
2025-04-09 08:29:15.672436 | orchestrator | 08:29:15.670 STDOUT terraform:   + directory_permission = "0777"
2025-04-09 08:29:15.672443 | orchestrator | 08:29:15.670 STDOUT terraform:   + file_permission = "0644"
2025-04-09 08:29:15.672448 | orchestrator | 08:29:15.670 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-04-09 08:29:15.672454 | orchestrator | 08:29:15.670 STDOUT terraform:   + id = (known after apply)
2025-04-09 08:29:15.672460 | orchestrator | 08:29:15.670 STDOUT terraform:   }
2025-04-09 08:29:15.672466 | orchestrator | 08:29:15.670 STDOUT terraform:   # local_file.inventory will be created
2025-04-09 08:29:15.672472 | orchestrator | 08:29:15.670 STDOUT terraform:   + resource "local_file" "inventory" {
2025-04-09 08:29:15.672477 | orchestrator | 08:29:15.670 STDOUT terraform:   + content = (known after apply)
2025-04-09 08:29:15.672482 | orchestrator | 08:29:15.670 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-04-09 08:29:15.672488 | orchestrator | 08:29:15.670 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-04-09 08:29:15.672494 | orchestrator | 08:29:15.671 STDOUT terraform:   + content_md5 = (known after apply)
2025-04-09 08:29:15.672507 | orchestrator | 08:29:15.671 STDOUT terraform:   + content_sha1 = (known after apply)
2025-04-09 08:29:15.672512 | orchestrator | 08:29:15.671 STDOUT terraform:   + content_sha256 = (known after apply)
2025-04-09 08:29:15.672518 | orchestrator | 08:29:15.671 STDOUT terraform:   + content_sha512 = (known after apply)
2025-04-09 08:29:15.672523 | orchestrator | 08:29:15.671 STDOUT terraform:   + directory_permission = "0777"
2025-04-09 08:29:15.672529 | orchestrator | 08:29:15.671 STDOUT terraform:   + file_permission = "0644"
2025-04-09 08:29:15.672537 | orchestrator | 08:29:15.671 STDOUT terraform:   + filename = "inventory.ci"
2025-04-09 08:29:15.672542 | orchestrator | 08:29:15.671 STDOUT terraform:   + id = (known after apply)
2025-04-09 08:29:15.672547 | orchestrator | 08:29:15.671 STDOUT terraform:   }
2025-04-09 08:29:15.672552 | orchestrator | 08:29:15.671 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-04-09 08:29:15.672561 | orchestrator | 08:29:15.671 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-04-09 08:29:15.673502 | orchestrator | 08:29:15.671 STDOUT terraform:   + content = (sensitive value)
2025-04-09 08:29:15.673515 | orchestrator | 08:29:15.671 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-04-09 08:29:15.673524 | orchestrator | 08:29:15.671 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-04-09 08:29:15.673530 | orchestrator | 08:29:15.671 STDOUT terraform:   + content_md5 = (known after apply)
2025-04-09 08:29:15.673541 | orchestrator | 08:29:15.671 STDOUT terraform:   + content_sha1 = (known after apply)
2025-04-09 08:29:15.673546 | orchestrator | 08:29:15.671 STDOUT terraform:   + content_sha256 = (known after apply)
2025-04-09 08:29:15.673551 | orchestrator | 08:29:15.671 STDOUT terraform:   + content_sha512 = (known after apply)
2025-04-09 08:29:15.673565 | orchestrator | 08:29:15.671 STDOUT terraform:   + directory_permission = "0700"
2025-04-09 08:29:15.673571 | orchestrator | 08:29:15.671 STDOUT terraform:   + file_permission = "0600"
2025-04-09 08:29:15.673576 | orchestrator | 08:29:15.671 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-04-09 08:29:15.673581 | orchestrator | 08:29:15.672 STDOUT terraform:   + id = (known after apply)
2025-04-09 08:29:15.673586 | orchestrator | 08:29:15.672 STDOUT terraform:   }
2025-04-09 08:29:15.673591 | orchestrator | 08:29:15.672 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-04-09 08:29:15.673596 | orchestrator | 08:29:15.672 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-04-09 08:29:15.673601 | orchestrator | 08:29:15.672 STDOUT terraform:   + id = (known after apply)
2025-04-09 08:29:15.673607 | orchestrator | 08:29:15.672 STDOUT terraform:   }
2025-04-09 08:29:15.673612 | orchestrator | 08:29:15.672 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-04-09 08:29:15.673617 | orchestrator | 08:29:15.672 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-04-09 08:29:15.673622 | orchestrator | 08:29:15.672 STDOUT terraform:   + attachment = (known after apply)
2025-04-09 08:29:15.673627 | orchestrator | 08:29:15.672 STDOUT terraform:   + availability_zone = "nova"
2025-04-09 08:29:15.673639 | orchestrator | 08:29:15.672 STDOUT terraform:   + id = (known after apply)
2025-04-09 08:29:15.673644 | orchestrator | 08:29:15.672 STDOUT terraform:   + image_id = (known after apply)
2025-04-09 08:29:15.673653 | orchestrator | 08:29:15.672 STDOUT terraform:   + metadata = (known after apply)
2025-04-09 08:29:15.673694 | orchestrator | 08:29:15.672 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-04-09 08:29:15.673700 | orchestrator | 08:29:15.672 STDOUT terraform:   + region = (known after apply)
2025-04-09 08:29:15.673705 | orchestrator | 08:29:15.672 STDOUT terraform:   + size = 80
2025-04-09 08:29:15.673710 | orchestrator | 08:29:15.672 STDOUT terraform:   + volume_type = "ssd"
2025-04-09 08:29:15.673715 | orchestrator | 08:29:15.672 STDOUT terraform:   }
2025-04-09 08:29:15.673720 | orchestrator | 08:29:15.672 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-04-09 08:29:15.673725 | orchestrator | 08:29:15.672 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-09 08:29:15.673739 | orchestrator | 08:29:15.672 STDOUT terraform:   + attachment = (known after apply)
2025-04-09 08:29:15.673744 | orchestrator | 08:29:15.672 STDOUT terraform:   + availability_zone = "nova"
2025-04-09 08:29:15.673749 | orchestrator | 08:29:15.672 STDOUT terraform:   + id = (known after apply)
2025-04-09 08:29:15.673765 | orchestrator | 08:29:15.673 STDOUT terraform:   + image_id = (known after apply)
2025-04-09 08:29:15.673771 | orchestrator | 08:29:15.673 STDOUT terraform:   + metadata = (known after apply)
2025-04-09 08:29:15.673776 | orchestrator | 08:29:15.673 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-04-09 08:29:15.673781 | orchestrator | 08:29:15.673 STDOUT terraform:   + region = (known after apply)
2025-04-09 08:29:15.673786 | orchestrator | 08:29:15.673 STDOUT terraform:   + size = 80
2025-04-09 08:29:15.673791 | orchestrator | 08:29:15.673 STDOUT terraform:   + volume_type = "ssd"
2025-04-09 08:29:15.673796 | orchestrator | 08:29:15.673 STDOUT terraform:   }
2025-04-09 08:29:15.673801 | orchestrator | 08:29:15.673 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-04-09 08:29:15.673806 | orchestrator | 08:29:15.673 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-09 08:29:15.673811 | orchestrator | 08:29:15.673 STDOUT terraform:   + attachment = (known after apply)
2025-04-09 08:29:15.673816 | orchestrator | 08:29:15.673 STDOUT terraform:   + availability_zone = "nova"
2025-04-09 08:29:15.673821 | orchestrator | 08:29:15.673 STDOUT terraform:   + id = (known after apply)
2025-04-09 08:29:15.673826 | orchestrator | 08:29:15.673 STDOUT terraform:   + image_id = (known after apply)
2025-04-09 08:29:15.673833 | orchestrator | 08:29:15.673 STDOUT terraform:   + metadata = (known after apply)
2025-04-09 08:29:15.674172 | orchestrator | 08:29:15.673 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-04-09 08:29:15.674198 | orchestrator | 08:29:15.673 STDOUT terraform:   + region = (known after apply)
2025-04-09 08:29:15.674213 | orchestrator | 08:29:15.673 STDOUT terraform:   + size = 80
2025-04-09 08:29:15.674222 | orchestrator | 08:29:15.673 STDOUT terraform:   + volume_type = "ssd"
2025-04-09 08:29:15.676371 | orchestrator | 08:29:15.673 STDOUT terraform:   }
2025-04-09 08:29:15.676410 | orchestrator | 08:29:15.673 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-04-09 08:29:15.676418 | orchestrator | 08:29:15.673 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-09 08:29:15.676429 | orchestrator | 08:29:15.674 STDOUT terraform:   + attachment = (known after apply)
2025-04-09 08:29:15.678057 | orchestrator | 08:29:15.674 STDOUT terraform:   + availability_zone = "nova"
2025-04-09 08:29:15.678095 | orchestrator | 08:29:15.674 STDOUT terraform:   + id = (known after apply)
2025-04-09 08:29:15.678102 | orchestrator | 08:29:15.674 STDOUT terraform:   + image_id = (known after apply)
2025-04-09 08:29:15.678107 | orchestrator | 08:29:15.674 STDOUT terraform:   + metadata = (known after apply)
2025-04-09 08:29:15.678112 | orchestrator | 08:29:15.676 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-04-09 08:29:15.678118 | orchestrator | 08:29:15.676 STDOUT terraform:   + region = (known after apply)
2025-04-09 08:29:15.678123 | orchestrator | 08:29:15.676 STDOUT terraform:   + size = 80
2025-04-09 08:29:15.678128 | orchestrator | 08:29:15.676 STDOUT terraform:   + volume_type = "ssd"
2025-04-09 08:29:15.678133 | orchestrator | 08:29:15.676 STDOUT terraform:   }
2025-04-09 08:29:15.678145 | orchestrator | 08:29:15.676 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-04-09 08:29:15.679216 | orchestrator | 08:29:15.676 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-09 08:29:15.679247 | orchestrator | 08:29:15.676 STDOUT terraform:   + attachment = (known after apply)
2025-04-09 08:29:15.679254 | orchestrator | 08:29:15.676 STDOUT terraform:   + availability_zone = "nova"
2025-04-09 08:29:15.679259 | orchestrator | 08:29:15.676 STDOUT terraform:   + id = (known after apply)
2025-04-09 08:29:15.679264 | orchestrator | 08:29:15.676 STDOUT terraform:   + image_id = (known after apply)
2025-04-09 08:29:15.679269 | orchestrator | 08:29:15.676 STDOUT terraform:   + metadata = (known after apply)
2025-04-09 08:29:15.679274 | orchestrator | 08:29:15.676 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-04-09 08:29:15.679279 | orchestrator | 08:29:15.676 STDOUT terraform:   + region = (known after apply)
2025-04-09 08:29:15.679284 | orchestrator | 08:29:15.676 STDOUT terraform:   + size = 80
2025-04-09 08:29:15.679289 | orchestrator | 08:29:15.676 STDOUT terraform:   + volume_type = "ssd"
2025-04-09 08:29:15.679295 | orchestrator | 08:29:15.676 STDOUT terraform:   }
2025-04-09 08:29:15.679301 | orchestrator | 08:29:15.676 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-04-09 08:29:15.679307 | orchestrator | 08:29:15.677 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-09 08:29:15.679311 | orchestrator | 08:29:15.677 STDOUT terraform:   + attachment = (known after apply)
2025-04-09 08:29:15.679329 | orchestrator | 08:29:15.678 STDOUT terraform:   + availability_zone = "nova"
2025-04-09 08:29:15.679334 | orchestrator | 08:29:15.678 STDOUT terraform:   + id = (known after apply)
2025-04-09 08:29:15.679339 | orchestrator | 08:29:15.678 STDOUT terraform:   + image_id = (known after apply)
2025-04-09 08:29:15.679350 | orchestrator | 08:29:15.678 STDOUT terraform:   + metadata = (known after apply)
2025-04-09 08:29:15.679355 | orchestrator | 08:29:15.678 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-04-09 08:29:15.679360 | orchestrator | 08:29:15.678 STDOUT terraform:   + region = (known after apply)
2025-04-09 08:29:15.679365 | orchestrator | 08:29:15.678 STDOUT terraform:   + size = 80
2025-04-09 08:29:15.679370 | orchestrator | 08:29:15.678 STDOUT terraform:   + volume_type = "ssd"
2025-04-09 08:29:15.679375 | orchestrator | 08:29:15.678 STDOUT terraform:   }
2025-04-09 08:29:15.679380 | orchestrator | 08:29:15.678 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-04-09 08:29:15.679385 | orchestrator | 08:29:15.678 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-09 08:29:15.679390 | orchestrator | 08:29:15.678 STDOUT terraform:   + attachment = (known after apply)
2025-04-09 08:29:15.679395 | orchestrator | 08:29:15.678 STDOUT terraform:   + availability_zone = "nova"
2025-04-09 08:29:15.679400 | orchestrator | 08:29:15.678 STDOUT terraform:   + id = (known after apply)
2025-04-09 08:29:15.679405 | orchestrator | 08:29:15.678 STDOUT terraform:   + image_id = (known after apply)
2025-04-09 08:29:15.679410 | orchestrator | 08:29:15.678 STDOUT terraform:   + metadata = (known after apply)
2025-04-09 08:29:15.679415 | orchestrator | 08:29:15.678 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-04-09 08:29:15.679420 | orchestrator | 08:29:15.678 STDOUT terraform:   + region = (known after apply)
2025-04-09 08:29:15.679426 | orchestrator | 08:29:15.678 STDOUT terraform:   + size = 80
2025-04-09 08:29:15.679431 | orchestrator | 08:29:15.678 STDOUT terraform:   + volume_type = "ssd"
2025-04-09 08:29:15.679436 | orchestrator | 08:29:15.678 STDOUT terraform:   }
2025-04-09 08:29:15.679441 | orchestrator | 08:29:15.678 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-04-09 08:29:15.679446 | orchestrator | 08:29:15.678 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-09 08:29:15.679451 | orchestrator | 08:29:15.679 STDOUT terraform:   + attachment = (known after apply)
2025-04-09 08:29:15.679456 | orchestrator | 08:29:15.679 STDOUT terraform:   + availability_zone = "nova"
2025-04-09 08:29:15.679463 | orchestrator | 08:29:15.679 STDOUT terraform:   + id = (known after apply)
2025-04-09 08:29:15.679468 | orchestrator | 08:29:15.679 STDOUT terraform:   + metadata = (known after apply)
2025-04-09 08:29:15.679473 | orchestrator | 08:29:15.679 STDOUT terraform:   + name = "testbed-volume-0-node-0"
2025-04-09 08:29:15.679478 | orchestrator | 08:29:15.679 STDOUT terraform:   + region = (known after apply)
2025-04-09 08:29:15.679484 | orchestrator | 08:29:15.679 STDOUT terraform:   + size = 20
2025-04-09 08:29:15.679492 | orchestrator | 08:29:15.679 STDOUT terraform:   + volume_type = "ssd"
2025-04-09 08:29:15.679499 | orchestrator | 08:29:15.679 STDOUT terraform:   }
2025-04-09 08:29:15.680681 | orchestrator | 08:29:15.679 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-04-09 08:29:15.680706 | orchestrator | 08:29:15.679 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-09 08:29:15.680716 | orchestrator | 08:29:15.679 STDOUT terraform:   + attachment = (known after apply)
2025-04-09 08:29:15.680722 | orchestrator | 08:29:15.679 STDOUT terraform:
+ availability_zone = "nova" 2025-04-09 08:29:15.680727 | orchestrator | 08:29:15.679 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.680732 | orchestrator | 08:29:15.679 STDOUT terraform:  + metadata = (known after apply) 2025-04-09 08:29:15.680737 | orchestrator | 08:29:15.679 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-04-09 08:29:15.680742 | orchestrator | 08:29:15.679 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.680747 | orchestrator | 08:29:15.679 STDOUT terraform:  + size = 20 2025-04-09 08:29:15.680753 | orchestrator | 08:29:15.679 STDOUT terraform:  + volume_type = "ssd" 2025-04-09 08:29:15.680776 | orchestrator | 08:29:15.679 STDOUT terraform:  } 2025-04-09 08:29:15.680781 | orchestrator | 08:29:15.679 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-04-09 08:29:15.680786 | orchestrator | 08:29:15.679 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-09 08:29:15.680791 | orchestrator | 08:29:15.679 STDOUT terraform:  + attachment = (known after apply) 2025-04-09 08:29:15.680796 | orchestrator | 08:29:15.679 STDOUT terraform:  + availability_zone = "nova" 2025-04-09 08:29:15.680801 | orchestrator | 08:29:15.679 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.680808 | orchestrator | 08:29:15.680 STDOUT terraform:  + metadata = (known after apply) 2025-04-09 08:29:15.680813 | orchestrator | 08:29:15.680 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-04-09 08:29:15.680818 | orchestrator | 08:29:15.680 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.680823 | orchestrator | 08:29:15.680 STDOUT terraform:  + size = 20 2025-04-09 08:29:15.680829 | orchestrator | 08:29:15.680 STDOUT terraform:  + volume_type = "ssd" 2025-04-09 08:29:15.680834 | orchestrator | 08:29:15.680 STDOUT terraform:  } 2025-04-09 08:29:15.680839 | orchestrator | 08:29:15.680 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-04-09 08:29:15.680844 | orchestrator | 08:29:15.680 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-09 08:29:15.680849 | orchestrator | 08:29:15.680 STDOUT terraform:  + attachment = (known after apply) 2025-04-09 08:29:15.680854 | orchestrator | 08:29:15.680 STDOUT terraform:  + availability_zone = "nova" 2025-04-09 08:29:15.680860 | orchestrator | 08:29:15.680 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.680865 | orchestrator | 08:29:15.680 STDOUT terraform:  + metadata = (known after apply) 2025-04-09 08:29:15.680878 | orchestrator | 08:29:15.680 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-04-09 08:29:15.680883 | orchestrator | 08:29:15.680 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.680888 | orchestrator | 08:29:15.680 STDOUT terraform:  + size = 20 2025-04-09 08:29:15.680893 | orchestrator | 08:29:15.680 STDOUT terraform:  + volume_type = "ssd" 2025-04-09 08:29:15.680898 | orchestrator | 08:29:15.680 STDOUT terraform:  } 2025-04-09 08:29:15.680909 | orchestrator | 08:29:15.680 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-04-09 08:29:15.680914 | orchestrator | 08:29:15.680 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-09 08:29:15.680919 | orchestrator | 08:29:15.680 STDOUT terraform:  + attachment = (known after apply) 2025-04-09 08:29:15.680924 | orchestrator | 08:29:15.680 STDOUT terraform:  + availability_zone = "nova" 2025-04-09 08:29:15.680929 | orchestrator | 08:29:15.680 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.680936 | orchestrator | 08:29:15.680 STDOUT terraform:  + metadata = (known after apply) 2025-04-09 08:29:15.682369 | orchestrator | 08:29:15.680 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-04-09 08:29:15.682403 | orchestrator | 08:29:15.680 STDOUT 
terraform:  + region = (known after apply) 2025-04-09 08:29:15.682408 | orchestrator | 08:29:15.680 STDOUT terraform:  + size = 20 2025-04-09 08:29:15.682414 | orchestrator | 08:29:15.681 STDOUT terraform:  + volume_type = "ssd" 2025-04-09 08:29:15.682420 | orchestrator | 08:29:15.681 STDOUT terraform:  } 2025-04-09 08:29:15.682425 | orchestrator | 08:29:15.681 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-04-09 08:29:15.682431 | orchestrator | 08:29:15.681 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-09 08:29:15.682436 | orchestrator | 08:29:15.681 STDOUT terraform:  + attachment = (known after apply) 2025-04-09 08:29:15.682441 | orchestrator | 08:29:15.681 STDOUT terraform:  + availability_zone = "nova" 2025-04-09 08:29:15.682447 | orchestrator | 08:29:15.681 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.682457 | orchestrator | 08:29:15.681 STDOUT terraform:  + metadata = (known after apply) 2025-04-09 08:29:15.682462 | orchestrator | 08:29:15.681 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-04-09 08:29:15.682467 | orchestrator | 08:29:15.681 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.682472 | orchestrator | 08:29:15.681 STDOUT terraform:  + size = 20 2025-04-09 08:29:15.682477 | orchestrator | 08:29:15.681 STDOUT terraform:  + volume_type = "ssd" 2025-04-09 08:29:15.682483 | orchestrator | 08:29:15.681 STDOUT terraform:  } 2025-04-09 08:29:15.682488 | orchestrator | 08:29:15.681 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-04-09 08:29:15.682493 | orchestrator | 08:29:15.681 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-09 08:29:15.682497 | orchestrator | 08:29:15.681 STDOUT terraform:  + attachment = (known after apply) 2025-04-09 08:29:15.682513 | orchestrator | 08:29:15.681 STDOUT terraform:  + availability_zone = "nova" 
2025-04-09 08:29:15.682518 | orchestrator | 08:29:15.681 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.682523 | orchestrator | 08:29:15.681 STDOUT terraform:  + metadata = (known after apply) 2025-04-09 08:29:15.682527 | orchestrator | 08:29:15.681 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-04-09 08:29:15.682532 | orchestrator | 08:29:15.681 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.682537 | orchestrator | 08:29:15.681 STDOUT terraform:  + size = 20 2025-04-09 08:29:15.682542 | orchestrator | 08:29:15.681 STDOUT terraform:  + volume_type = "ssd" 2025-04-09 08:29:15.682547 | orchestrator | 08:29:15.681 STDOUT terraform:  } 2025-04-09 08:29:15.682552 | orchestrator | 08:29:15.681 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-04-09 08:29:15.682557 | orchestrator | 08:29:15.681 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-09 08:29:15.682562 | orchestrator | 08:29:15.681 STDOUT terraform:  + attachment = (known after apply) 2025-04-09 08:29:15.682567 | orchestrator | 08:29:15.681 STDOUT terraform:  + availability_zone = "nova" 2025-04-09 08:29:15.682571 | orchestrator | 08:29:15.681 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.682576 | orchestrator | 08:29:15.681 STDOUT terraform:  + metadata = (known after apply) 2025-04-09 08:29:15.682581 | orchestrator | 08:29:15.681 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-04-09 08:29:15.682587 | orchestrator | 08:29:15.682 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.682593 | orchestrator | 08:29:15.682 STDOUT terraform:  + size = 20 2025-04-09 08:29:15.682598 | orchestrator | 08:29:15.682 STDOUT terraform:  + volume_type = "ssd" 2025-04-09 08:29:15.682603 | orchestrator | 08:29:15.682 STDOUT terraform:  } 2025-04-09 08:29:15.682612 | orchestrator | 08:29:15.682 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-04-09 08:29:15.682618 | orchestrator | 08:29:15.682 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-09 08:29:15.682623 | orchestrator | 08:29:15.682 STDOUT terraform:  + attachment = (known after apply) 2025-04-09 08:29:15.682631 | orchestrator | 08:29:15.682 STDOUT terraform:  + availability_zone = "nova" 2025-04-09 08:29:15.682636 | orchestrator | 08:29:15.682 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.682641 | orchestrator | 08:29:15.682 STDOUT terraform:  + metadata = (known after apply) 2025-04-09 08:29:15.682646 | orchestrator | 08:29:15.682 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-04-09 08:29:15.682651 | orchestrator | 08:29:15.682 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.682656 | orchestrator | 08:29:15.682 STDOUT terraform:  + size = 20 2025-04-09 08:29:15.682661 | orchestrator | 08:29:15.682 STDOUT terraform:  + volume_type = "ssd" 2025-04-09 08:29:15.682666 | orchestrator | 08:29:15.682 STDOUT terraform:  } 2025-04-09 08:29:15.682675 | orchestrator | 08:29:15.682 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-04-09 08:29:15.682682 | orchestrator | 08:29:15.682 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-09 08:29:15.684442 | orchestrator | 08:29:15.682 STDOUT terraform:  + attachment = (known after apply) 2025-04-09 08:29:15.684485 | orchestrator | 08:29:15.682 STDOUT terraform:  + availability_zone = "nova" 2025-04-09 08:29:15.684492 | orchestrator | 08:29:15.682 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.684498 | orchestrator | 08:29:15.682 STDOUT terraform:  + metadata = (known after apply) 2025-04-09 08:29:15.684504 | orchestrator | 08:29:15.682 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-04-09 08:29:15.684510 | orchestrator | 08:29:15.682 STDOUT 
terraform:  + region = (known after apply) 2025-04-09 08:29:15.684516 | orchestrator | 08:29:15.682 STDOUT terraform:  + size = 20 2025-04-09 08:29:15.684523 | orchestrator | 08:29:15.682 STDOUT terraform:  + volume_type = "ssd" 2025-04-09 08:29:15.684529 | orchestrator | 08:29:15.682 STDOUT terraform:  } 2025-04-09 08:29:15.684535 | orchestrator | 08:29:15.682 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-04-09 08:29:15.684541 | orchestrator | 08:29:15.682 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-09 08:29:15.684546 | orchestrator | 08:29:15.683 STDOUT terraform:  + attachment = (known after apply) 2025-04-09 08:29:15.684551 | orchestrator | 08:29:15.683 STDOUT terraform:  + availability_zone = "nova" 2025-04-09 08:29:15.684556 | orchestrator | 08:29:15.683 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.684565 | orchestrator | 08:29:15.683 STDOUT terraform:  + metadata = (known after apply) 2025-04-09 08:29:15.684571 | orchestrator | 08:29:15.683 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-04-09 08:29:15.684576 | orchestrator | 08:29:15.683 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.684581 | orchestrator | 08:29:15.683 STDOUT terraform:  + size = 20 2025-04-09 08:29:15.684587 | orchestrator | 08:29:15.683 STDOUT terraform:  + volume_type = "ssd" 2025-04-09 08:29:15.684592 | orchestrator | 08:29:15.683 STDOUT terraform:  } 2025-04-09 08:29:15.684597 | orchestrator | 08:29:15.683 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-04-09 08:29:15.684602 | orchestrator | 08:29:15.683 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-09 08:29:15.684607 | orchestrator | 08:29:15.683 STDOUT terraform:  + attachment = (known after apply) 2025-04-09 08:29:15.684612 | orchestrator | 08:29:15.683 STDOUT terraform:  + availability_zone = "nova" 
2025-04-09 08:29:15.684617 | orchestrator | 08:29:15.683 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.684622 | orchestrator | 08:29:15.683 STDOUT terraform:  + metadata = (known after apply) 2025-04-09 08:29:15.684627 | orchestrator | 08:29:15.683 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-04-09 08:29:15.684643 | orchestrator | 08:29:15.683 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.684648 | orchestrator | 08:29:15.683 STDOUT terraform:  + size = 20 2025-04-09 08:29:15.684653 | orchestrator | 08:29:15.683 STDOUT terraform:  + volume_type = "ssd" 2025-04-09 08:29:15.684658 | orchestrator | 08:29:15.683 STDOUT terraform:  } 2025-04-09 08:29:15.684663 | orchestrator | 08:29:15.683 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-04-09 08:29:15.684668 | orchestrator | 08:29:15.683 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-09 08:29:15.684673 | orchestrator | 08:29:15.683 STDOUT terraform:  + attachment = (known after apply) 2025-04-09 08:29:15.684678 | orchestrator | 08:29:15.683 STDOUT terraform:  + availability_zone = "nova" 2025-04-09 08:29:15.684683 | orchestrator | 08:29:15.683 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.684688 | orchestrator | 08:29:15.683 STDOUT terraform:  + metadata = (known after apply) 2025-04-09 08:29:15.684693 | orchestrator | 08:29:15.683 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-04-09 08:29:15.684702 | orchestrator | 08:29:15.683 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.685009 | orchestrator | 08:29:15.684 STDOUT terraform:  + size = 20 2025-04-09 08:29:15.685023 | orchestrator | 08:29:15.684 STDOUT terraform:  + volume_type = "ssd" 2025-04-09 08:29:15.685028 | orchestrator | 08:29:15.684 STDOUT terraform:  } 2025-04-09 08:29:15.685034 | orchestrator | 08:29:15.684 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-04-09 08:29:15.685039 | orchestrator | 08:29:15.684 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-09 08:29:15.685044 | orchestrator | 08:29:15.684 STDOUT terraform:  + attachment = (known after apply) 2025-04-09 08:29:15.685049 | orchestrator | 08:29:15.684 STDOUT terraform:  + availability_zone = "nova" 2025-04-09 08:29:15.685054 | orchestrator | 08:29:15.684 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.685059 | orchestrator | 08:29:15.684 STDOUT terraform:  + metadata = (known after apply) 2025-04-09 08:29:15.685064 | orchestrator | 08:29:15.684 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-04-09 08:29:15.685069 | orchestrator | 08:29:15.684 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.685074 | orchestrator | 08:29:15.684 STDOUT terraform:  + size = 20 2025-04-09 08:29:15.685083 | orchestrator | 08:29:15.684 STDOUT terraform:  + volume_type = "ssd" 2025-04-09 08:29:15.685088 | orchestrator | 08:29:15.684 STDOUT terraform:  } 2025-04-09 08:29:15.685093 | orchestrator | 08:29:15.684 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-04-09 08:29:15.685097 | orchestrator | 08:29:15.684 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-09 08:29:15.685103 | orchestrator | 08:29:15.684 STDOUT terraform:  + attachment = (known after apply) 2025-04-09 08:29:15.685108 | orchestrator | 08:29:15.684 STDOUT terraform:  + availability_zone = "nova" 2025-04-09 08:29:15.685120 | orchestrator | 08:29:15.684 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.685125 | orchestrator | 08:29:15.684 STDOUT terraform:  + metadata = (known after apply) 2025-04-09 08:29:15.685130 | orchestrator | 08:29:15.684 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-04-09 08:29:15.685135 | orchestrator | 08:29:15.684 STDOUT 
terraform:  + region = (known after apply) 2025-04-09 08:29:15.685143 | orchestrator | 08:29:15.684 STDOUT terraform:  + size = 20 2025-04-09 08:29:15.686826 | orchestrator | 08:29:15.684 STDOUT terraform:  + volume_type = "ssd" 2025-04-09 08:29:15.686861 | orchestrator | 08:29:15.684 STDOUT terraform:  } 2025-04-09 08:29:15.686867 | orchestrator | 08:29:15.684 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-04-09 08:29:15.686873 | orchestrator | 08:29:15.684 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-09 08:29:15.686878 | orchestrator | 08:29:15.684 STDOUT terraform:  + attachment = (known after apply) 2025-04-09 08:29:15.686883 | orchestrator | 08:29:15.684 STDOUT terraform:  + availability_zone = "nova" 2025-04-09 08:29:15.686888 | orchestrator | 08:29:15.684 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.686896 | orchestrator | 08:29:15.684 STDOUT terraform:  + metadata = (known after apply) 2025-04-09 08:29:15.686902 | orchestrator | 08:29:15.684 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-04-09 08:29:15.686912 | orchestrator | 08:29:15.684 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.694076 | orchestrator | 08:29:15.685 STDOUT terraform:  + size = 20 2025-04-09 08:29:15.694122 | orchestrator | 08:29:15.685 STDOUT terraform:  + volume_type = "ssd" 2025-04-09 08:29:15.694129 | orchestrator | 08:29:15.692 STDOUT terraform:  } 2025-04-09 08:29:15.694135 | orchestrator | 08:29:15.692 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-04-09 08:29:15.694141 | orchestrator | 08:29:15.692 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-09 08:29:15.694146 | orchestrator | 08:29:15.692 STDOUT terraform:  + attachment = (known after apply) 2025-04-09 08:29:15.694151 | orchestrator | 08:29:15.692 STDOUT terraform:  + availability_zone = "nova" 
2025-04-09 08:29:15.694156 | orchestrator | 08:29:15.692 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.694161 | orchestrator | 08:29:15.692 STDOUT terraform:  + metadata = (known after apply) 2025-04-09 08:29:15.694167 | orchestrator | 08:29:15.692 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-04-09 08:29:15.694172 | orchestrator | 08:29:15.692 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.694177 | orchestrator | 08:29:15.692 STDOUT terraform:  + size = 20 2025-04-09 08:29:15.694182 | orchestrator | 08:29:15.692 STDOUT terraform:  + volume_type = "ssd" 2025-04-09 08:29:15.694187 | orchestrator | 08:29:15.692 STDOUT terraform:  } 2025-04-09 08:29:15.694192 | orchestrator | 08:29:15.692 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-04-09 08:29:15.694208 | orchestrator | 08:29:15.692 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-09 08:29:15.694213 | orchestrator | 08:29:15.692 STDOUT terraform:  + attachment = (known after apply) 2025-04-09 08:29:15.694218 | orchestrator | 08:29:15.692 STDOUT terraform:  + availability_zone = "nova" 2025-04-09 08:29:15.694223 | orchestrator | 08:29:15.692 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.694228 | orchestrator | 08:29:15.692 STDOUT terraform:  + metadata = (known after apply) 2025-04-09 08:29:15.694233 | orchestrator | 08:29:15.692 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-04-09 08:29:15.694238 | orchestrator | 08:29:15.692 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.694243 | orchestrator | 08:29:15.692 STDOUT terraform:  + size = 20 2025-04-09 08:29:15.694248 | orchestrator | 08:29:15.693 STDOUT terraform:  + volume_type = "ssd" 2025-04-09 08:29:15.694253 | orchestrator | 08:29:15.693 STDOUT terraform:  } 2025-04-09 08:29:15.694260 | orchestrator | 08:29:15.693 STDOUT terraform:  # 
openstack_compute_instance_v2.manager_server will be created 2025-04-09 08:29:15.694265 | orchestrator | 08:29:15.693 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-04-09 08:29:15.694270 | orchestrator | 08:29:15.693 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-09 08:29:15.694275 | orchestrator | 08:29:15.693 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-09 08:29:15.694280 | orchestrator | 08:29:15.693 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-09 08:29:15.694285 | orchestrator | 08:29:15.693 STDOUT terraform:  + all_tags = (known after apply) 2025-04-09 08:29:15.694290 | orchestrator | 08:29:15.693 STDOUT terraform:  + availability_zone = "nova" 2025-04-09 08:29:15.694296 | orchestrator | 08:29:15.693 STDOUT terraform:  + config_drive = true 2025-04-09 08:29:15.694301 | orchestrator | 08:29:15.693 STDOUT terraform:  + created = (known after apply) 2025-04-09 08:29:15.694305 | orchestrator | 08:29:15.693 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-09 08:29:15.694310 | orchestrator | 08:29:15.693 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-04-09 08:29:15.694315 | orchestrator | 08:29:15.693 STDOUT terraform:  + force_delete = false 2025-04-09 08:29:15.694320 | orchestrator | 08:29:15.693 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.694329 | orchestrator | 08:29:15.693 STDOUT terraform:  + image_id = (known after apply) 2025-04-09 08:29:15.695293 | orchestrator | 08:29:15.693 STDOUT terraform:  + image_name = (known after apply) 2025-04-09 08:29:15.695322 | orchestrator | 08:29:15.693 STDOUT terraform:  + key_pair = "testbed" 2025-04-09 08:29:15.695328 | orchestrator | 08:29:15.693 STDOUT terraform:  + name = "testbed-manager" 2025-04-09 08:29:15.695333 | orchestrator | 08:29:15.693 STDOUT terraform:  + power_state = "active" 2025-04-09 08:29:15.695338 | orchestrator | 08:29:15.693 STDOUT terraform:  + region = (known after 
apply) 2025-04-09 08:29:15.695343 | orchestrator | 08:29:15.693 STDOUT terraform:  + security_groups = (known after apply) 2025-04-09 08:29:15.695357 | orchestrator | 08:29:15.693 STDOUT terraform:  + stop_before_destroy = false 2025-04-09 08:29:15.695362 | orchestrator | 08:29:15.693 STDOUT terraform:  + updated = (known after apply) 2025-04-09 08:29:15.695369 | orchestrator | 08:29:15.693 STDOUT terraform:  + user_data = (known after apply) 2025-04-09 08:29:15.695374 | orchestrator | 08:29:15.693 STDOUT terraform:  + block_device { 2025-04-09 08:29:15.695380 | orchestrator | 08:29:15.693 STDOUT terraform:  + boot_index = 0 2025-04-09 08:29:15.695385 | orchestrator | 08:29:15.693 STDOUT terraform:  + delete_on_termination = false 2025-04-09 08:29:15.695390 | orchestrator | 08:29:15.693 STDOUT terraform:  + destination_type = "volume" 2025-04-09 08:29:15.695395 | orchestrator | 08:29:15.693 STDOUT terraform:  + multiattach = false 2025-04-09 08:29:15.695400 | orchestrator | 08:29:15.693 STDOUT terraform:  + source_type = "volume" 2025-04-09 08:29:15.695405 | orchestrator | 08:29:15.693 STDOUT terraform:  + uuid = (known after apply) 2025-04-09 08:29:15.695413 | orchestrator | 08:29:15.693 STDOUT terraform:  } 2025-04-09 08:29:15.695419 | orchestrator | 08:29:15.693 STDOUT terraform:  + network { 2025-04-09 08:29:15.695424 | orchestrator | 08:29:15.693 STDOUT terraform:  + access_network = false 2025-04-09 08:29:15.695429 | orchestrator | 08:29:15.693 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-09 08:29:15.695435 | orchestrator | 08:29:15.693 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-09 08:29:15.695440 | orchestrator | 08:29:15.693 STDOUT terraform:  + mac = (known after apply) 2025-04-09 08:29:15.695445 | orchestrator | 08:29:15.694 STDOUT terraform:  + name = (known after apply) 2025-04-09 08:29:15.695450 | orchestrator | 08:29:15.694 STDOUT terraform:  + port = (known after apply) 2025-04-09 08:29:15.695455 | orchestrator | 
08:29:15.694 STDOUT terraform:  + uuid = (known after apply) 2025-04-09 08:29:15.695460 | orchestrator | 08:29:15.694 STDOUT terraform:  } 2025-04-09 08:29:15.695465 | orchestrator | 08:29:15.694 STDOUT terraform:  } 2025-04-09 08:29:15.695470 | orchestrator | 08:29:15.694 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-04-09 08:29:15.695475 | orchestrator | 08:29:15.694 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-09 08:29:15.695480 | orchestrator | 08:29:15.694 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-09 08:29:15.695485 | orchestrator | 08:29:15.694 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-09 08:29:15.695490 | orchestrator | 08:29:15.694 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-09 08:29:15.695495 | orchestrator | 08:29:15.694 STDOUT terraform:  + all_tags = (known after apply) 2025-04-09 08:29:15.695505 | orchestrator | 08:29:15.694 STDOUT terraform:  + availability_zone = "nova" 2025-04-09 08:29:15.695511 | orchestrator | 08:29:15.694 STDOUT terraform:  + config_drive = true 2025-04-09 08:29:15.695515 | orchestrator | 08:29:15.694 STDOUT terraform:  + created = (known after apply) 2025-04-09 08:29:15.695524 | orchestrator | 08:29:15.694 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-09 08:29:15.695529 | orchestrator | 08:29:15.694 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-09 08:29:15.695533 | orchestrator | 08:29:15.694 STDOUT terraform:  + force_delete = false 2025-04-09 08:29:15.695538 | orchestrator | 08:29:15.694 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.695543 | orchestrator | 08:29:15.694 STDOUT terraform:  + image_id = (known after apply) 2025-04-09 08:29:15.695548 | orchestrator | 08:29:15.694 STDOUT terraform:  + image_name = (known after apply) 2025-04-09 08:29:15.695553 | orchestrator | 08:29:15.694 STDOUT terraform:  + key_pair = "testbed" 2025-04-09 
2025-04-09 08:29:15 | orchestrator | STDOUT terraform:
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] through node_server[5] will be
  # created with identical attributes, differing only in name
  # ("testbed-node-2" through "testbed-node-5").

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] through
  # node_volume_attachment[15] will be created with identical attributes
  # (all values known after apply).
orchestrator | 08:29:15.708 STDOUT terraform:  } 2025-04-09 08:29:15.709268 | orchestrator | 08:29:15.708 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created 2025-04-09 08:29:15.709273 | orchestrator | 08:29:15.708 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-09 08:29:15.709278 | orchestrator | 08:29:15.708 STDOUT terraform:  + device = (known after apply) 2025-04-09 08:29:15.709283 | orchestrator | 08:29:15.708 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.709288 | orchestrator | 08:29:15.708 STDOUT terraform:  + instance_id = (known after apply) 2025-04-09 08:29:15.709293 | orchestrator | 08:29:15.708 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.709298 | orchestrator | 08:29:15.708 STDOUT terraform:  + volume_id = (known after apply) 2025-04-09 08:29:15.709303 | orchestrator | 08:29:15.708 STDOUT terraform:  } 2025-04-09 08:29:15.709307 | orchestrator | 08:29:15.708 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created 2025-04-09 08:29:15.709312 | orchestrator | 08:29:15.708 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-09 08:29:15.709317 | orchestrator | 08:29:15.708 STDOUT terraform:  + device = (known after apply) 2025-04-09 08:29:15.709322 | orchestrator | 08:29:15.708 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.709330 | orchestrator | 08:29:15.708 STDOUT terraform:  + instance_id = (known after apply) 2025-04-09 08:29:15.709340 | orchestrator | 08:29:15.708 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.709345 | orchestrator | 08:29:15.708 STDOUT terraform:  + volume_id = (known after apply) 2025-04-09 08:29:15.709350 | orchestrator | 08:29:15.709 STDOUT terraform:  } 2025-04-09 08:29:15.709355 | orchestrator | 08:29:15.709 STDOUT terraform:  # 
openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-04-09 08:29:15.709360 | orchestrator | 08:29:15.709 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-04-09 08:29:15.709366 | orchestrator | 08:29:15.709 STDOUT terraform:  + fixed_ip = (known after apply) 2025-04-09 08:29:15.709370 | orchestrator | 08:29:15.709 STDOUT terraform:  + floating_ip = (known after apply) 2025-04-09 08:29:15.709375 | orchestrator | 08:29:15.709 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.709380 | orchestrator | 08:29:15.709 STDOUT terraform:  + port_id = (known after apply) 2025-04-09 08:29:15.709387 | orchestrator | 08:29:15.709 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.709392 | orchestrator | 08:29:15.709 STDOUT terraform:  } 2025-04-09 08:29:15.709397 | orchestrator | 08:29:15.709 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-04-09 08:29:15.709404 | orchestrator | 08:29:15.709 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-04-09 08:29:15.709526 | orchestrator | 08:29:15.709 STDOUT terraform:  + address = (known after apply) 2025-04-09 08:29:15.709535 | orchestrator | 08:29:15.709 STDOUT terraform:  + all_tags = (known after apply) 2025-04-09 08:29:15.709540 | orchestrator | 08:29:15.709 STDOUT terraform:  + dns_domain = (known after apply) 2025-04-09 08:29:15.709547 | orchestrator | 08:29:15.709 STDOUT terraform:  + dns_name = (known after apply) 2025-04-09 08:29:15.709553 | orchestrator | 08:29:15.709 STDOUT terraform:  + fixed_ip = (known after apply) 2025-04-09 08:29:15.709558 | orchestrator | 08:29:15.709 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.709562 | orchestrator | 08:29:15.709 STDOUT terraform:  + pool = "public" 2025-04-09 08:29:15.709571 | orchestrator | 08:29:15.709 STDOUT terraform:  + 
port_id = (known after apply) 2025-04-09 08:29:15.709576 | orchestrator | 08:29:15.709 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.709583 | orchestrator | 08:29:15.709 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-09 08:29:15.710213 | orchestrator | 08:29:15.709 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.710222 | orchestrator | 08:29:15.709 STDOUT terraform:  } 2025-04-09 08:29:15.710230 | orchestrator | 08:29:15.709 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-04-09 08:29:15.710259 | orchestrator | 08:29:15.709 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-04-09 08:29:15.710265 | orchestrator | 08:29:15.709 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-09 08:29:15.710270 | orchestrator | 08:29:15.709 STDOUT terraform:  + all_tags = (known after apply) 2025-04-09 08:29:15.710282 | orchestrator | 08:29:15.709 STDOUT terraform:  + availability_zone_hints = [ 2025-04-09 08:29:15.710287 | orchestrator | 08:29:15.709 STDOUT terraform:  + "nova", 2025-04-09 08:29:15.710292 | orchestrator | 08:29:15.709 STDOUT terraform:  ] 2025-04-09 08:29:15.710298 | orchestrator | 08:29:15.709 STDOUT terraform:  + dns_domain = (known after apply) 2025-04-09 08:29:15.710303 | orchestrator | 08:29:15.709 STDOUT terraform:  + external = (known after apply) 2025-04-09 08:29:15.710307 | orchestrator | 08:29:15.709 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.710312 | orchestrator | 08:29:15.709 STDOUT terraform:  + mtu = (known after apply) 2025-04-09 08:29:15.710317 | orchestrator | 08:29:15.709 STDOUT terraform:  + name = "net-testbed-management" 2025-04-09 08:29:15.710322 | orchestrator | 08:29:15.709 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-09 08:29:15.710327 | orchestrator | 08:29:15.709 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-09 
08:29:15.710332 | orchestrator | 08:29:15.710 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.710337 | orchestrator | 08:29:15.710 STDOUT terraform:  + shared = (known after apply) 2025-04-09 08:29:15.710341 | orchestrator | 08:29:15.710 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.710346 | orchestrator | 08:29:15.710 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-04-09 08:29:15.710351 | orchestrator | 08:29:15.710 STDOUT terraform:  + segments (known after apply) 2025-04-09 08:29:15.710356 | orchestrator | 08:29:15.710 STDOUT terraform:  } 2025-04-09 08:29:15.710363 | orchestrator | 08:29:15.710 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-04-09 08:29:15.710616 | orchestrator | 08:29:15.710 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-04-09 08:29:15.710660 | orchestrator | 08:29:15.710 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-09 08:29:15.710694 | orchestrator | 08:29:15.710 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-09 08:29:15.710729 | orchestrator | 08:29:15.710 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-09 08:29:15.710812 | orchestrator | 08:29:15.710 STDOUT terraform:  + all_tags = (known after apply) 2025-04-09 08:29:15.710821 | orchestrator | 08:29:15.710 STDOUT terraform:  + device_id = (known after apply) 2025-04-09 08:29:15.710858 | orchestrator | 08:29:15.710 STDOUT terraform:  + device_owner = (known after apply) 2025-04-09 08:29:15.710900 | orchestrator | 08:29:15.710 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-09 08:29:15.710932 | orchestrator | 08:29:15.710 STDOUT terraform:  + dns_name = (known after apply) 2025-04-09 08:29:15.710977 | orchestrator | 08:29:15.710 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.711012 | orchestrator | 08:29:15.710 STDOUT terraform:  + 
mac_address = (known after apply) 2025-04-09 08:29:15.711054 | orchestrator | 08:29:15.711 STDOUT terraform:  + network_id = (known after apply) 2025-04-09 08:29:15.711086 | orchestrator | 08:29:15.711 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-09 08:29:15.711120 | orchestrator | 08:29:15.711 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-09 08:29:15.711156 | orchestrator | 08:29:15.711 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.711195 | orchestrator | 08:29:15.711 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-09 08:29:15.711229 | orchestrator | 08:29:15.711 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.711251 | orchestrator | 08:29:15.711 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.711281 | orchestrator | 08:29:15.711 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-09 08:29:15.711296 | orchestrator | 08:29:15.711 STDOUT terraform:  } 2025-04-09 08:29:15.711316 | orchestrator | 08:29:15.711 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.711345 | orchestrator | 08:29:15.711 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-09 08:29:15.711362 | orchestrator | 08:29:15.711 STDOUT terraform:  } 2025-04-09 08:29:15.711386 | orchestrator | 08:29:15.711 STDOUT terraform:  + binding (known after apply) 2025-04-09 08:29:15.711393 | orchestrator | 08:29:15.711 STDOUT terraform:  + fixed_ip { 2025-04-09 08:29:15.711421 | orchestrator | 08:29:15.711 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-04-09 08:29:15.711450 | orchestrator | 08:29:15.711 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-09 08:29:15.711457 | orchestrator | 08:29:15.711 STDOUT terraform:  } 2025-04-09 08:29:15.711473 | orchestrator | 08:29:15.711 STDOUT terraform:  } 2025-04-09 08:29:15.711519 | orchestrator | 08:29:15.711 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will 
be created 2025-04-09 08:29:15.711566 | orchestrator | 08:29:15.711 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-09 08:29:15.711603 | orchestrator | 08:29:15.711 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-09 08:29:15.711639 | orchestrator | 08:29:15.711 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-09 08:29:15.711673 | orchestrator | 08:29:15.711 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-09 08:29:15.711709 | orchestrator | 08:29:15.711 STDOUT terraform:  + all_tags = (known after apply) 2025-04-09 08:29:15.711745 | orchestrator | 08:29:15.711 STDOUT terraform:  + device_id = (known after apply) 2025-04-09 08:29:15.711794 | orchestrator | 08:29:15.711 STDOUT terraform:  + device_owner = (known after apply) 2025-04-09 08:29:15.711828 | orchestrator | 08:29:15.711 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-09 08:29:15.711864 | orchestrator | 08:29:15.711 STDOUT terraform:  + dns_name = (known after apply) 2025-04-09 08:29:15.711900 | orchestrator | 08:29:15.711 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.711937 | orchestrator | 08:29:15.711 STDOUT terraform:  + mac_address = (known after apply) 2025-04-09 08:29:15.711972 | orchestrator | 08:29:15.711 STDOUT terraform:  + network_id = (known after apply) 2025-04-09 08:29:15.712009 | orchestrator | 08:29:15.711 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-09 08:29:15.712043 | orchestrator | 08:29:15.712 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-09 08:29:15.712081 | orchestrator | 08:29:15.712 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.712119 | orchestrator | 08:29:15.712 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-09 08:29:15.712153 | orchestrator | 08:29:15.712 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.712173 | 
orchestrator | 08:29:15.712 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.712208 | orchestrator | 08:29:15.712 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-09 08:29:15.712235 | orchestrator | 08:29:15.712 STDOUT terraform:  } 2025-04-09 08:29:15.712242 | orchestrator | 08:29:15.712 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.712261 | orchestrator | 08:29:15.712 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-09 08:29:15.712268 | orchestrator | 08:29:15.712 STDOUT terraform:  } 2025-04-09 08:29:15.712293 | orchestrator | 08:29:15.712 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.712342 | orchestrator | 08:29:15.712 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-09 08:29:15.712369 | orchestrator | 08:29:15.712 STDOUT terraform:  } 2025-04-09 08:29:15.712376 | orchestrator | 08:29:15.712 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.712382 | orchestrator | 08:29:15.712 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-09 08:29:15.712403 | orchestrator | 08:29:15.712 STDOUT terraform:  } 2025-04-09 08:29:15.712415 | orchestrator | 08:29:15.712 STDOUT terraform:  + binding (known after apply) 2025-04-09 08:29:15.713112 | orchestrator | 08:29:15.712 STDOUT terraform:  + fixed_ip { 2025-04-09 08:29:15.713124 | orchestrator | 08:29:15.712 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-04-09 08:29:15.713144 | orchestrator | 08:29:15.713 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-09 08:29:15.713152 | orchestrator | 08:29:15.713 STDOUT terraform:  } 2025-04-09 08:29:15.713169 | orchestrator | 08:29:15.713 STDOUT terraform:  } 2025-04-09 08:29:15.713223 | orchestrator | 08:29:15.713 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-04-09 08:29:15.713271 | orchestrator | 08:29:15.713 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-09 
08:29:15.718068 | orchestrator | 08:29:15.713 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-09 08:29:15.718105 | orchestrator | 08:29:15.714 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-09 08:29:15.718112 | orchestrator | 08:29:15.714 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-09 08:29:15.718124 | orchestrator | 08:29:15.714 STDOUT terraform:  + all_tags = (known after apply) 2025-04-09 08:29:15.718130 | orchestrator | 08:29:15.714 STDOUT terraform:  + device_id = (known after apply) 2025-04-09 08:29:15.718135 | orchestrator | 08:29:15.714 STDOUT terraform:  + device_owner = (known after apply) 2025-04-09 08:29:15.718148 | orchestrator | 08:29:15.714 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-09 08:29:15.718153 | orchestrator | 08:29:15.714 STDOUT terraform:  + dns_name = (known after apply) 2025-04-09 08:29:15.718158 | orchestrator | 08:29:15.714 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.718163 | orchestrator | 08:29:15.714 STDOUT terraform:  + mac_address = (known after apply) 2025-04-09 08:29:15.718168 | orchestrator | 08:29:15.714 STDOUT terraform:  + network_id = (known after apply) 2025-04-09 08:29:15.718173 | orchestrator | 08:29:15.714 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-09 08:29:15.718178 | orchestrator | 08:29:15.715 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-09 08:29:15.718183 | orchestrator | 08:29:15.715 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.718188 | orchestrator | 08:29:15.715 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-09 08:29:15.718193 | orchestrator | 08:29:15.715 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.718198 | orchestrator | 08:29:15.715 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.718203 | orchestrator | 08:29:15.715 STDOUT terraform:  + ip_address = 
"192.168.112.0/20" 2025-04-09 08:29:15.718208 | orchestrator | 08:29:15.715 STDOUT terraform:  } 2025-04-09 08:29:15.718214 | orchestrator | 08:29:15.715 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.718219 | orchestrator | 08:29:15.715 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-09 08:29:15.718224 | orchestrator | 08:29:15.715 STDOUT terraform:  } 2025-04-09 08:29:15.718229 | orchestrator | 08:29:15.715 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.718233 | orchestrator | 08:29:15.715 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-09 08:29:15.718238 | orchestrator | 08:29:15.715 STDOUT terraform:  } 2025-04-09 08:29:15.718243 | orchestrator | 08:29:15.715 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.718248 | orchestrator | 08:29:15.715 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-09 08:29:15.718253 | orchestrator | 08:29:15.715 STDOUT terraform:  } 2025-04-09 08:29:15.718258 | orchestrator | 08:29:15.715 STDOUT terraform:  + binding (known after apply) 2025-04-09 08:29:15.718263 | orchestrator | 08:29:15.715 STDOUT terraform:  + fixed_ip { 2025-04-09 08:29:15.718268 | orchestrator | 08:29:15.715 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-04-09 08:29:15.718273 | orchestrator | 08:29:15.715 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-09 08:29:15.718278 | orchestrator | 08:29:15.715 STDOUT terraform:  } 2025-04-09 08:29:15.718283 | orchestrator | 08:29:15.715 STDOUT terraform:  } 2025-04-09 08:29:15.718288 | orchestrator | 08:29:15.715 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-04-09 08:29:15.718293 | orchestrator | 08:29:15.715 STDOUT terraform:  + resource "openstack_ne 2025-04-09 08:29:15.718298 | orchestrator | 08:29:15.715 STDOUT terraform: tworking_port_v2" "node_port_management" { 2025-04-09 08:29:15.718306 | orchestrator | 08:29:15.715 STDOUT terraform:  + admin_state_up = 
(known after apply) 2025-04-09 08:29:15.718311 | orchestrator | 08:29:15.715 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-09 08:29:15.718324 | orchestrator | 08:29:15.715 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-09 08:29:15.718329 | orchestrator | 08:29:15.715 STDOUT terraform:  + all_tags = (known after apply) 2025-04-09 08:29:15.718334 | orchestrator | 08:29:15.715 STDOUT terraform:  + device_id = (known after apply) 2025-04-09 08:29:15.718339 | orchestrator | 08:29:15.715 STDOUT terraform:  + device_owner = (known after apply) 2025-04-09 08:29:15.718344 | orchestrator | 08:29:15.715 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-09 08:29:15.718351 | orchestrator | 08:29:15.715 STDOUT terraform:  + dns_name = (known after apply) 2025-04-09 08:29:15.718356 | orchestrator | 08:29:15.715 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.718361 | orchestrator | 08:29:15.715 STDOUT terraform:  + mac_address = (known after apply) 2025-04-09 08:29:15.718366 | orchestrator | 08:29:15.715 STDOUT terraform:  + network_id = (known after apply) 2025-04-09 08:29:15.718371 | orchestrator | 08:29:15.715 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-09 08:29:15.718375 | orchestrator | 08:29:15.715 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-09 08:29:15.718383 | orchestrator | 08:29:15.715 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.718388 | orchestrator | 08:29:15.715 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-09 08:29:15.718392 | orchestrator | 08:29:15.716 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.718397 | orchestrator | 08:29:15.716 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.718402 | orchestrator | 08:29:15.716 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-09 08:29:15.718407 | orchestrator | 08:29:15.716 STDOUT terraform:  
} 2025-04-09 08:29:15.718412 | orchestrator | 08:29:15.716 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.718417 | orchestrator | 08:29:15.716 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-09 08:29:15.718422 | orchestrator | 08:29:15.716 STDOUT terraform:  } 2025-04-09 08:29:15.718427 | orchestrator | 08:29:15.716 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.718432 | orchestrator | 08:29:15.716 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-09 08:29:15.718436 | orchestrator | 08:29:15.716 STDOUT terraform:  } 2025-04-09 08:29:15.718442 | orchestrator | 08:29:15.716 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.718446 | orchestrator | 08:29:15.716 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-09 08:29:15.718451 | orchestrator | 08:29:15.716 STDOUT terraform:  } 2025-04-09 08:29:15.718456 | orchestrator | 08:29:15.716 STDOUT terraform:  + binding (known after apply) 2025-04-09 08:29:15.718461 | orchestrator | 08:29:15.716 STDOUT terraform:  + fixed_ip { 2025-04-09 08:29:15.718470 | orchestrator | 08:29:15.716 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-04-09 08:29:15.718475 | orchestrator | 08:29:15.716 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-09 08:29:15.718480 | orchestrator | 08:29:15.716 STDOUT terraform:  } 2025-04-09 08:29:15.718485 | orchestrator | 08:29:15.716 STDOUT terraform:  } 2025-04-09 08:29:15.718490 | orchestrator | 08:29:15.716 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-04-09 08:29:15.718495 | orchestrator | 08:29:15.716 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-09 08:29:15.718500 | orchestrator | 08:29:15.716 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-09 08:29:15.718505 | orchestrator | 08:29:15.716 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-09 08:29:15.718510 | orchestrator 
| 08:29:15.716 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-09 08:29:15.718515 | orchestrator | 08:29:15.716 STDOUT terraform:  + all_tags = (known after apply) 2025-04-09 08:29:15.718522 | orchestrator | 08:29:15.716 STDOUT terraform:  + device_id = (known after apply) 2025-04-09 08:29:15.718527 | orchestrator | 08:29:15.716 STDOUT terraform:  + device_owner = (known after apply) 2025-04-09 08:29:15.718532 | orchestrator | 08:29:15.716 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-09 08:29:15.718537 | orchestrator | 08:29:15.716 STDOUT terraform:  + dns_name = (known after apply) 2025-04-09 08:29:15.718542 | orchestrator | 08:29:15.716 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.718547 | orchestrator | 08:29:15.716 STDOUT terraform:  + mac_address = (known after apply) 2025-04-09 08:29:15.718551 | orchestrator | 08:29:15.716 STDOUT terraform:  + network_id = (known after apply) 2025-04-09 08:29:15.718556 | orchestrator | 08:29:15.716 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-09 08:29:15.718561 | orchestrator | 08:29:15.716 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-09 08:29:15.718566 | orchestrator | 08:29:15.716 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.718571 | orchestrator | 08:29:15.716 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-09 08:29:15.718576 | orchestrator | 08:29:15.716 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.718581 | orchestrator | 08:29:15.716 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.718586 | orchestrator | 08:29:15.716 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-09 08:29:15.718591 | orchestrator | 08:29:15.716 STDOUT terraform:  } 2025-04-09 08:29:15.718596 | orchestrator | 08:29:15.717 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.718603 | orchestrator | 08:29:15.717 STDOUT 
terraform:  + ip_address = "192.168.16.254/20" 2025-04-09 08:29:15.718608 | orchestrator | 08:29:15.717 STDOUT terraform:  } 2025-04-09 08:29:15.718613 | orchestrator | 08:29:15.717 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.718618 | orchestrator | 08:29:15.717 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-09 08:29:15.718626 | orchestrator | 08:29:15.717 STDOUT terraform:  } 2025-04-09 08:29:15.718631 | orchestrator | 08:29:15.717 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.718636 | orchestrator | 08:29:15.717 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-09 08:29:15.718640 | orchestrator | 08:29:15.717 STDOUT terraform:  } 2025-04-09 08:29:15.718645 | orchestrator | 08:29:15.717 STDOUT terraform:  + binding (known after apply) 2025-04-09 08:29:15.718650 | orchestrator | 08:29:15.717 STDOUT terraform:  + fixed_ip { 2025-04-09 08:29:15.718655 | orchestrator | 08:29:15.717 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-04-09 08:29:15.718660 | orchestrator | 08:29:15.717 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-09 08:29:15.718665 | orchestrator | 08:29:15.717 STDOUT terraform:  } 2025-04-09 08:29:15.718670 | orchestrator | 08:29:15.717 STDOUT terraform:  } 2025-04-09 08:29:15.718675 | orchestrator | 08:29:15.717 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-04-09 08:29:15.718680 | orchestrator | 08:29:15.717 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-09 08:29:15.718685 | orchestrator | 08:29:15.717 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-09 08:29:15.718690 | orchestrator | 08:29:15.717 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-09 08:29:15.718694 | orchestrator | 08:29:15.717 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-09 08:29:15.718699 | orchestrator | 08:29:15.717 STDOUT terraform:  + all_tags 
= (known after apply) 2025-04-09 08:29:15.718704 | orchestrator | 08:29:15.717 STDOUT terraform:  + device_id = (known after apply) 2025-04-09 08:29:15.718709 | orchestrator | 08:29:15.717 STDOUT terraform:  + device_owner = (known after apply) 2025-04-09 08:29:15.718714 | orchestrator | 08:29:15.717 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-09 08:29:15.718735 | orchestrator | 08:29:15.717 STDOUT terraform:  + dns_name = (known after apply) 2025-04-09 08:29:15.718741 | orchestrator | 08:29:15.717 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.718746 | orchestrator | 08:29:15.717 STDOUT terraform:  + mac_address = (known after apply) 2025-04-09 08:29:15.718751 | orchestrator | 08:29:15.717 STDOUT terraform:  + network_id = (known after apply) 2025-04-09 08:29:15.718774 | orchestrator | 08:29:15.717 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-09 08:29:15.718780 | orchestrator | 08:29:15.717 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-09 08:29:15.718784 | orchestrator | 08:29:15.717 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.718789 | orchestrator | 08:29:15.717 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-09 08:29:15.718794 | orchestrator | 08:29:15.717 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.718799 | orchestrator | 08:29:15.717 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.718804 | orchestrator | 08:29:15.717 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-09 08:29:15.718819 | orchestrator | 08:29:15.717 STDOUT terraform:  } 2025-04-09 08:29:15.718824 | orchestrator | 08:29:15.717 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.718829 | orchestrator | 08:29:15.717 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-09 08:29:15.718834 | orchestrator | 08:29:15.717 STDOUT terraform:  } 2025-04-09 08:29:15.718839 | orchestrator | 
08:29:15.717 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.718844 | orchestrator | 08:29:15.717 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-09 08:29:15.718849 | orchestrator | 08:29:15.718 STDOUT terraform:  } 2025-04-09 08:29:15.718854 | orchestrator | 08:29:15.718 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.718859 | orchestrator | 08:29:15.718 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-09 08:29:15.718864 | orchestrator | 08:29:15.718 STDOUT terraform:  } 2025-04-09 08:29:15.718869 | orchestrator | 08:29:15.718 STDOUT terraform:  + binding (known after apply) 2025-04-09 08:29:15.718874 | orchestrator | 08:29:15.718 STDOUT terraform:  + fixed_ip { 2025-04-09 08:29:15.718879 | orchestrator | 08:29:15.718 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-04-09 08:29:15.718884 | orchestrator | 08:29:15.718 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-09 08:29:15.718888 | orchestrator | 08:29:15.718 STDOUT terraform:  } 2025-04-09 08:29:15.718893 | orchestrator | 08:29:15.718 STDOUT terraform:  } 2025-04-09 08:29:15.718898 | orchestrator | 08:29:15.718 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-04-09 08:29:15.718903 | orchestrator | 08:29:15.718 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-09 08:29:15.718909 | orchestrator | 08:29:15.718 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-09 08:29:15.718913 | orchestrator | 08:29:15.718 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-09 08:29:15.718918 | orchestrator | 08:29:15.718 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-09 08:29:15.718923 | orchestrator | 08:29:15.718 STDOUT terraform:  + all_tags = (known after apply) 2025-04-09 08:29:15.718928 | orchestrator | 08:29:15.718 STDOUT terraform:  + device_id = (known after apply) 2025-04-09 08:29:15.718933 | 
orchestrator | 08:29:15.718 STDOUT terraform:  + device_owner = (known after apply) 2025-04-09 08:29:15.718938 | orchestrator | 08:29:15.718 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-09 08:29:15.718944 | orchestrator | 08:29:15.718 STDOUT terraform:  + dns_name = (known after apply) 2025-04-09 08:29:15.718949 | orchestrator | 08:29:15.718 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.718957 | orchestrator | 08:29:15.718 STDOUT terraform:  + mac_address = (known after apply) 2025-04-09 08:29:15.718983 | orchestrator | 08:29:15.718 STDOUT terraform:  + network_id = (known after apply) 2025-04-09 08:29:15.718989 | orchestrator | 08:29:15.718 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-09 08:29:15.718994 | orchestrator | 08:29:15.718 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-09 08:29:15.719002 | orchestrator | 08:29:15.718 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.719007 | orchestrator | 08:29:15.718 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-09 08:29:15.719012 | orchestrator | 08:29:15.718 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.719016 | orchestrator | 08:29:15.718 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.719026 | orchestrator | 08:29:15.718 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-09 08:29:15.719031 | orchestrator | 08:29:15.718 STDOUT terraform:  } 2025-04-09 08:29:15.719036 | orchestrator | 08:29:15.718 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.719041 | orchestrator | 08:29:15.718 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-09 08:29:15.719046 | orchestrator | 08:29:15.718 STDOUT terraform:  } 2025-04-09 08:29:15.719051 | orchestrator | 08:29:15.718 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.719056 | orchestrator | 08:29:15.718 STDOUT terraform:  + ip_address = "192.168.16.8/20" 
2025-04-09 08:29:15.719061 | orchestrator | 08:29:15.718 STDOUT terraform:  } 2025-04-09 08:29:15.719066 | orchestrator | 08:29:15.718 STDOUT terraform:  + allowed_address_pairs { 2025-04-09 08:29:15.719070 | orchestrator | 08:29:15.718 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-09 08:29:15.719075 | orchestrator | 08:29:15.718 STDOUT terraform:  } 2025-04-09 08:29:15.719082 | orchestrator | 08:29:15.718 STDOUT terraform:  + binding (known after apply) 2025-04-09 08:29:15.719103 | orchestrator | 08:29:15.718 STDOUT terraform:  + fixed_ip { 2025-04-09 08:29:15.719111 | orchestrator | 08:29:15.718 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-04-09 08:29:15.719116 | orchestrator | 08:29:15.719 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-09 08:29:15.719121 | orchestrator | 08:29:15.719 STDOUT terraform:  } 2025-04-09 08:29:15.719126 | orchestrator | 08:29:15.719 STDOUT terraform:  } 2025-04-09 08:29:15.719132 | orchestrator | 08:29:15.719 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-04-09 08:29:15.719151 | orchestrator | 08:29:15.719 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-04-09 08:29:15.719171 | orchestrator | 08:29:15.719 STDOUT terraform:  + force_destroy = false 2025-04-09 08:29:15.719201 | orchestrator | 08:29:15.719 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.719230 | orchestrator | 08:29:15.719 STDOUT terraform:  + port_id = (known after apply) 2025-04-09 08:29:15.719258 | orchestrator | 08:29:15.719 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.719287 | orchestrator | 08:29:15.719 STDOUT terraform:  + router_id = (known after apply) 2025-04-09 08:29:15.719315 | orchestrator | 08:29:15.719 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-09 08:29:15.719322 | orchestrator | 08:29:15.719 STDOUT terraform:  } 2025-04-09 08:29:15.719360 | orchestrator | 
08:29:15.719 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-04-09 08:29:15.719395 | orchestrator | 08:29:15.719 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-04-09 08:29:15.719432 | orchestrator | 08:29:15.719 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-09 08:29:15.719468 | orchestrator | 08:29:15.719 STDOUT terraform:  + all_tags = (known after apply) 2025-04-09 08:29:15.719490 | orchestrator | 08:29:15.719 STDOUT terraform:  + availability_zone_hints = [ 2025-04-09 08:29:15.719505 | orchestrator | 08:29:15.719 STDOUT terraform:  + "nova", 2025-04-09 08:29:15.719512 | orchestrator | 08:29:15.719 STDOUT terraform:  ] 2025-04-09 08:29:15.719552 | orchestrator | 08:29:15.719 STDOUT terraform:  + distributed = (known after apply) 2025-04-09 08:29:15.719588 | orchestrator | 08:29:15.719 STDOUT terraform:  + enable_snat = (known after apply) 2025-04-09 08:29:15.719639 | orchestrator | 08:29:15.719 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-04-09 08:29:15.719674 | orchestrator | 08:29:15.719 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.719703 | orchestrator | 08:29:15.719 STDOUT terraform:  + name = "testbed" 2025-04-09 08:29:15.719740 | orchestrator | 08:29:15.719 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.719787 | orchestrator | 08:29:15.719 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.719817 | orchestrator | 08:29:15.719 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-04-09 08:29:15.719824 | orchestrator | 08:29:15.719 STDOUT terraform:  } 2025-04-09 08:29:15.727285 | orchestrator | 08:29:15.719 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-04-09 08:29:15.727322 | orchestrator | 08:29:15.720 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" 
"security_group_management_rule1" { 2025-04-09 08:29:15.727328 | orchestrator | 08:29:15.720 STDOUT terraform:  + description = "ssh" 2025-04-09 08:29:15.727335 | orchestrator | 08:29:15.720 STDOUT terraform:  + direction = "ingress" 2025-04-09 08:29:15.727341 | orchestrator | 08:29:15.720 STDOUT terraform:  + ethertype = "IPv4" 2025-04-09 08:29:15.727346 | orchestrator | 08:29:15.720 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.727351 | orchestrator | 08:29:15.720 STDOUT terraform:  + port_range_max = 22 2025-04-09 08:29:15.727357 | orchestrator | 08:29:15.720 STDOUT terraform:  + port_range_min = 22 2025-04-09 08:29:15.727362 | orchestrator | 08:29:15.720 STDOUT terraform:  + protocol = "tcp" 2025-04-09 08:29:15.727367 | orchestrator | 08:29:15.720 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.727372 | orchestrator | 08:29:15.720 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-09 08:29:15.727377 | orchestrator | 08:29:15.720 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-09 08:29:15.727382 | orchestrator | 08:29:15.720 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-09 08:29:15.727387 | orchestrator | 08:29:15.720 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.727393 | orchestrator | 08:29:15.720 STDOUT terraform:  } 2025-04-09 08:29:15.727412 | orchestrator | 08:29:15.720 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-04-09 08:29:15.727418 | orchestrator | 08:29:15.720 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-04-09 08:29:15.727423 | orchestrator | 08:29:15.720 STDOUT terraform:  + description = "wireguard" 2025-04-09 08:29:15.727427 | orchestrator | 08:29:15.720 STDOUT terraform:  + direction = "ingress" 2025-04-09 08:29:15.727432 | orchestrator | 08:29:15.720 STDOUT terraform:  + ethertype = "IPv4" 
2025-04-09 08:29:15.727437 | orchestrator | 08:29:15.720 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.727442 | orchestrator | 08:29:15.720 STDOUT terraform:  + port_range_max = 51820 2025-04-09 08:29:15.727447 | orchestrator | 08:29:15.720 STDOUT terraform:  + port_range_min = 51820 2025-04-09 08:29:15.727452 | orchestrator | 08:29:15.720 STDOUT terraform:  + protocol = "udp" 2025-04-09 08:29:15.727457 | orchestrator | 08:29:15.720 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.727462 | orchestrator | 08:29:15.720 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-09 08:29:15.727467 | orchestrator | 08:29:15.720 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-09 08:29:15.727472 | orchestrator | 08:29:15.720 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-09 08:29:15.727477 | orchestrator | 08:29:15.720 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.727483 | orchestrator | 08:29:15.721 STDOUT terraform:  } 2025-04-09 08:29:15.727488 | orchestrator | 08:29:15.721 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-04-09 08:29:15.727493 | orchestrator | 08:29:15.721 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-04-09 08:29:15.727498 | orchestrator | 08:29:15.721 STDOUT terraform:  + direction = "ingress" 2025-04-09 08:29:15.727503 | orchestrator | 08:29:15.721 STDOUT terraform:  + ethertype = "IPv4" 2025-04-09 08:29:15.727508 | orchestrator | 08:29:15.721 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.727513 | orchestrator | 08:29:15.721 STDOUT terraform:  + protocol = "tcp" 2025-04-09 08:29:15.727523 | orchestrator | 08:29:15.721 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.727529 | orchestrator | 08:29:15.721 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-09 
08:29:15.727534 | orchestrator | 08:29:15.721 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-04-09 08:29:15.727538 | orchestrator | 08:29:15.721 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-09 08:29:15.727543 | orchestrator | 08:29:15.721 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.727548 | orchestrator | 08:29:15.725 STDOUT terraform:  } 2025-04-09 08:29:15.727553 | orchestrator | 08:29:15.725 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-04-09 08:29:15.727558 | orchestrator | 08:29:15.725 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-04-09 08:29:15.727568 | orchestrator | 08:29:15.725 STDOUT terraform:  + direction = "ingress" 2025-04-09 08:29:15.727572 | orchestrator | 08:29:15.725 STDOUT terraform:  + ethertype = "IPv4" 2025-04-09 08:29:15.727578 | orchestrator | 08:29:15.725 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.727588 | orchestrator | 08:29:15.725 STDOUT terraform:  + protocol = "udp" 2025-04-09 08:29:15.727596 | orchestrator | 08:29:15.725 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.727601 | orchestrator | 08:29:15.725 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-09 08:29:15.727606 | orchestrator | 08:29:15.725 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-04-09 08:29:15.727611 | orchestrator | 08:29:15.725 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-09 08:29:15.727616 | orchestrator | 08:29:15.725 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.727621 | orchestrator | 08:29:15.725 STDOUT terraform:  } 2025-04-09 08:29:15.727626 | orchestrator | 08:29:15.725 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-04-09 08:29:15.727631 | orchestrator | 08:29:15.725 
STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-04-09 08:29:15.727639 | orchestrator | 08:29:15.725 STDOUT terraform:  + direction = "ingress" 2025-04-09 08:29:15.727644 | orchestrator | 08:29:15.726 STDOUT terraform:  + ethertype = "IPv4" 2025-04-09 08:29:15.727648 | orchestrator | 08:29:15.726 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.727653 | orchestrator | 08:29:15.726 STDOUT terraform:  + protocol = "icmp" 2025-04-09 08:29:15.727658 | orchestrator | 08:29:15.726 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.727663 | orchestrator | 08:29:15.726 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-09 08:29:15.727668 | orchestrator | 08:29:15.726 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-09 08:29:15.727673 | orchestrator | 08:29:15.726 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-09 08:29:15.727678 | orchestrator | 08:29:15.726 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.727683 | orchestrator | 08:29:15.726 STDOUT terraform:  } 2025-04-09 08:29:15.727688 | orchestrator | 08:29:15.726 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-04-09 08:29:15.727693 | orchestrator | 08:29:15.726 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-04-09 08:29:15.727698 | orchestrator | 08:29:15.726 STDOUT terraform:  + direction = "ingress" 2025-04-09 08:29:15.727703 | orchestrator | 08:29:15.726 STDOUT terraform:  + ethertype = "IPv4" 2025-04-09 08:29:15.727708 | orchestrator | 08:29:15.726 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.727713 | orchestrator | 08:29:15.726 STDOUT terraform:  + protocol = "tcp" 2025-04-09 08:29:15.727718 | orchestrator | 08:29:15.726 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.727729 | orchestrator 
| 08:29:15.726 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-09 08:29:15.727734 | orchestrator | 08:29:15.726 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-09 08:29:15.727739 | orchestrator | 08:29:15.726 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-09 08:29:15.727743 | orchestrator | 08:29:15.726 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.727748 | orchestrator | 08:29:15.726 STDOUT terraform:  } 2025-04-09 08:29:15.727753 | orchestrator | 08:29:15.726 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-04-09 08:29:15.727799 | orchestrator | 08:29:15.726 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-04-09 08:29:15.727804 | orchestrator | 08:29:15.726 STDOUT terraform:  + direction = "ingress" 2025-04-09 08:29:15.727809 | orchestrator | 08:29:15.726 STDOUT terraform:  + ethertype = "IPv4" 2025-04-09 08:29:15.727814 | orchestrator | 08:29:15.726 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.727819 | orchestrator | 08:29:15.726 STDOUT terraform:  + protocol = "udp" 2025-04-09 08:29:15.727824 | orchestrator | 08:29:15.726 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.727828 | orchestrator | 08:29:15.726 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-09 08:29:15.727833 | orchestrator | 08:29:15.726 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-09 08:29:15.727838 | orchestrator | 08:29:15.726 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-09 08:29:15.727843 | orchestrator | 08:29:15.726 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.727848 | orchestrator | 08:29:15.726 STDOUT terraform:  } 2025-04-09 08:29:15.727853 | orchestrator | 08:29:15.726 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 
2025-04-09 08:29:15.727858 | orchestrator | 08:29:15.727 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-04-09 08:29:15.727863 | orchestrator | 08:29:15.727 STDOUT terraform:  + direction = "ingress" 2025-04-09 08:29:15.727868 | orchestrator | 08:29:15.727 STDOUT terraform:  + ethertype = "IPv4" 2025-04-09 08:29:15.727872 | orchestrator | 08:29:15.727 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.727877 | orchestrator | 08:29:15.727 STDOUT terraform:  + protocol = "icmp" 2025-04-09 08:29:15.727882 | orchestrator | 08:29:15.727 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.727887 | orchestrator | 08:29:15.727 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-09 08:29:15.727892 | orchestrator | 08:29:15.727 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-09 08:29:15.727896 | orchestrator | 08:29:15.727 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-09 08:29:15.727901 | orchestrator | 08:29:15.727 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.727906 | orchestrator | 08:29:15.727 STDOUT terraform:  } 2025-04-09 08:29:15.727911 | orchestrator | 08:29:15.727 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-04-09 08:29:15.727920 | orchestrator | 08:29:15.727 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-04-09 08:29:15.727925 | orchestrator | 08:29:15.727 STDOUT terraform:  + description = "vrrp" 2025-04-09 08:29:15.727930 | orchestrator | 08:29:15.727 STDOUT terraform:  + direction = "ingress" 2025-04-09 08:29:15.727935 | orchestrator | 08:29:15.727 STDOUT terraform:  + ethertype = "IPv4" 2025-04-09 08:29:15.727939 | orchestrator | 08:29:15.727 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.727944 | orchestrator | 08:29:15.727 STDOUT terraform:  + protocol = "112" 
2025-04-09 08:29:15.727949 | orchestrator | 08:29:15.727 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.727959 | orchestrator | 08:29:15.727 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-09 08:29:15.728069 | orchestrator | 08:29:15.727 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-09 08:29:15.728076 | orchestrator | 08:29:15.727 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-09 08:29:15.728084 | orchestrator | 08:29:15.727 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.728089 | orchestrator | 08:29:15.727 STDOUT terraform:  } 2025-04-09 08:29:15.728094 | orchestrator | 08:29:15.727 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-04-09 08:29:15.728099 | orchestrator | 08:29:15.727 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-04-09 08:29:15.728104 | orchestrator | 08:29:15.727 STDOUT terraform:  + all_tags = (known after apply) 2025-04-09 08:29:15.728109 | orchestrator | 08:29:15.727 STDOUT terraform:  + description = "management security group" 2025-04-09 08:29:15.728114 | orchestrator | 08:29:15.727 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.728119 | orchestrator | 08:29:15.727 STDOUT terraform:  + name = "testbed-management" 2025-04-09 08:29:15.728125 | orchestrator | 08:29:15.727 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.728220 | orchestrator | 08:29:15.727 STDOUT terraform:  + stateful = (known after apply) 2025-04-09 08:29:15.728230 | orchestrator | 08:29:15.728 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.728235 | orchestrator | 08:29:15.728 STDOUT terraform:  } 2025-04-09 08:29:15.728242 | orchestrator | 08:29:15.728 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-04-09 08:29:15.729993 | orchestrator | 08:29:15.728 STDOUT 
terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-04-09 08:29:15.737103 | orchestrator | 08:29:15.729 STDOUT terraform:  + all_tags = (known after apply) 2025-04-09 08:29:15.737143 | orchestrator | 08:29:15.734 STDOUT terraform:  + description = "node security group" 2025-04-09 08:29:15.737149 | orchestrator | 08:29:15.734 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.737155 | orchestrator | 08:29:15.734 STDOUT terraform:  + name = "testbed-node" 2025-04-09 08:29:15.737161 | orchestrator | 08:29:15.734 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.737176 | orchestrator | 08:29:15.734 STDOUT terraform:  + stateful = (known after apply) 2025-04-09 08:29:15.737181 | orchestrator | 08:29:15.734 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.737186 | orchestrator | 08:29:15.734 STDOUT terraform:  } 2025-04-09 08:29:15.737191 | orchestrator | 08:29:15.734 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-04-09 08:29:15.737197 | orchestrator | 08:29:15.734 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-04-09 08:29:15.737203 | orchestrator | 08:29:15.734 STDOUT terraform:  + all_tags = (known after apply) 2025-04-09 08:29:15.737208 | orchestrator | 08:29:15.734 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-04-09 08:29:15.737213 | orchestrator | 08:29:15.734 STDOUT terraform:  + dns_nameservers = [ 2025-04-09 08:29:15.737218 | orchestrator | 08:29:15.734 STDOUT terraform:  + "8.8.8.8", 2025-04-09 08:29:15.737223 | orchestrator | 08:29:15.734 STDOUT terraform:  + "9.9.9.9", 2025-04-09 08:29:15.737228 | orchestrator | 08:29:15.734 STDOUT terraform:  ] 2025-04-09 08:29:15.737234 | orchestrator | 08:29:15.734 STDOUT terraform:  + enable_dhcp = true 2025-04-09 08:29:15.737238 | orchestrator | 08:29:15.734 STDOUT terraform:  + gateway_ip = (known after apply) 2025-04-09 08:29:15.737244 | 
orchestrator | 08:29:15.734 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.737249 | orchestrator | 08:29:15.734 STDOUT terraform:  + ip_version = 4 2025-04-09 08:29:15.737254 | orchestrator | 08:29:15.734 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-04-09 08:29:15.737259 | orchestrator | 08:29:15.734 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-04-09 08:29:15.737264 | orchestrator | 08:29:15.734 STDOUT terraform:  + name = "subnet-testbed-management" 2025-04-09 08:29:15.737269 | orchestrator | 08:29:15.734 STDOUT terraform:  + network_id = (known after apply) 2025-04-09 08:29:15.737273 | orchestrator | 08:29:15.734 STDOUT terraform:  + no_gateway = false 2025-04-09 08:29:15.737278 | orchestrator | 08:29:15.734 STDOUT terraform:  + region = (known after apply) 2025-04-09 08:29:15.737283 | orchestrator | 08:29:15.734 STDOUT terraform:  + service_types = (known after apply) 2025-04-09 08:29:15.737288 | orchestrator | 08:29:15.734 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-09 08:29:15.737293 | orchestrator | 08:29:15.734 STDOUT terraform:  + allocation_pool { 2025-04-09 08:29:15.737298 | orchestrator | 08:29:15.734 STDOUT terraform:  + end = "192.168.31.250" 2025-04-09 08:29:15.737303 | orchestrator | 08:29:15.734 STDOUT terraform:  + start = "192.168.31.200" 2025-04-09 08:29:15.737308 | orchestrator | 08:29:15.734 STDOUT terraform:  } 2025-04-09 08:29:15.737313 | orchestrator | 08:29:15.734 STDOUT terraform:  } 2025-04-09 08:29:15.737318 | orchestrator | 08:29:15.734 STDOUT terraform:  # terraform_data.image will be created 2025-04-09 08:29:15.737322 | orchestrator | 08:29:15.734 STDOUT terraform:  + resource "terraform_data" "image" { 2025-04-09 08:29:15.737327 | orchestrator | 08:29:15.734 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.737335 | orchestrator | 08:29:15.734 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-04-09 08:29:15.737340 | orchestrator | 08:29:15.734 
STDOUT terraform:  + output = (known after apply) 2025-04-09 08:29:15.737345 | orchestrator | 08:29:15.734 STDOUT terraform:  } 2025-04-09 08:29:15.737350 | orchestrator | 08:29:15.734 STDOUT terraform:  # terraform_data.image_node will be created 2025-04-09 08:29:15.737360 | orchestrator | 08:29:15.735 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-04-09 08:29:15.874531 | orchestrator | 08:29:15.735 STDOUT terraform:  + id = (known after apply) 2025-04-09 08:29:15.874599 | orchestrator | 08:29:15.735 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-04-09 08:29:15.874607 | orchestrator | 08:29:15.735 STDOUT terraform:  + output = (known after apply) 2025-04-09 08:29:15.874613 | orchestrator | 08:29:15.735 STDOUT terraform:  } 2025-04-09 08:29:15.874622 | orchestrator | 08:29:15.735 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-04-09 08:29:15.874628 | orchestrator | 08:29:15.735 STDOUT terraform: Changes to Outputs: 2025-04-09 08:29:15.874634 | orchestrator | 08:29:15.735 STDOUT terraform:  + manager_address = (sensitive value) 2025-04-09 08:29:15.874639 | orchestrator | 08:29:15.735 STDOUT terraform:  + private_key = (sensitive value) 2025-04-09 08:29:15.874655 | orchestrator | 08:29:15.873 STDOUT terraform: terraform_data.image_node: Creating... 2025-04-09 08:29:15.874718 | orchestrator | 08:29:15.874 STDOUT terraform: terraform_data.image: Creating... 2025-04-09 08:29:15.874728 | orchestrator | 08:29:15.874 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=a2cb720d-baee-af5d-aaa8-0168eef8378b] 2025-04-09 08:29:15.875124 | orchestrator | 08:29:15.875 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=874c9bee-15e0-1a80-e3ac-9972e4b24aa8] 2025-04-09 08:29:15.884562 | orchestrator | 08:29:15.884 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 
2025-04-09 08:29:15.886601 | orchestrator | 08:29:15.886 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-04-09 08:29:15.892360 | orchestrator | 08:29:15.888 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-04-09 08:29:15.892724 | orchestrator | 08:29:15.889 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-04-09 08:29:15.892745 | orchestrator | 08:29:15.889 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-04-09 08:29:15.892751 | orchestrator | 08:29:15.891 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-04-09 08:29:15.892787 | orchestrator | 08:29:15.892 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-04-09 08:29:15.892928 | orchestrator | 08:29:15.892 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-04-09 08:29:15.898688 | orchestrator | 08:29:15.898 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 2025-04-09 08:29:15.899634 | orchestrator | 08:29:15.899 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 2025-04-09 08:29:16.343036 | orchestrator | 08:29:16.342 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-04-09 08:29:16.347269 | orchestrator | 08:29:16.347 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 2025-04-09 08:29:16.494700 | orchestrator | 08:29:16.494 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-04-09 08:29:16.498168 | orchestrator | 08:29:16.498 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 
2025-04-09 08:29:16.563383 | orchestrator | 08:29:16.563 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-04-09 08:29:16.567434 | orchestrator | 08:29:16.567 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-04-09 08:29:21.990617 | orchestrator | 08:29:21.990 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=e8351d29-1e19-4167-b2c3-1d897ea78af3] 2025-04-09 08:29:21.997498 | orchestrator | 08:29:21.997 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-04-09 08:29:25.890061 | orchestrator | 08:29:25.889 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-04-09 08:29:25.893183 | orchestrator | 08:29:25.892 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-04-09 08:29:25.894306 | orchestrator | 08:29:25.894 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-04-09 08:29:25.898561 | orchestrator | 08:29:25.898 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-04-09 08:29:25.900074 | orchestrator | 08:29:25.899 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-04-09 08:29:25.902079 | orchestrator | 08:29:25.899 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed] 2025-04-09 08:29:25.902126 | orchestrator | 08:29:25.901 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-04-09 08:29:26.349012 | orchestrator | 08:29:26.348 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... 
[10s elapsed] 2025-04-09 08:29:26.487375 | orchestrator | 08:29:26.487 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=91fc80ef-66f5-4e95-8e23-15913c9566bc] 2025-04-09 08:29:26.493899 | orchestrator | 08:29:26.493 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-04-09 08:29:26.512741 | orchestrator | 08:29:26.512 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 11s [id=20a266e5-e4ad-4a38-8cc4-79e311575ecc] 2025-04-09 08:29:26.520456 | orchestrator | 08:29:26.520 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 2025-04-09 08:29:26.537781 | orchestrator | 08:29:26.537 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 11s [id=d077077e-8074-4e28-961e-4d10ae0af6bd] 2025-04-09 08:29:26.543163 | orchestrator | 08:29:26.542 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=c3189fe9-451f-4fc6-9bec-4c0706cd3177] 2025-04-09 08:29:26.545053 | orchestrator | 08:29:26.544 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-04-09 08:29:26.548722 | orchestrator | 08:29:26.548 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 2025-04-09 08:29:26.565266 | orchestrator | 08:29:26.563 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=80ef2f8b-b45e-4bed-a63c-5dbd52e64749] 2025-04-09 08:29:26.566009 | orchestrator | 08:29:26.565 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=a877ba24-0f29-49c6-83ab-71ec08700986] 2025-04-09 08:29:26.567600 | orchestrator | 08:29:26.567 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... 
[10s elapsed]
2025-04-09 08:29:26.574088 | orchestrator | 08:29:26.570 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=f700b5a9-7ff5-4b8f-bc97-a162213f2234]
2025-04-09 08:29:26.578153 | orchestrator | 08:29:26.577 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating...
2025-04-09 08:29:26.578216 | orchestrator | 08:29:26.578 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating...
2025-04-09 08:29:26.580181 | orchestrator | 08:29:26.579 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-04-09 08:29:26.650059 | orchestrator | 08:29:26.649 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 11s [id=7a5907a6-1dd9-4bc4-a636-241a5ff50248]
2025-04-09 08:29:26.655278 | orchestrator | 08:29:26.655 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating...
2025-04-09 08:29:26.735523 | orchestrator | 08:29:26.735 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=909b970e-d6d0-4275-870f-4a8bf151615a]
2025-04-09 08:29:26.745632 | orchestrator | 08:29:26.745 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-04-09 08:29:32.000445 | orchestrator | 08:29:32.000 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-04-09 08:29:32.196326 | orchestrator | 08:29:32.195 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=67027c85-489c-4f7b-b006-8e76f82b0117]
2025-04-09 08:29:32.203892 | orchestrator | 08:29:32.203 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-04-09 08:29:36.494576 | orchestrator | 08:29:36.494 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed]
2025-04-09 08:29:36.521161 | orchestrator | 08:29:36.520 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed]
2025-04-09 08:29:36.545261 | orchestrator | 08:29:36.545 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-04-09 08:29:36.549517 | orchestrator | 08:29:36.549 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed]
2025-04-09 08:29:36.578898 | orchestrator | 08:29:36.578 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed]
2025-04-09 08:29:36.579176 | orchestrator | 08:29:36.579 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed]
2025-04-09 08:29:36.581213 | orchestrator | 08:29:36.581 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-04-09 08:29:36.656526 | orchestrator | 08:29:36.656 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed]
2025-04-09 08:29:36.695434 | orchestrator | 08:29:36.695 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 10s [id=4c16418f-b1ea-409a-91f3-2a744e80e58e]
2025-04-09 08:29:36.701939 | orchestrator | 08:29:36.701 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 11s [id=9bf8d632-1c6b-449d-9785-1633e2bfb44c]
2025-04-09 08:29:36.712318 | orchestrator | 08:29:36.712 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-04-09 08:29:36.713456 | orchestrator | 08:29:36.713 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-04-09 08:29:36.746671 | orchestrator | 08:29:36.746 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-04-09 08:29:36.747547 | orchestrator | 08:29:36.747 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 10s [id=c4741289-30df-4db6-9178-491638aa0447]
2025-04-09 08:29:36.754397 | orchestrator | 08:29:36.754 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-04-09 08:29:36.804032 | orchestrator | 08:29:36.803 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 10s [id=0a277efc-ca83-41bc-9a13-7ec21996cbcf]
2025-04-09 08:29:36.805410 | orchestrator | 08:29:36.805 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 10s [id=b3569f59-7c0a-49c9-8d23-e5efe9e8038b]
2025-04-09 08:29:36.805848 | orchestrator | 08:29:36.805 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=f5ce95d1-0f35-48ec-b05d-68892da228e4]
2025-04-09 08:29:36.817430 | orchestrator | 08:29:36.817 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-04-09 08:29:36.817742 | orchestrator | 08:29:36.817 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-04-09 08:29:36.820922 | orchestrator | 08:29:36.820 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=c8b54f09-402c-41ce-aa47-11dce3c4404f]
2025-04-09 08:29:36.826070 | orchestrator | 08:29:36.825 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-04-09 08:29:36.826313 | orchestrator | 08:29:36.826 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=9129b766eaf33d55edbb0bdb0d52a7918b6cc9a2]
2025-04-09 08:29:36.829637 | orchestrator | 08:29:36.829 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-04-09 08:29:36.831677 | orchestrator | 08:29:36.831 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-04-09 08:29:36.833272 | orchestrator | 08:29:36.833 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=36233ec3b87f2cf40ba21f6a5fc1c01039437b53]
2025-04-09 08:29:36.853606 | orchestrator | 08:29:36.853 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 10s [id=de62fa23-408b-4a80-ad7d-9092afaf54f5]
2025-04-09 08:29:37.085827 | orchestrator | 08:29:37.085 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=1316f4e2-7b99-46a1-8513-1c51037dcfb5]
2025-04-09 08:29:42.205064 | orchestrator | 08:29:42.204 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-04-09 08:29:42.574870 | orchestrator | 08:29:42.574 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=3cfbe416-4a6d-4367-87e7-69d2ca3c8539]
2025-04-09 08:29:42.681212 | orchestrator | 08:29:42.680 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=33d40dd7-d081-4f46-be59-d19395c9fedd]
2025-04-09 08:29:42.690226 | orchestrator | 08:29:42.689 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-04-09 08:29:46.713828 | orchestrator | 08:29:46.713 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-04-09 08:29:46.717043 | orchestrator | 08:29:46.716 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-04-09 08:29:46.755340 | orchestrator | 08:29:46.755 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-04-09 08:29:46.818663 | orchestrator | 08:29:46.818 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-04-09 08:29:46.830067 | orchestrator | 08:29:46.829 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-04-09 08:29:47.074003 | orchestrator | 08:29:47.073 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=63879c55-8649-4733-9db3-eb1e1179a982]
2025-04-09 08:29:47.081356 | orchestrator | 08:29:47.081 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=d0f826ac-8a52-4d22-a9c2-0370e662c79b]
2025-04-09 08:29:47.151832 | orchestrator | 08:29:47.151 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=b820d287-9f63-4dcd-a7bf-6ad94049faf1]
2025-04-09 08:29:47.166529 | orchestrator | 08:29:47.165 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=09d0ce6a-ec63-45cb-bad8-c7bbb516d0e3]
2025-04-09 08:29:47.222740 | orchestrator | 08:29:47.222 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=a9284610-c146-4c9b-b466-97d72a6accd6]
2025-04-09 08:29:49.269405 | orchestrator | 08:29:49.269 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 6s [id=0032a3d1-5fc0-4b0d-a03f-59d064078137]
2025-04-09 08:29:49.276713 | orchestrator | 08:29:49.276 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-04-09 08:29:49.276951 | orchestrator | 08:29:49.276 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-04-09 08:29:49.279597 | orchestrator | 08:29:49.279 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-04-09 08:29:49.442870 | orchestrator | 08:29:49.442 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=1dec5cdb-e70e-4f55-b1e3-3d190e171eba]
2025-04-09 08:29:49.454004 | orchestrator | 08:29:49.453 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-04-09 08:29:49.454498 | orchestrator | 08:29:49.454 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-04-09 08:29:49.455392 | orchestrator | 08:29:49.455 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-04-09 08:29:49.456951 | orchestrator | 08:29:49.456 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-04-09 08:29:49.457937 | orchestrator | 08:29:49.457 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-04-09 08:29:49.459566 | orchestrator | 08:29:49.459 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=b417e96f-4336-4e7d-96d8-76237a28e552]
2025-04-09 08:29:49.463595 | orchestrator | 08:29:49.463 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-04-09 08:29:49.466149 | orchestrator | 08:29:49.465 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-04-09 08:29:49.468277 | orchestrator | 08:29:49.468 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-04-09 08:29:49.474687 | orchestrator | 08:29:49.474 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-04-09 08:29:49.576439 | orchestrator | 08:29:49.576 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=33bfaa71-8618-4d22-bb72-551c59fd1ed3]
2025-04-09 08:29:49.589027 | orchestrator | 08:29:49.588 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-04-09 08:29:49.702113 | orchestrator | 08:29:49.701 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=f5344ccd-f613-4809-8714-ce8feda14920]
2025-04-09 08:29:49.715537 | orchestrator | 08:29:49.715 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-04-09 08:29:49.760131 | orchestrator | 08:29:49.759 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=aa5f49ea-a90d-43f6-a344-18c1f4f7bf07]
2025-04-09 08:29:49.771828 | orchestrator | 08:29:49.771 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-04-09 08:29:49.864757 | orchestrator | 08:29:49.864 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=d2477808-2eb4-4d9d-9586-086a1b1b788c]
2025-04-09 08:29:49.876180 | orchestrator | 08:29:49.876 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-04-09 08:29:49.981535 | orchestrator | 08:29:49.981 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=384dd17b-9d2d-445f-8394-cc2c6f0b6ddf]
2025-04-09 08:29:49.993987 | orchestrator | 08:29:49.993 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-04-09 08:29:50.105105 | orchestrator | 08:29:50.104 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=6ade2c02-f50e-4328-8ff1-231406c1fb95]
2025-04-09 08:29:50.111530 | orchestrator | 08:29:50.111 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-04-09 08:29:50.224724 | orchestrator | 08:29:50.224 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=fcbb30ca-1d9c-4392-835d-2b035cbf1425]
2025-04-09 08:29:50.231582 | orchestrator | 08:29:50.231 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-04-09 08:29:50.247708 | orchestrator | 08:29:50.247 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=69f55709-0c69-44c1-848c-c013ec336d46]
2025-04-09 08:29:50.370894 | orchestrator | 08:29:50.370 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=67148c96-7f06-4470-9766-bc3fa411b500]
2025-04-09 08:29:55.244002 | orchestrator | 08:29:55.243 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=12d3a9b4-a968-4e0f-8cf5-355c19b25367]
2025-04-09 08:29:55.426173 | orchestrator | 08:29:55.425 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=10c5e59c-962c-4bd3-a683-38844b2ca53c]
2025-04-09 08:29:55.532512 | orchestrator | 08:29:55.532 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=91ebb0c5-02bd-49da-9dde-7f78a3d3ef27]
2025-04-09 08:29:55.705186 | orchestrator | 08:29:55.704 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=d8b85722-36e3-418e-9ec2-210980647918]
2025-04-09 08:29:55.754399 | orchestrator | 08:29:55.754 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=1e2828a1-47ad-427f-8127-4fd2fdccbbda]
2025-04-09 08:29:55.820650 | orchestrator | 08:29:55.820 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=250162e4-45ec-42b4-9c5e-00b2748cf7d6]
2025-04-09 08:29:55.967330 | orchestrator | 08:29:55.967 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=3b982359-a3c9-43ea-abda-9a370441abb1]
2025-04-09 08:29:55.987450 | orchestrator | 08:29:55.985 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=ebc9562c-ddc0-46bf-b5d4-c24f50655ec0]
2025-04-09 08:29:56.006109 | orchestrator | 08:29:56.004 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-04-09 08:29:56.017784 | orchestrator | 08:29:56.017 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-04-09 08:29:56.021803 | orchestrator | 08:29:56.021 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-04-09 08:29:56.023013 | orchestrator | 08:29:56.022 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-04-09 08:29:56.029668 | orchestrator | 08:29:56.028 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-04-09 08:29:56.037016 | orchestrator | 08:29:56.036 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-04-09 08:29:56.039581 | orchestrator | 08:29:56.039 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-04-09 08:30:03.762701 | orchestrator | 08:30:03.762 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 8s [id=885e71e9-5536-4eff-8300-fc341171d4f0]
2025-04-09 08:30:03.777789 | orchestrator | 08:30:03.777 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-04-09 08:30:03.790359 | orchestrator | 08:30:03.790 STDOUT terraform: local_file.inventory: Creating...
2025-04-09 08:30:03.790459 | orchestrator | 08:30:03.790 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-04-09 08:30:03.795986 | orchestrator | 08:30:03.795 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=177ba9d57ab62075c6ad3b968129aaab9894543c]
2025-04-09 08:30:03.796702 | orchestrator | 08:30:03.796 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=fc16b24d29d3dce1b5d3b8d965486aae66462f87]
2025-04-09 08:30:04.520251 | orchestrator | 08:30:04.519 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=885e71e9-5536-4eff-8300-fc341171d4f0]
2025-04-09 08:30:06.019614 | orchestrator | 08:30:06.019 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-04-09 08:30:06.028835 | orchestrator | 08:30:06.028 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-04-09 08:30:06.030902 | orchestrator | 08:30:06.030 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-04-09 08:30:06.034273 | orchestrator | 08:30:06.034 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-04-09 08:30:06.039430 | orchestrator | 08:30:06.039 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-04-09 08:30:06.040637 | orchestrator | 08:30:06.040 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-04-09 08:30:16.020985 | orchestrator | 08:30:16.020 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-04-09 08:30:16.029426 | orchestrator | 08:30:16.029 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-04-09 08:30:16.031432 | orchestrator | 08:30:16.031 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-04-09 08:30:16.035069 | orchestrator | 08:30:16.034 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-04-09 08:30:16.040118 | orchestrator | 08:30:16.039 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-04-09 08:30:16.041328 | orchestrator | 08:30:16.041 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-04-09 08:30:16.523048 | orchestrator | 08:30:16.522 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=aa83d328-3649-4808-84f4-1f3c4b43560a]
2025-04-09 08:30:16.586589 | orchestrator | 08:30:16.586 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=59cfeb79-d533-4a0c-a6d9-81ae4ace7f7c]
2025-04-09 08:30:16.646346 | orchestrator | 08:30:16.645 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=5bc91fa7-36d8-4a3a-a8a5-def17feae606]
2025-04-09 08:30:26.031894 | orchestrator | 08:30:26.031 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-04-09 08:30:26.041188 | orchestrator | 08:30:26.040 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-04-09 08:30:26.042293 | orchestrator | 08:30:26.042 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-04-09 08:30:26.806959 | orchestrator | 08:30:26.806 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=94c03ff5-0982-4f3e-a536-30274713b736]
2025-04-09 08:30:26.890618 | orchestrator | 08:30:26.890 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=e9ba475c-0d4d-48a2-829a-765fb3e1c02d]
2025-04-09 08:30:26.919143 | orchestrator | 08:30:26.918 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=f1506057-06df-4fa6-909d-9424b3d4b94a]
2025-04-09 08:30:26.938587 | orchestrator | 08:30:26.938 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-04-09 08:30:26.940567 | orchestrator | 08:30:26.940 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating...
2025-04-09 08:30:26.947647 | orchestrator | 08:30:26.947 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-04-09 08:30:26.948299 | orchestrator | 08:30:26.948 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=8184312541099634276]
2025-04-09 08:30:26.952128 | orchestrator | 08:30:26.952 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating...
2025-04-09 08:30:26.952836 | orchestrator | 08:30:26.952 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating...
2025-04-09 08:30:26.961343 | orchestrator | 08:30:26.961 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating...
2025-04-09 08:30:26.964461 | orchestrator | 08:30:26.964 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating...
2025-04-09 08:30:26.972941 | orchestrator | 08:30:26.972 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-04-09 08:30:26.975300 | orchestrator | 08:30:26.975 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-04-09 08:30:26.977244 | orchestrator | 08:30:26.977 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-04-09 08:30:26.981346 | orchestrator | 08:30:26.981 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-04-09 08:30:32.310451 | orchestrator | 08:30:32.309 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 5s [id=59cfeb79-d533-4a0c-a6d9-81ae4ace7f7c/20a266e5-e4ad-4a38-8cc4-79e311575ecc]
2025-04-09 08:30:32.319880 | orchestrator | 08:30:32.319 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=94c03ff5-0982-4f3e-a536-30274713b736/91fc80ef-66f5-4e95-8e23-15913c9566bc]
2025-04-09 08:30:32.336722 | orchestrator | 08:30:32.336 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating...
2025-04-09 08:30:32.341496 | orchestrator | 08:30:32.336 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-04-09 08:30:32.341550 | orchestrator | 08:30:32.341 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 5s [id=f1506057-06df-4fa6-909d-9424b3d4b94a/4c16418f-b1ea-409a-91f3-2a744e80e58e]
2025-04-09 08:30:32.356035 | orchestrator | 08:30:32.355 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 5s [id=5bc91fa7-36d8-4a3a-a8a5-def17feae606/7a5907a6-1dd9-4bc4-a636-241a5ff50248]
2025-04-09 08:30:32.359188 | orchestrator | 08:30:32.357 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating...
2025-04-09 08:30:32.368547 | orchestrator | 08:30:32.368 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating...
2025-04-09 08:30:32.370426 | orchestrator | 08:30:32.370 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=59cfeb79-d533-4a0c-a6d9-81ae4ace7f7c/c3189fe9-451f-4fc6-9bec-4c0706cd3177]
2025-04-09 08:30:32.379001 | orchestrator | 08:30:32.377 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-04-09 08:30:32.383047 | orchestrator | 08:30:32.377 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 5s [id=aa83d328-3649-4808-84f4-1f3c4b43560a/de62fa23-408b-4a80-ad7d-9092afaf54f5]
2025-04-09 08:30:32.383090 | orchestrator | 08:30:32.382 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 5s [id=94c03ff5-0982-4f3e-a536-30274713b736/9bf8d632-1c6b-449d-9785-1633e2bfb44c]
2025-04-09 08:30:32.390128 | orchestrator | 08:30:32.389 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-04-09 08:30:32.393378 | orchestrator | 08:30:32.393 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating...
2025-04-09 08:30:32.425210 | orchestrator | 08:30:32.424 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=94c03ff5-0982-4f3e-a536-30274713b736/a877ba24-0f29-49c6-83ab-71ec08700986]
2025-04-09 08:30:32.436438 | orchestrator | 08:30:32.436 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-04-09 08:30:32.458641 | orchestrator | 08:30:32.458 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=5bc91fa7-36d8-4a3a-a8a5-def17feae606/67027c85-489c-4f7b-b006-8e76f82b0117]
2025-04-09 08:30:32.484411 | orchestrator | 08:30:32.484 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-04-09 08:30:32.492328 | orchestrator | 08:30:32.492 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=5bc91fa7-36d8-4a3a-a8a5-def17feae606/f5ce95d1-0f35-48ec-b05d-68892da228e4]
2025-04-09 08:30:37.722623 | orchestrator | 08:30:37.722 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=f1506057-06df-4fa6-909d-9424b3d4b94a/c8b54f09-402c-41ce-aa47-11dce3c4404f]
2025-04-09 08:30:37.733964 | orchestrator | 08:30:37.733 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 6s [id=59cfeb79-d533-4a0c-a6d9-81ae4ace7f7c/0a277efc-ca83-41bc-9a13-7ec21996cbcf]
2025-04-09 08:30:37.756285 | orchestrator | 08:30:37.755 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 6s [id=e9ba475c-0d4d-48a2-829a-765fb3e1c02d/d077077e-8074-4e28-961e-4d10ae0af6bd]
2025-04-09 08:30:37.764604 | orchestrator | 08:30:37.764 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=aa83d328-3649-4808-84f4-1f3c4b43560a/f700b5a9-7ff5-4b8f-bc97-a162213f2234]
2025-04-09 08:30:40.816233 | orchestrator | 08:30:40.815 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 9s [id=f1506057-06df-4fa6-909d-9424b3d4b94a/b3569f59-7c0a-49c9-8d23-e5efe9e8038b]
2025-04-09 08:30:40.846478 | orchestrator | 08:30:40.846 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=e9ba475c-0d4d-48a2-829a-765fb3e1c02d/80ef2f8b-b45e-4bed-a63c-5dbd52e64749]
2025-04-09 08:30:40.891415 | orchestrator | 08:30:40.891 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=aa83d328-3649-4808-84f4-1f3c4b43560a/909b970e-d6d0-4275-870f-4a8bf151615a]
2025-04-09 08:30:40.910994 | orchestrator | 08:30:40.910 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 9s [id=e9ba475c-0d4d-48a2-829a-765fb3e1c02d/c4741289-30df-4db6-9178-491638aa0447]
2025-04-09 08:30:42.483671 | orchestrator | 08:30:42.483 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-04-09 08:30:52.487220 | orchestrator | 08:30:52.486 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-04-09 08:30:53.152186 | orchestrator | 08:30:53.151 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=8ab686e4-414c-42c0-8322-c66eb352cd47]
2025-04-09 08:30:53.175837 | orchestrator | 08:30:53.175 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed.
2025-04-09 08:30:53.175931 | orchestrator | 08:30:53.175 STDOUT terraform: Outputs:
2025-04-09 08:30:53.184287 | orchestrator | 08:30:53.175 STDOUT terraform: manager_address =
2025-04-09 08:30:53.184362 | orchestrator | 08:30:53.175 STDOUT terraform: private_key =
2025-04-09 08:30:53.338400 | orchestrator | changed
2025-04-09 08:30:53.377480 |
2025-04-09 08:30:53.377677 | TASK [Create infrastructure (stable)]
2025-04-09 08:30:53.472519 | orchestrator | skipping: Conditional result was False
2025-04-09 08:30:53.493603 |
2025-04-09 08:30:53.493794 | TASK [Fetch manager address]
2025-04-09 08:31:03.933079 | orchestrator | ok
2025-04-09 08:31:03.956356 |
2025-04-09 08:31:03.956548 | TASK [Set manager_host address]
2025-04-09 08:31:04.071924 | orchestrator | ok
2025-04-09 08:31:04.083929 |
2025-04-09 08:31:04.084050 | LOOP [Update ansible collections]
2025-04-09 08:31:04.845166 | orchestrator | changed
2025-04-09 08:31:05.524735 | orchestrator | changed
2025-04-09 08:31:05.546925 |
2025-04-09 08:31:05.547067 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-04-09 08:31:16.034890 | orchestrator | ok
2025-04-09 08:31:16.048656 |
2025-04-09 08:31:16.048753 | TASK [Wait a little longer for the manager so that everything is ready]
2025-04-09 08:32:16.101151 | orchestrator | ok
2025-04-09 08:32:16.169190 |
2025-04-09 08:32:16.169310 | TASK [Fetch manager ssh hostkey]
2025-04-09 08:32:17.256331 | orchestrator | Output suppressed because no_log was given
2025-04-09 08:32:17.266861 |
2025-04-09 08:32:17.266977 | TASK [Get ssh keypair from terraform environment]
2025-04-09 08:32:17.814169 | orchestrator | changed
2025-04-09 08:32:17.832541 |
2025-04-09 08:32:17.832684 | TASK [Point out that the following task takes some time and does not give any output]
2025-04-09 08:32:17.884883 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-04-09 08:32:17.895225 |
2025-04-09 08:32:17.895329 | TASK [Run manager part 0]
2025-04-09 08:32:19.084134 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-04-09 08:32:19.157950 | orchestrator |
2025-04-09 08:32:21.189328 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-04-09 08:32:21.189386 | orchestrator |
2025-04-09 08:32:21.189409 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-04-09 08:32:21.189427 | orchestrator | ok: [testbed-manager]
2025-04-09 08:32:23.194887 | orchestrator |
2025-04-09 08:32:23.194954 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-04-09 08:32:23.194972 | orchestrator |
2025-04-09 08:32:23.194982 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-04-09 08:32:23.194999 | orchestrator | ok: [testbed-manager]
2025-04-09 08:32:23.817484 | orchestrator |
2025-04-09 08:32:23.817519 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-04-09 08:32:23.817533 | orchestrator | ok: [testbed-manager]
2025-04-09 08:32:23.857882 | orchestrator |
2025-04-09 08:32:23.857897 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-04-09 08:32:23.857906 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:32:23.884892 | orchestrator |
2025-04-09 08:32:23.884905 | orchestrator | TASK [Update package cache] ****************************************************
2025-04-09 08:32:23.884913 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:32:23.907346 | orchestrator |
2025-04-09 08:32:23.907358 | orchestrator | TASK [Install required packages] ***********************************************
2025-04-09 08:32:23.907367 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:32:23.927839 | orchestrator |
2025-04-09 08:32:23.927851 | orchestrator | TASK [Remove some python packages] *********************************************
2025-04-09 08:32:23.927859 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:32:23.948135 | orchestrator |
2025-04-09 08:32:23.948147 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-04-09 08:32:23.948155 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:32:23.968579 | orchestrator |
2025-04-09 08:32:23.968591 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-04-09 08:32:23.968599 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:32:23.988607 | orchestrator |
2025-04-09 08:32:23.988618 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-04-09 08:32:23.988626 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:32:24.784181 | orchestrator |
2025-04-09 08:32:24.784220 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-04-09 08:32:24.784234 | orchestrator | changed: [testbed-manager]
2025-04-09 08:35:16.842426 | orchestrator |
2025-04-09 08:35:16.842554 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-04-09 08:35:16.842598 | orchestrator | changed: [testbed-manager]
2025-04-09 08:36:33.422109 | orchestrator |
2025-04-09 08:36:33.422167 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-04-09 08:36:33.422187 | orchestrator | changed: [testbed-manager]
2025-04-09 08:36:55.207171 | orchestrator |
2025-04-09 08:36:55.207286 | orchestrator | TASK [Install required packages] ***********************************************
2025-04-09 08:36:55.207324 | orchestrator | changed: [testbed-manager]
2025-04-09 08:37:03.929950 | orchestrator |
2025-04-09 08:37:03.930008 | orchestrator | TASK [Remove some python packages] *********************************************
2025-04-09 08:37:03.930101 | orchestrator | changed: [testbed-manager]
2025-04-09 08:37:03.975486 | orchestrator |
2025-04-09 08:37:03.975528 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-04-09 08:37:03.975552 | orchestrator | ok: [testbed-manager]
2025-04-09 08:37:04.764094 | orchestrator |
2025-04-09 08:37:04.764163 | orchestrator | TASK [Get current user] ********************************************************
2025-04-09 08:37:04.764188 | orchestrator | ok: [testbed-manager]
2025-04-09 08:37:05.510090 | orchestrator |
2025-04-09 08:37:05.510182 | orchestrator | TASK [Create venv directory] ***************************************************
2025-04-09 08:37:05.510224 | orchestrator | changed: [testbed-manager]
2025-04-09 08:37:11.623011 | orchestrator |
2025-04-09 08:37:11.623065 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-04-09 08:37:11.623086 | orchestrator | changed: [testbed-manager]
2025-04-09 08:37:17.431666 | orchestrator |
2025-04-09 08:37:17.432315 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-04-09 08:37:17.432360 | orchestrator | changed: [testbed-manager]
2025-04-09 08:37:20.092856 | orchestrator |
2025-04-09 08:37:20.092910 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-04-09 08:37:20.092929 | orchestrator | changed: [testbed-manager]
2025-04-09 08:37:21.950213 | orchestrator |
2025-04-09 08:37:21.950283 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-04-09 08:37:21.950312 | orchestrator | changed: [testbed-manager]
2025-04-09 08:37:23.091987 | orchestrator |
2025-04-09 08:37:23.092080 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-04-09 08:37:23.092112 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-04-09 08:37:23.135389 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-04-09 08:37:23.135435 | orchestrator |
2025-04-09 08:37:23.135443 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-04-09 08:37:23.135457 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-04-09 08:37:30.628848 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-04-09 08:37:30.629091 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-04-09 08:37:30.629114 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-04-09 08:37:30.629145 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-04-09 08:37:31.197168 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-04-09 08:37:31.197256 | orchestrator |
2025-04-09 08:37:31.197274 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2025-04-09 08:37:31.197301 | orchestrator | changed: [testbed-manager]
2025-04-09 08:37:50.145833 | orchestrator |
2025-04-09 08:37:50.145940 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2025-04-09 08:37:50.145974 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2025-04-09 08:37:52.527258 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2025-04-09 08:37:52.527336 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2025-04-09 08:37:52.527351 | orchestrator |
2025-04-09 08:37:52.527365 | orchestrator | TASK [Install local collections] ***********************************************
2025-04-09 08:37:52.527389 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2025-04-09 08:37:53.924611 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2025-04-09 08:37:53.924703 | orchestrator |
2025-04-09 08:37:53.924749 | orchestrator | PLAY [Create operator user] ****************************************************
2025-04-09 08:37:53.924765 | orchestrator |
2025-04-09 08:37:53.924780 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-04-09 08:37:53.924808 | orchestrator | ok: [testbed-manager]
2025-04-09 08:37:53.967741 | orchestrator |
2025-04-09 08:37:53.968320 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-04-09 08:37:53.968353 | orchestrator | ok: [testbed-manager]
2025-04-09 08:37:54.026286 | orchestrator |
2025-04-09 08:37:54.026363 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-04-09 08:37:54.026393 | orchestrator | ok: [testbed-manager]
2025-04-09 08:37:54.797143 | orchestrator |
2025-04-09 08:37:54.797216 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-04-09 08:37:54.797248 | orchestrator | changed: [testbed-manager]
2025-04-09 08:37:55.491278 | orchestrator |
2025-04-09 08:37:55.491327 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-04-09 08:37:55.491344 | orchestrator | changed: [testbed-manager]
2025-04-09 08:37:56.929698 | orchestrator |
2025-04-09 08:37:56.929807 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-04-09 08:37:56.929829 | orchestrator | changed: [testbed-manager] => (item=adm)
2025-04-09 08:37:58.335197 | orchestrator | changed: [testbed-manager] => (item=sudo)
2025-04-09 08:37:58.335253 | orchestrator |
2025-04-09 08:37:58.335260 | orchestrator | TASK [osism.commons.operator
: Copy user sudoers file] ************************* 2025-04-09 08:37:58.335274 | orchestrator | changed: [testbed-manager] 2025-04-09 08:38:00.108392 | orchestrator | 2025-04-09 08:38:00.109699 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-04-09 08:38:00.109761 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-04-09 08:38:00.668688 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-04-09 08:38:00.668814 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-04-09 08:38:00.668838 | orchestrator | 2025-04-09 08:38:00.668854 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-04-09 08:38:00.668887 | orchestrator | changed: [testbed-manager] 2025-04-09 08:38:00.735621 | orchestrator | 2025-04-09 08:38:00.735683 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-04-09 08:38:00.735699 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:38:01.581952 | orchestrator | 2025-04-09 08:38:01.582063 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-04-09 08:38:01.582095 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-09 08:38:01.611259 | orchestrator | changed: [testbed-manager] 2025-04-09 08:38:01.611328 | orchestrator | 2025-04-09 08:38:01.611343 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-04-09 08:38:01.611369 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:38:01.634128 | orchestrator | 2025-04-09 08:38:01.634177 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-04-09 08:38:01.634197 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:38:01.658946 | orchestrator | 2025-04-09 08:38:01.658989 | orchestrator | TASK 
[osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-04-09 08:38:01.659005 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:38:01.704366 | orchestrator | 2025-04-09 08:38:01.704426 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-04-09 08:38:01.704450 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:38:02.447907 | orchestrator | 2025-04-09 08:38:02.448022 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-04-09 08:38:02.448073 | orchestrator | ok: [testbed-manager] 2025-04-09 08:38:03.868533 | orchestrator | 2025-04-09 08:38:03.868627 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-04-09 08:38:03.868645 | orchestrator | 2025-04-09 08:38:03.868661 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-09 08:38:03.868689 | orchestrator | ok: [testbed-manager] 2025-04-09 08:38:04.823427 | orchestrator | 2025-04-09 08:38:04.823523 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-04-09 08:38:04.823556 | orchestrator | changed: [testbed-manager] 2025-04-09 08:38:04.919625 | orchestrator | 2025-04-09 08:38:04.919835 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 08:38:04.919861 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-04-09 08:38:04.919875 | orchestrator | 2025-04-09 08:38:05.117333 | orchestrator | changed 2025-04-09 08:38:05.133737 | 2025-04-09 08:38:05.133850 | TASK [Point out that the log in on the manager is now possible] 2025-04-09 08:38:05.203981 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
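The PLAY RECAP line above closes this playbook run with `failed=0 unreachable=0`. Recap lines follow Ansible's fixed `host : ok=… changed=… unreachable=… failed=…` layout, so console logs like this one can be checked mechanically. A minimal sketch (the sample line is copied from the recap above; the sed-based parsing is our own illustration, not part of the job):

```shell
#!/bin/sh
# Parse an Ansible PLAY RECAP line and fail if anything went wrong.
recap='testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0'

# Pull the failed= and unreachable= counters out of the recap line.
failed=$(printf '%s\n' "$recap" | sed 's/.*failed=\([0-9]*\).*/\1/')
unreachable=$(printf '%s\n' "$recap" | sed 's/.*unreachable=\([0-9]*\).*/\1/')

if [ "$failed" -ne 0 ] || [ "$unreachable" -ne 0 ]; then
    echo "recap reports problems: failed=$failed unreachable=$unreachable" >&2
    exit 1
fi
echo "recap clean"
```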
2025-04-09 08:38:05.211454 | 2025-04-09 08:38:05.211531 | TASK [Point out that the following task takes some time and does not give any output] 2025-04-09 08:38:05.253693 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-04-09 08:38:05.261678 | 2025-04-09 08:38:05.261757 | TASK [Run manager part 1 + 2] 2025-04-09 08:38:06.086599 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-04-09 08:38:06.144188 | orchestrator | 2025-04-09 08:38:08.659251 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-04-09 08:38:08.659294 | orchestrator | 2025-04-09 08:38:08.659317 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-09 08:38:08.659332 | orchestrator | ok: [testbed-manager] 2025-04-09 08:38:08.691956 | orchestrator | 2025-04-09 08:38:08.692010 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-04-09 08:38:08.692034 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:38:08.727789 | orchestrator | 2025-04-09 08:38:08.727832 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-04-09 08:38:08.727851 | orchestrator | ok: [testbed-manager] 2025-04-09 08:38:08.760221 | orchestrator | 2025-04-09 08:38:08.760270 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-04-09 08:38:08.760289 | orchestrator | ok: [testbed-manager] 2025-04-09 08:38:08.819181 | orchestrator | 2025-04-09 08:38:08.819225 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-04-09 08:38:08.819239 | orchestrator | ok: [testbed-manager] 2025-04-09 08:38:08.871005 | orchestrator | 2025-04-09 08:38:08.871054 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-04-09 08:38:08.871072 | orchestrator | ok: [testbed-manager] 2025-04-09 08:38:08.922772 | orchestrator | 2025-04-09 08:38:08.922807 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-04-09 08:38:08.922819 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-04-09 08:38:09.604351 | orchestrator | 2025-04-09 08:38:09.604398 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-04-09 08:38:09.604417 | orchestrator | ok: [testbed-manager] 2025-04-09 08:38:09.648860 | orchestrator | 2025-04-09 08:38:09.648903 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-04-09 08:38:09.648920 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:38:10.986897 | orchestrator | 2025-04-09 08:38:10.986961 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-04-09 08:38:10.986987 | orchestrator | changed: [testbed-manager] 2025-04-09 08:38:11.549219 | orchestrator | 2025-04-09 08:38:11.549270 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-04-09 08:38:11.549289 | orchestrator | ok: [testbed-manager] 2025-04-09 08:38:12.723201 | orchestrator | 2025-04-09 08:38:12.723352 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-04-09 08:38:12.723370 | orchestrator | changed: [testbed-manager] 2025-04-09 08:38:25.836329 | orchestrator | 2025-04-09 08:38:25.836553 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-04-09 08:38:25.836590 | orchestrator | changed: [testbed-manager] 2025-04-09 08:38:26.471646 | orchestrator | 
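The repository role above removes `sources.list` and copies an `ubuntu.sources` file, i.e. it switches the manager to the deb822 sources format that Ubuntu 24.04 uses by default. For orientation, a sketch of what such a file looks like (these are the stock Ubuntu 24.04 defaults, not necessarily the exact content the osism.commons.repository role writes):

```shell
#!/bin/sh
# Write a deb822-style ubuntu.sources file to a temporary location.
f=$(mktemp)
cat > "$f" <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
# Each deb822 stanza line is a "Key: value" pair.
grep -c ':' "$f"
```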
2025-04-09 08:38:26.471741 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-04-09 08:38:26.471767 | orchestrator | ok: [testbed-manager] 2025-04-09 08:38:26.546542 | orchestrator | 2025-04-09 08:38:26.546599 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-04-09 08:38:26.546626 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:38:27.453255 | orchestrator | 2025-04-09 08:38:27.453313 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-04-09 08:38:27.453331 | orchestrator | changed: [testbed-manager] 2025-04-09 08:38:28.421047 | orchestrator | 2025-04-09 08:38:28.421144 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-04-09 08:38:28.421175 | orchestrator | changed: [testbed-manager] 2025-04-09 08:38:28.989969 | orchestrator | 2025-04-09 08:38:28.990074 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-04-09 08:38:28.990106 | orchestrator | changed: [testbed-manager] 2025-04-09 08:38:29.028675 | orchestrator | 2025-04-09 08:38:29.028768 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-04-09 08:38:29.028809 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-04-09 08:38:33.359929 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-04-09 08:38:33.360028 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-04-09 08:38:33.360049 | orchestrator | deprecation_warnings=False in ansible.cfg. 
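The DEPRECATION WARNING above already names the knob that silences it: `deprecation_warnings=False` in `ansible.cfg`. A minimal sketch of that setting (written to a temporary file here for illustration; in a real setup it belongs in the `ansible.cfg` the job actually reads):

```shell
#!/bin/sh
# Create a throwaway ansible.cfg with the option the warning suggests.
cfg=$(mktemp)
printf '[defaults]\ndeprecation_warnings = False\n' > "$cfg"
cat "$cfg"
```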
2025-04-09 08:38:33.360078 | orchestrator | changed: [testbed-manager] 2025-04-09 08:38:42.168890 | orchestrator | 2025-04-09 08:38:42.168930 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-04-09 08:38:42.168945 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-04-09 08:38:43.202211 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-04-09 08:38:43.202249 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-04-09 08:38:43.202258 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-04-09 08:38:43.202266 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-04-09 08:38:43.202273 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-04-09 08:38:43.202281 | orchestrator | 2025-04-09 08:38:43.202288 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-04-09 08:38:43.202308 | orchestrator | changed: [testbed-manager] 2025-04-09 08:38:43.242001 | orchestrator | 2025-04-09 08:38:43.242065 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-04-09 08:38:43.242081 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:38:46.380519 | orchestrator | 2025-04-09 08:38:46.380576 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-04-09 08:38:46.380599 | orchestrator | changed: [testbed-manager] 2025-04-09 08:38:46.413653 | orchestrator | 2025-04-09 08:38:46.413734 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-04-09 08:38:46.413762 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:40:23.170290 | orchestrator | 2025-04-09 08:40:23.170404 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-04-09 08:40:23.170440 | orchestrator | changed: [testbed-manager] 2025-04-09 
08:40:24.355246 | orchestrator | 2025-04-09 08:40:24.355338 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-09 08:40:24.355370 | orchestrator | ok: [testbed-manager] 2025-04-09 08:40:24.448552 | orchestrator | 2025-04-09 08:40:24.448810 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 08:40:24.448839 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-04-09 08:40:24.448854 | orchestrator | 2025-04-09 08:40:24.866995 | orchestrator | changed 2025-04-09 08:40:24.883255 | 2025-04-09 08:40:24.883391 | TASK [Reboot manager] 2025-04-09 08:40:26.426742 | orchestrator | changed 2025-04-09 08:40:26.437190 | 2025-04-09 08:40:26.438297 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-04-09 08:40:41.654773 | orchestrator | ok 2025-04-09 08:40:41.665258 | 2025-04-09 08:40:41.665387 | TASK [Wait a little longer for the manager so that everything is ready] 2025-04-09 08:41:41.717622 | orchestrator | ok 2025-04-09 08:41:41.728819 | 2025-04-09 08:41:41.728954 | TASK [Deploy manager + bootstrap nodes] 2025-04-09 08:41:44.216512 | orchestrator | 2025-04-09 08:41:44.220058 | orchestrator | # DEPLOY MANAGER 2025-04-09 08:41:44.220096 | orchestrator | 2025-04-09 08:41:44.220112 | orchestrator | + set -e 2025-04-09 08:41:44.220153 | orchestrator | + echo 2025-04-09 08:41:44.220171 | orchestrator | + echo '# DEPLOY MANAGER' 2025-04-09 08:41:44.220186 | orchestrator | + echo 2025-04-09 08:41:44.220207 | orchestrator | + cat /opt/manager-vars.sh 2025-04-09 08:41:44.220238 | orchestrator | export NUMBER_OF_NODES=6 2025-04-09 08:41:44.220365 | orchestrator | 2025-04-09 08:41:44.220385 | orchestrator | export CEPH_VERSION=quincy 2025-04-09 08:41:44.220398 | orchestrator | export CONFIGURATION_VERSION=main 2025-04-09 08:41:44.220410 | orchestrator | export MANAGER_VERSION=latest 
2025-04-09 08:41:44.220423 | orchestrator | export OPENSTACK_VERSION=2024.1 2025-04-09 08:41:44.220436 | orchestrator | 2025-04-09 08:41:44.220449 | orchestrator | export ARA=false 2025-04-09 08:41:44.220462 | orchestrator | export TEMPEST=false 2025-04-09 08:41:44.220474 | orchestrator | export IS_ZUUL=true 2025-04-09 08:41:44.220486 | orchestrator | 2025-04-09 08:41:44.220499 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169 2025-04-09 08:41:44.220513 | orchestrator | export EXTERNAL_API=false 2025-04-09 08:41:44.220526 | orchestrator | 2025-04-09 08:41:44.220538 | orchestrator | export IMAGE_USER=ubuntu 2025-04-09 08:41:44.220550 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-04-09 08:41:44.220564 | orchestrator | 2025-04-09 08:41:44.220576 | orchestrator | export CEPH_STACK=ceph-ansible 2025-04-09 08:41:44.220593 | orchestrator | 2025-04-09 08:41:44.221646 | orchestrator | + echo 2025-04-09 08:41:44.221667 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-09 08:41:44.221685 | orchestrator | ++ export INTERACTIVE=false 2025-04-09 08:41:44.221907 | orchestrator | ++ INTERACTIVE=false 2025-04-09 08:41:44.221945 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-09 08:41:44.221983 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-09 08:41:44.222003 | orchestrator | + source /opt/manager-vars.sh 2025-04-09 08:41:44.222052 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-09 08:41:44.222089 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-09 08:41:44.222102 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-09 08:41:44.222114 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-09 08:41:44.222127 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-09 08:41:44.222140 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-09 08:41:44.222159 | orchestrator | ++ export MANAGER_VERSION=latest 2025-04-09 08:41:44.222171 | orchestrator | ++ MANAGER_VERSION=latest 2025-04-09 08:41:44.222184 | orchestrator | ++ export 
OPENSTACK_VERSION=2024.1 2025-04-09 08:41:44.222217 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-09 08:41:44.222229 | orchestrator | ++ export ARA=false 2025-04-09 08:41:44.222242 | orchestrator | ++ ARA=false 2025-04-09 08:41:44.222255 | orchestrator | ++ export TEMPEST=false 2025-04-09 08:41:44.222267 | orchestrator | ++ TEMPEST=false 2025-04-09 08:41:44.222284 | orchestrator | ++ export IS_ZUUL=true 2025-04-09 08:41:44.277508 | orchestrator | ++ IS_ZUUL=true 2025-04-09 08:41:44.277554 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169 2025-04-09 08:41:44.277567 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169 2025-04-09 08:41:44.277589 | orchestrator | ++ export EXTERNAL_API=false 2025-04-09 08:41:44.277602 | orchestrator | ++ EXTERNAL_API=false 2025-04-09 08:41:44.277615 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-09 08:41:44.277661 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-09 08:41:44.277675 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-09 08:41:44.277688 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-09 08:41:44.277703 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-09 08:41:44.277716 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-09 08:41:44.277729 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-04-09 08:41:44.277758 | orchestrator | + docker version 2025-04-09 08:41:44.559948 | orchestrator | Client: Docker Engine - Community 2025-04-09 08:41:44.560021 | orchestrator | Version: 27.5.1 2025-04-09 08:41:44.560042 | orchestrator | API version: 1.47 2025-04-09 08:41:44.560054 | orchestrator | Go version: go1.22.11 2025-04-09 08:41:44.560067 | orchestrator | Git commit: 9f9e405 2025-04-09 08:41:44.560079 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-04-09 08:41:44.560092 | orchestrator | OS/Arch: linux/amd64 2025-04-09 08:41:44.560105 | orchestrator | Context: default 2025-04-09 08:41:44.560117 | orchestrator | 2025-04-09 
08:41:44.560129 | orchestrator | Server: Docker Engine - Community 2025-04-09 08:41:44.560142 | orchestrator | Engine: 2025-04-09 08:41:44.560154 | orchestrator | Version: 27.5.1 2025-04-09 08:41:44.560166 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-04-09 08:41:44.560179 | orchestrator | Go version: go1.22.11 2025-04-09 08:41:44.560192 | orchestrator | Git commit: 4c9b3b0 2025-04-09 08:41:44.560227 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-04-09 08:41:44.560240 | orchestrator | OS/Arch: linux/amd64 2025-04-09 08:41:44.560252 | orchestrator | Experimental: false 2025-04-09 08:41:44.560264 | orchestrator | containerd: 2025-04-09 08:41:44.560281 | orchestrator | Version: 1.7.27 2025-04-09 08:41:44.564870 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-04-09 08:41:44.564893 | orchestrator | runc: 2025-04-09 08:41:44.564906 | orchestrator | Version: 1.2.5 2025-04-09 08:41:44.564920 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-04-09 08:41:44.564932 | orchestrator | docker-init: 2025-04-09 08:41:44.564945 | orchestrator | Version: 0.19.0 2025-04-09 08:41:44.564957 | orchestrator | GitCommit: de40ad0 2025-04-09 08:41:44.564974 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-04-09 08:41:44.575728 | orchestrator | + set -e 2025-04-09 08:41:44.575805 | orchestrator | + source /opt/manager-vars.sh 2025-04-09 08:41:44.575831 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-09 08:41:44.575844 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-09 08:41:44.575857 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-09 08:41:44.575869 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-09 08:41:44.575882 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-09 08:41:44.575894 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-09 08:41:44.575906 | orchestrator | ++ export MANAGER_VERSION=latest 2025-04-09 08:41:44.575919 | orchestrator | ++ MANAGER_VERSION=latest 2025-04-09 
08:41:44.575932 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-09 08:41:44.575944 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-09 08:41:44.575956 | orchestrator | ++ export ARA=false 2025-04-09 08:41:44.575969 | orchestrator | ++ ARA=false 2025-04-09 08:41:44.575981 | orchestrator | ++ export TEMPEST=false 2025-04-09 08:41:44.575993 | orchestrator | ++ TEMPEST=false 2025-04-09 08:41:44.576053 | orchestrator | ++ export IS_ZUUL=true 2025-04-09 08:41:44.576067 | orchestrator | ++ IS_ZUUL=true 2025-04-09 08:41:44.576080 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169 2025-04-09 08:41:44.576092 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169 2025-04-09 08:41:44.576111 | orchestrator | ++ export EXTERNAL_API=false 2025-04-09 08:41:44.576124 | orchestrator | ++ EXTERNAL_API=false 2025-04-09 08:41:44.576136 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-09 08:41:44.576149 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-09 08:41:44.576166 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-09 08:41:44.583811 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-09 08:41:44.583842 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-09 08:41:44.583855 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-09 08:41:44.583867 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-09 08:41:44.583880 | orchestrator | ++ export INTERACTIVE=false 2025-04-09 08:41:44.583892 | orchestrator | ++ INTERACTIVE=false 2025-04-09 08:41:44.583905 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-09 08:41:44.583917 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-09 08:41:44.583930 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-04-09 08:41:44.583943 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-04-09 08:41:44.583956 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh quincy 2025-04-09 08:41:44.583974 | orchestrator | + set -e 2025-04-09 08:41:44.584329 | orchestrator | + 
VERSION=quincy 2025-04-09 08:41:44.585024 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-04-09 08:41:44.591390 | orchestrator | + [[ -n ceph_version: quincy ]] 2025-04-09 08:41:44.597304 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: quincy/g' /opt/configuration/environments/manager/configuration.yml 2025-04-09 08:41:44.597334 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.1 2025-04-09 08:41:44.604418 | orchestrator | + set -e 2025-04-09 08:41:44.605834 | orchestrator | + VERSION=2024.1 2025-04-09 08:41:44.605858 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-04-09 08:41:44.609190 | orchestrator | + [[ -n openstack_version: 2024.1 ]] 2025-04-09 08:41:44.614901 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.1/g' /opt/configuration/environments/manager/configuration.yml 2025-04-09 08:41:44.614927 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-04-09 08:41:44.616270 | orchestrator | ++ semver latest 7.0.0 2025-04-09 08:41:44.686454 | orchestrator | + [[ -1 -ge 0 ]] 2025-04-09 08:41:44.729805 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-04-09 08:41:44.729835 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-04-09 08:41:44.729848 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-04-09 08:41:44.729883 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-04-09 08:41:44.731154 | orchestrator | + source /opt/venv/bin/activate 2025-04-09 08:41:44.732510 | orchestrator | ++ deactivate nondestructive 2025-04-09 08:41:44.732725 | orchestrator | ++ '[' -n '' ']' 2025-04-09 08:41:44.732744 | orchestrator | ++ '[' -n '' ']' 2025-04-09 08:41:44.732795 | orchestrator | ++ hash -r 2025-04-09 08:41:44.732868 | orchestrator | ++ '[' -n '' ']' 2025-04-09 08:41:44.732882 | orchestrator | ++ unset VIRTUAL_ENV 2025-04-09 08:41:44.732894 | 
orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-04-09 08:41:44.732928 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-04-09 08:41:44.732945 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-04-09 08:41:44.733022 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-04-09 08:41:44.733037 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-04-09 08:41:44.733050 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-04-09 08:41:44.733081 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-09 08:41:44.733100 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-09 08:41:44.733156 | orchestrator | ++ export PATH 2025-04-09 08:41:44.733173 | orchestrator | ++ '[' -n '' ']' 2025-04-09 08:41:44.733426 | orchestrator | ++ '[' -z '' ']' 2025-04-09 08:41:44.733597 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-04-09 08:41:44.733612 | orchestrator | ++ PS1='(venv) ' 2025-04-09 08:41:44.733624 | orchestrator | ++ export PS1 2025-04-09 08:41:44.733652 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-04-09 08:41:44.733665 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-04-09 08:41:44.733678 | orchestrator | ++ hash -r 2025-04-09 08:41:44.733695 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-04-09 08:41:46.043161 | orchestrator | 2025-04-09 08:41:46.604668 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-04-09 08:41:46.604739 | orchestrator | 2025-04-09 08:41:46.604756 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-04-09 08:41:46.604781 | orchestrator | ok: [testbed-manager] 2025-04-09 08:41:47.635788 | orchestrator | 2025-04-09 08:41:47.635890 | 
orchestrator | TASK [Copy fact files] ********************************************************* 2025-04-09 08:41:47.635924 | orchestrator | changed: [testbed-manager] 2025-04-09 08:41:50.137134 | orchestrator | 2025-04-09 08:41:50.137251 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-04-09 08:41:50.137268 | orchestrator | 2025-04-09 08:41:50.137280 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-09 08:41:50.137309 | orchestrator | ok: [testbed-manager] 2025-04-09 08:41:55.446422 | orchestrator | 2025-04-09 08:41:55.446536 | orchestrator | TASK [Pull images] ************************************************************* 2025-04-09 08:41:55.446562 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-04-09 08:43:11.358637 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.7.2) 2025-04-09 08:43:11.358765 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:quincy) 2025-04-09 08:43:11.358823 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:latest) 2025-04-09 08:43:11.358836 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:2024.1) 2025-04-09 08:43:11.358847 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.2-alpine) 2025-04-09 08:43:11.358859 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.2.2) 2025-04-09 08:43:11.358869 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:latest) 2025-04-09 08:43:11.358880 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:latest) 2025-04-09 08:43:11.358890 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.8-alpine) 
2025-04-09 08:43:11.358900 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.3.5) 2025-04-09 08:43:11.358910 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.19.0) 2025-04-09 08:43:11.358921 | orchestrator | 2025-04-09 08:43:11.358932 | orchestrator | TASK [Check status] ************************************************************ 2025-04-09 08:43:11.358979 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-04-09 08:43:11.410193 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-04-09 08:43:11.410291 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-04-09 08:43:11.410308 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-04-09 08:43:11.410322 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j253395237522.1547', 'results_file': '/home/dragon/.ansible_async/j253395237522.1547', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-04-09 08:43:11.410349 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j721172490354.1572', 'results_file': '/home/dragon/.ansible_async/j721172490354.1572', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.7.2', 'ansible_loop_var': 'item'}) 2025-04-09 08:43:11.410362 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
2025-04-09 08:43:11.410379 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j533213105870.1597', 'results_file': '/home/dragon/.ansible_async/j533213105870.1597', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:quincy', 'ansible_loop_var': 'item'}) 2025-04-09 08:43:11.410392 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j456867611526.1628', 'results_file': '/home/dragon/.ansible_async/j456867611526.1628', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:latest', 'ansible_loop_var': 'item'}) 2025-04-09 08:43:11.410408 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-04-09 08:43:11.410421 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j713154404506.1661', 'results_file': '/home/dragon/.ansible_async/j713154404506.1661', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:2024.1', 'ansible_loop_var': 'item'}) 2025-04-09 08:43:11.410433 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j678898430214.1693', 'results_file': '/home/dragon/.ansible_async/j678898430214.1693', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.2-alpine', 'ansible_loop_var': 'item'}) 2025-04-09 08:43:11.410445 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
2025-04-09 08:43:11.410458 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j872460444078.1725', 'results_file': '/home/dragon/.ansible_async/j872460444078.1725', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.2.2', 'ansible_loop_var': 'item'}) 2025-04-09 08:43:11.410473 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j935409738735.1758', 'results_file': '/home/dragon/.ansible_async/j935409738735.1758', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:latest', 'ansible_loop_var': 'item'}) 2025-04-09 08:43:11.410485 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j340445207882.1794', 'results_file': '/home/dragon/.ansible_async/j340445207882.1794', 'changed': True, 'item': 'registry.osism.tech/osism/osism:latest', 'ansible_loop_var': 'item'}) 2025-04-09 08:43:11.410497 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j668058035839.1826', 'results_file': '/home/dragon/.ansible_async/j668058035839.1826', 'changed': True, 'item': 'index.docker.io/library/postgres:16.8-alpine', 'ansible_loop_var': 'item'}) 2025-04-09 08:43:11.410509 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j478138819627.1859', 'results_file': '/home/dragon/.ansible_async/j478138819627.1859', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.3.5', 'ansible_loop_var': 'item'}) 2025-04-09 08:43:11.410541 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j356329733564.1900', 'results_file': '/home/dragon/.ansible_async/j356329733564.1900', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.19.0', 'ansible_loop_var': 'item'}) 2025-04-09 08:43:11.410554 | 
orchestrator | 2025-04-09 08:43:11.410567 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-04-09 08:43:11.410594 | orchestrator | ok: [testbed-manager] 2025-04-09 08:43:11.883956 | orchestrator | 2025-04-09 08:43:11.884061 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-04-09 08:43:11.884099 | orchestrator | changed: [testbed-manager] 2025-04-09 08:43:12.226361 | orchestrator | 2025-04-09 08:43:12.226403 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-04-09 08:43:12.226426 | orchestrator | changed: [testbed-manager] 2025-04-09 08:43:12.561549 | orchestrator | 2025-04-09 08:43:12.561724 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-04-09 08:43:12.561764 | orchestrator | changed: [testbed-manager] 2025-04-09 08:43:12.619575 | orchestrator | 2025-04-09 08:43:12.619689 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-04-09 08:43:12.619745 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:43:12.957854 | orchestrator | 2025-04-09 08:43:12.957944 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-04-09 08:43:12.957975 | orchestrator | ok: [testbed-manager] 2025-04-09 08:43:13.138232 | orchestrator | 2025-04-09 08:43:13.138330 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-04-09 08:43:13.138360 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:43:14.975956 | orchestrator | 2025-04-09 08:43:14.976088 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-04-09 08:43:14.976109 | orchestrator | 2025-04-09 08:43:14.976127 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-09 
08:43:14.976159 | orchestrator | ok: [testbed-manager] 2025-04-09 08:43:15.185906 | orchestrator | 2025-04-09 08:43:15.186078 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-04-09 08:43:15.186116 | orchestrator | 2025-04-09 08:43:15.286672 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-04-09 08:43:15.286712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-04-09 08:43:16.402737 | orchestrator | 2025-04-09 08:43:16.402857 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-04-09 08:43:16.403786 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-04-09 08:43:18.256311 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-04-09 08:43:18.256416 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-04-09 08:43:18.256434 | orchestrator | 2025-04-09 08:43:18.256450 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-04-09 08:43:18.256480 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-04-09 08:43:18.889858 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-04-09 08:43:18.889926 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-04-09 08:43:18.889942 | orchestrator | 2025-04-09 08:43:18.889958 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-04-09 08:43:18.889985 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-09 08:43:19.559058 | orchestrator | changed: [testbed-manager] 2025-04-09 08:43:19.559154 | orchestrator | 2025-04-09 08:43:19.559175 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 
2025-04-09 08:43:19.559205 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-09 08:43:19.641832 | orchestrator | changed: [testbed-manager] 2025-04-09 08:43:19.641877 | orchestrator | 2025-04-09 08:43:19.641892 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-04-09 08:43:19.641918 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:43:20.027383 | orchestrator | 2025-04-09 08:43:20.027483 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-04-09 08:43:20.027511 | orchestrator | ok: [testbed-manager] 2025-04-09 08:43:20.141938 | orchestrator | 2025-04-09 08:43:20.141984 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-04-09 08:43:20.142008 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-04-09 08:43:21.225926 | orchestrator | 2025-04-09 08:43:21.226086 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-04-09 08:43:21.226126 | orchestrator | changed: [testbed-manager] 2025-04-09 08:43:22.053858 | orchestrator | 2025-04-09 08:43:22.053934 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-04-09 08:43:22.053961 | orchestrator | changed: [testbed-manager] 2025-04-09 08:43:25.231007 | orchestrator | 2025-04-09 08:43:25.231097 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-04-09 08:43:25.231129 | orchestrator | changed: [testbed-manager] 2025-04-09 08:43:25.589267 | orchestrator | 2025-04-09 08:43:25.589342 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-04-09 08:43:25.589370 | orchestrator | 2025-04-09 08:43:25.697791 | orchestrator | TASK [osism.services.netbox : Include install 
tasks] *************************** 2025-04-09 08:43:25.697839 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-04-09 08:43:28.325375 | orchestrator | 2025-04-09 08:43:28.325492 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-04-09 08:43:28.325527 | orchestrator | ok: [testbed-manager] 2025-04-09 08:43:28.479478 | orchestrator | 2025-04-09 08:43:28.479579 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-04-09 08:43:28.479669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-04-09 08:43:29.663589 | orchestrator | 2025-04-09 08:43:29.663731 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-04-09 08:43:29.663765 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-04-09 08:43:29.765002 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-04-09 08:43:29.765057 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-04-09 08:43:29.765072 | orchestrator | 2025-04-09 08:43:29.765087 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-04-09 08:43:29.765112 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-04-09 08:43:30.423405 | orchestrator | 2025-04-09 08:43:30.423485 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-04-09 08:43:30.423514 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-04-09 08:43:31.068540 | orchestrator | 2025-04-09 08:43:31.068671 | orchestrator | TASK [osism.services.netbox : Copy 
postgres configuration file] **************** 2025-04-09 08:43:31.068719 | orchestrator | changed: [testbed-manager] 2025-04-09 08:43:31.754505 | orchestrator | 2025-04-09 08:43:31.754642 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-04-09 08:43:31.754684 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-09 08:43:32.187587 | orchestrator | changed: [testbed-manager] 2025-04-09 08:43:32.187721 | orchestrator | 2025-04-09 08:43:32.187741 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-04-09 08:43:32.187771 | orchestrator | changed: [testbed-manager] 2025-04-09 08:43:32.569176 | orchestrator | 2025-04-09 08:43:32.569248 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-04-09 08:43:32.569275 | orchestrator | ok: [testbed-manager] 2025-04-09 08:43:32.640004 | orchestrator | 2025-04-09 08:43:32.640061 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-04-09 08:43:32.640086 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:43:33.303446 | orchestrator | 2025-04-09 08:43:33.303565 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-04-09 08:43:33.303685 | orchestrator | changed: [testbed-manager] 2025-04-09 08:43:33.424908 | orchestrator | 2025-04-09 08:43:33.424999 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-04-09 08:43:33.425031 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-04-09 08:43:34.192004 | orchestrator | 2025-04-09 08:43:34.192118 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-04-09 08:43:34.192164 | orchestrator | changed: [testbed-manager] => 
(item=/opt/netbox/configuration/initializers) 2025-04-09 08:43:34.888351 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-04-09 08:43:34.888451 | orchestrator | 2025-04-09 08:43:34.888468 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-04-09 08:43:34.888495 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-04-09 08:43:35.568053 | orchestrator | 2025-04-09 08:43:35.568127 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-04-09 08:43:35.568154 | orchestrator | changed: [testbed-manager] 2025-04-09 08:43:35.629337 | orchestrator | 2025-04-09 08:43:35.629363 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-04-09 08:43:35.629381 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:43:36.314163 | orchestrator | 2025-04-09 08:43:36.314269 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-04-09 08:43:36.314302 | orchestrator | changed: [testbed-manager] 2025-04-09 08:43:38.246628 | orchestrator | 2025-04-09 08:43:38.246758 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-04-09 08:43:38.246796 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-09 08:43:44.813634 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-09 08:43:44.813758 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-09 08:43:44.813775 | orchestrator | changed: [testbed-manager] 2025-04-09 08:43:44.813790 | orchestrator | 2025-04-09 08:43:44.813804 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-04-09 08:43:44.813836 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-04-09 08:43:45.462393 | orchestrator | changed: [testbed-manager] => 
(item=device_roles) 2025-04-09 08:43:45.462508 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-04-09 08:43:45.462527 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-04-09 08:43:45.462543 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-04-09 08:43:45.462559 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-04-09 08:43:45.462573 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-04-09 08:43:45.462587 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-04-09 08:43:45.462643 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-04-09 08:43:45.462658 | orchestrator | changed: [testbed-manager] => (item=users) 2025-04-09 08:43:45.462672 | orchestrator | 2025-04-09 08:43:45.462687 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-04-09 08:43:45.462720 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-04-09 08:43:45.626008 | orchestrator | 2025-04-09 08:43:45.626176 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-04-09 08:43:45.626209 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-04-09 08:43:46.374530 | orchestrator | 2025-04-09 08:43:46.374659 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-04-09 08:43:46.374691 | orchestrator | changed: [testbed-manager] 2025-04-09 08:43:46.997858 | orchestrator | 2025-04-09 08:43:46.997912 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-04-09 08:43:46.997939 | orchestrator | ok: [testbed-manager] 2025-04-09 08:43:47.767726 | orchestrator | 2025-04-09 
08:43:47.767791 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-04-09 08:43:47.767819 | orchestrator | changed: [testbed-manager] 2025-04-09 08:43:49.981629 | orchestrator | 2025-04-09 08:43:49.982329 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-04-09 08:43:49.982371 | orchestrator | ok: [testbed-manager] 2025-04-09 08:43:50.989002 | orchestrator | 2025-04-09 08:43:50.989084 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-04-09 08:43:50.989113 | orchestrator | ok: [testbed-manager] 2025-04-09 08:44:13.190188 | orchestrator | 2025-04-09 08:44:13.190330 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-04-09 08:44:13.190378 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-04-09 08:44:13.280761 | orchestrator | ok: [testbed-manager] 2025-04-09 08:44:13.280837 | orchestrator | 2025-04-09 08:44:13.280855 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-04-09 08:44:13.280883 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:44:13.345908 | orchestrator | 2025-04-09 08:44:13.345937 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-04-09 08:44:13.345952 | orchestrator | 2025-04-09 08:44:13.345966 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-04-09 08:44:13.345986 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:44:13.440291 | orchestrator | 2025-04-09 08:44:13.440324 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-04-09 08:44:13.440345 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml 
for testbed-manager 2025-04-09 08:44:14.347129 | orchestrator | 2025-04-09 08:44:14.347226 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-04-09 08:44:14.347262 | orchestrator | ok: [testbed-manager] 2025-04-09 08:44:14.445374 | orchestrator | 2025-04-09 08:44:14.445425 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-04-09 08:44:14.445451 | orchestrator | ok: [testbed-manager] 2025-04-09 08:44:14.521630 | orchestrator | 2025-04-09 08:44:14.521663 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-04-09 08:44:14.521685 | orchestrator | ok: [testbed-manager] => { 2025-04-09 08:44:15.184722 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-04-09 08:44:15.184768 | orchestrator | } 2025-04-09 08:44:15.184783 | orchestrator | 2025-04-09 08:44:15.184798 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-04-09 08:44:15.184820 | orchestrator | ok: [testbed-manager] 2025-04-09 08:44:16.006539 | orchestrator | 2025-04-09 08:44:16.006703 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-04-09 08:44:16.006743 | orchestrator | ok: [testbed-manager] 2025-04-09 08:44:16.086552 | orchestrator | 2025-04-09 08:44:16.086685 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-04-09 08:44:16.086717 | orchestrator | ok: [testbed-manager] 2025-04-09 08:44:16.158319 | orchestrator | 2025-04-09 08:44:16.158350 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-04-09 08:44:16.158372 | orchestrator | ok: [testbed-manager] => { 2025-04-09 08:44:16.225321 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-04-09 08:44:16.225351 | orchestrator | } 2025-04-09 
08:44:16.225364 | orchestrator | 2025-04-09 08:44:16.225377 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-04-09 08:44:16.225395 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:44:16.277221 | orchestrator | 2025-04-09 08:44:16.277251 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-04-09 08:44:16.277271 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:44:16.345645 | orchestrator | 2025-04-09 08:44:16.345678 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-04-09 08:44:16.345698 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:44:16.417958 | orchestrator | 2025-04-09 08:44:16.417988 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-04-09 08:44:16.418007 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:44:16.484339 | orchestrator | 2025-04-09 08:44:16.484366 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-04-09 08:44:16.484384 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:44:16.552053 | orchestrator | 2025-04-09 08:44:16.552085 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-04-09 08:44:16.552104 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:44:17.825577 | orchestrator | 2025-04-09 08:44:17.825695 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-04-09 08:44:17.825736 | orchestrator | changed: [testbed-manager] 2025-04-09 08:44:17.933217 | orchestrator | 2025-04-09 08:44:17.933250 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-04-09 08:44:17.933271 | orchestrator | ok: [testbed-manager] 2025-04-09 08:45:18.014808 | orchestrator | 2025-04-09 08:45:18.014936 | 
orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-04-09 08:45:18.014973 | orchestrator | Pausing for 60 seconds 2025-04-09 08:45:18.117722 | orchestrator | changed: [testbed-manager] 2025-04-09 08:45:18.117798 | orchestrator | 2025-04-09 08:45:18.117816 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-04-09 08:45:18.117846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-04-09 08:49:09.001535 | orchestrator | 2025-04-09 08:49:09.001670 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-04-09 08:49:09.001709 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-04-09 08:49:11.058591 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-04-09 08:49:11.058698 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-04-09 08:49:11.058713 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-04-09 08:49:11.058727 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-04-09 08:49:11.058739 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-04-09 08:49:11.058752 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-04-09 08:49:11.058766 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 
2025-04-09 08:49:11.058789 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-04-09 08:49:11.058812 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-04-09 08:49:11.058834 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-04-09 08:49:11.058856 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-04-09 08:49:11.058877 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-04-09 08:49:11.058898 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-04-09 08:49:11.058922 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-04-09 08:49:11.058947 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-04-09 08:49:11.058970 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-04-09 08:49:11.058995 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-04-09 08:49:11.059013 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-04-09 08:49:11.059026 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-04-09 08:49:11.059039 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 
2025-04-09 08:49:11.059052 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left). 2025-04-09 08:49:11.059095 | orchestrator | changed: [testbed-manager] 2025-04-09 08:49:11.059109 | orchestrator | 2025-04-09 08:49:11.059123 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-04-09 08:49:11.059138 | orchestrator | 2025-04-09 08:49:11.059154 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-09 08:49:11.059185 | orchestrator | ok: [testbed-manager] 2025-04-09 08:49:11.168374 | orchestrator | 2025-04-09 08:49:11.168439 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-04-09 08:49:11.168465 | orchestrator | 2025-04-09 08:49:11.234872 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-04-09 08:49:11.234954 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-04-09 08:49:13.011802 | orchestrator | 2025-04-09 08:49:13.011903 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-04-09 08:49:13.011935 | orchestrator | ok: [testbed-manager] 2025-04-09 08:49:13.071923 | orchestrator | 2025-04-09 08:49:13.071963 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-04-09 08:49:13.071986 | orchestrator | ok: [testbed-manager] 2025-04-09 08:49:13.165791 | orchestrator | 2025-04-09 08:49:13.165828 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-04-09 08:49:13.165850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-04-09 08:49:16.030804 | orchestrator | 
2025-04-09 08:49:16.030937 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-04-09 08:49:16.030976 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-04-09 08:49:16.642532 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-04-09 08:49:16.642634 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-04-09 08:49:16.642652 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-04-09 08:49:16.642667 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-04-09 08:49:16.642681 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-04-09 08:49:16.642699 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-04-09 08:49:16.642713 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-04-09 08:49:16.642727 | orchestrator |
2025-04-09 08:49:16.642743 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-04-09 08:49:16.642774 | orchestrator | changed: [testbed-manager]
2025-04-09 08:49:16.734592 | orchestrator |
2025-04-09 08:49:16.734686 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-04-09 08:49:16.734718 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-04-09 08:49:17.937877 | orchestrator |
2025-04-09 08:49:17.937983 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-04-09 08:49:17.938119 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-04-09 08:49:18.599462 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-04-09 08:49:18.599600 | orchestrator |
2025-04-09 08:49:18.599617 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-04-09 08:49:18.599650 | orchestrator | changed: [testbed-manager]
2025-04-09 08:49:18.663864 | orchestrator |
2025-04-09 08:49:18.663895 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-04-09 08:49:18.663916 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:49:18.727594 | orchestrator |
2025-04-09 08:49:18.727623 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-04-09 08:49:18.727643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-04-09 08:49:20.146759 | orchestrator |
2025-04-09 08:49:20.146888 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-04-09 08:49:20.146929 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-09 08:49:20.845373 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-09 08:49:20.845536 | orchestrator | changed: [testbed-manager]
2025-04-09 08:49:20.845558 | orchestrator |
2025-04-09 08:49:20.845575 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-04-09 08:49:20.845607 | orchestrator | changed: [testbed-manager]
2025-04-09 08:49:20.944690 | orchestrator |
2025-04-09 08:49:20.944764 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-04-09 08:49:20.944791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager
2025-04-09 08:49:21.571883 | orchestrator |
2025-04-09 08:49:21.571992 | orchestrator | TASK [osism.services.manager : Copy secret files] ******************************
2025-04-09 08:49:21.572029 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-09 08:49:22.250726 | orchestrator | changed: [testbed-manager]
2025-04-09 08:49:22.250819 | orchestrator |
2025-04-09 08:49:22.250838 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] *******************
2025-04-09 08:49:22.250869 | orchestrator | changed: [testbed-manager]
2025-04-09 08:49:22.381016 | orchestrator |
2025-04-09 08:49:22.381101 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-04-09 08:49:22.381134 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-04-09 08:49:22.935166 | orchestrator |
2025-04-09 08:49:22.935269 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-04-09 08:49:22.935304 | orchestrator | changed: [testbed-manager]
2025-04-09 08:49:23.368397 | orchestrator |
2025-04-09 08:49:23.368537 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-04-09 08:49:23.368570 | orchestrator | changed: [testbed-manager]
2025-04-09 08:49:24.669829 | orchestrator |
2025-04-09 08:49:24.669945 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-04-09 08:49:24.669982 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-04-09 08:49:25.364012 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-04-09 08:49:25.364149 | orchestrator |
2025-04-09 08:49:25.364213 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-04-09 08:49:25.364249 | orchestrator | changed: [testbed-manager]
2025-04-09 08:49:25.804349 | orchestrator |
2025-04-09 08:49:25.804456 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-04-09 08:49:25.804522 | orchestrator | ok: [testbed-manager]
2025-04-09 08:49:26.163675 | orchestrator |
2025-04-09 08:49:26.163777 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-04-09 08:49:26.163810 | orchestrator | changed: [testbed-manager]
2025-04-09 08:49:26.209243 | orchestrator |
2025-04-09 08:49:26.209270 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-04-09 08:49:26.209288 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:49:26.333171 | orchestrator |
2025-04-09 08:49:26.333201 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-04-09 08:49:26.333234 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-04-09 08:49:26.385533 | orchestrator |
2025-04-09 08:49:26.385567 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-04-09 08:49:26.385586 | orchestrator | ok: [testbed-manager]
2025-04-09 08:49:28.460702 | orchestrator |
2025-04-09 08:49:28.460805 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-04-09 08:49:28.460831 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-04-09 08:49:29.214426 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-04-09 08:49:29.214595 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-04-09 08:49:29.214615 | orchestrator |
2025-04-09 08:49:29.214630 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-04-09 08:49:29.214660 | orchestrator | changed: [testbed-manager]
2025-04-09 08:49:29.964789 | orchestrator |
2025-04-09 08:49:29.964908 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-04-09 08:49:29.964945 | orchestrator | changed: [testbed-manager]
2025-04-09 08:49:30.060347 | orchestrator |
2025-04-09 08:49:30.060411 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-04-09 08:49:30.060440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-04-09 08:49:30.112288 | orchestrator |
2025-04-09 08:49:30.112373 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-04-09 08:49:30.112424 | orchestrator | ok: [testbed-manager]
2025-04-09 08:49:30.827666 | orchestrator |
2025-04-09 08:49:30.827780 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-04-09 08:49:30.827819 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-04-09 08:49:30.909381 | orchestrator |
2025-04-09 08:49:30.909455 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-04-09 08:49:30.909517 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-04-09 08:49:31.666974 | orchestrator |
2025-04-09 08:49:31.667091 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-04-09 08:49:31.667127 | orchestrator | changed: [testbed-manager]
2025-04-09 08:49:32.309836 | orchestrator |
2025-04-09 08:49:32.309939 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-04-09 08:49:32.309971 | orchestrator | ok: [testbed-manager]
2025-04-09 08:49:32.368428 | orchestrator |
2025-04-09 08:49:32.368508 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-04-09 08:49:32.368533 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:49:32.436153 | orchestrator |
2025-04-09 08:49:32.436227 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-04-09 08:49:32.436255 | orchestrator | ok: [testbed-manager]
2025-04-09 08:49:33.309292 | orchestrator |
2025-04-09 08:49:33.309417 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-04-09 08:49:33.309529 | orchestrator | changed: [testbed-manager]
2025-04-09 08:50:14.147861 | orchestrator |
2025-04-09 08:50:14.148004 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-04-09 08:50:14.148042 | orchestrator | changed: [testbed-manager]
2025-04-09 08:50:14.803783 | orchestrator |
2025-04-09 08:50:14.803865 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-04-09 08:50:14.803896 | orchestrator | ok: [testbed-manager]
2025-04-09 08:50:17.386545 | orchestrator |
2025-04-09 08:50:17.386660 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-04-09 08:50:17.386694 | orchestrator | changed: [testbed-manager]
2025-04-09 08:50:17.444657 | orchestrator |
2025-04-09 08:50:17.444688 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-04-09 08:50:17.444709 | orchestrator | ok: [testbed-manager]
2025-04-09 08:50:17.514111 | orchestrator |
2025-04-09 08:50:17.514140 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-04-09 08:50:17.514154 | orchestrator |
2025-04-09 08:50:17.514168 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-04-09 08:50:17.514188 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:51:17.574243 | orchestrator |
2025-04-09 08:51:17.574409 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-04-09 08:51:17.574527 | orchestrator | Pausing for 60 seconds
2025-04-09 08:51:22.007200 | orchestrator | changed: [testbed-manager]
2025-04-09 08:51:22.007328 | orchestrator |
2025-04-09 08:51:22.007349 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-04-09 08:51:22.007382 | orchestrator | changed: [testbed-manager]
2025-04-09 08:52:03.715244 | orchestrator |
2025-04-09 08:52:03.715383 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-04-09 08:52:03.715447 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-04-09 08:52:12.014392 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-04-09 08:52:12.014577 | orchestrator | changed: [testbed-manager]
2025-04-09 08:52:12.014599 | orchestrator |
2025-04-09 08:52:12.014615 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-04-09 08:52:12.014676 | orchestrator | changed: [testbed-manager]
2025-04-09 08:52:12.119122 | orchestrator |
2025-04-09 08:52:12.119214 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-04-09 08:52:12.119248 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-04-09 08:52:12.178348 | orchestrator |
2025-04-09 08:52:12.178398 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-04-09 08:52:12.178456 | orchestrator |
2025-04-09 08:52:12.178472 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-04-09 08:52:12.178495 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:52:12.313711 | orchestrator |
2025-04-09 08:52:12.313781 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:52:12.313800 | orchestrator | testbed-manager : ok=105 changed=56 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0
2025-04-09 08:52:12.313815 | orchestrator |
2025-04-09 08:52:12.313842 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-04-09 08:52:12.321321 | orchestrator | + deactivate
2025-04-09 08:52:12.321352 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-04-09 08:52:12.321368 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-04-09 08:52:12.321382 | orchestrator | + export PATH
2025-04-09 08:52:12.321396 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-04-09 08:52:12.321440 | orchestrator | + '[' -n '' ']'
2025-04-09 08:52:12.321454 | orchestrator | + hash -r
2025-04-09 08:52:12.321468 | orchestrator | + '[' -n '' ']'
2025-04-09 08:52:12.321482 | orchestrator | + unset VIRTUAL_ENV
2025-04-09 08:52:12.321496 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-04-09 08:52:12.321510 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-04-09 08:52:12.321523 | orchestrator | + unset -f deactivate
2025-04-09 08:52:12.321538 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-04-09 08:52:12.321563 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-04-09 08:52:12.322456 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-04-09 08:52:12.322483 | orchestrator | + local max_attempts=60
2025-04-09 08:52:12.322499 | orchestrator | + local name=ceph-ansible
2025-04-09 08:52:12.322514 | orchestrator | + local attempt_num=1
2025-04-09 08:52:12.322535 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-04-09 08:52:12.366896 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-04-09 08:52:12.368488 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-04-09 08:52:12.368519 | orchestrator | + local max_attempts=60
2025-04-09 08:52:12.368535 | orchestrator | + local name=kolla-ansible
2025-04-09 08:52:12.368550 | orchestrator | + local attempt_num=1
2025-04-09 08:52:12.368570 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-04-09 08:52:12.406891 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-04-09 08:52:12.408205 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-04-09 08:52:12.408233 | orchestrator | + local max_attempts=60
2025-04-09 08:52:12.408249 | orchestrator | + local name=osism-ansible
2025-04-09 08:52:12.408264 | orchestrator | + local attempt_num=1
2025-04-09 08:52:12.408285 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-04-09 08:52:12.446313 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-04-09 08:52:13.150311 | orchestrator | + [[ true == \t\r\u\e ]]
2025-04-09 08:52:13.150475 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-04-09 08:52:13.150515 | orchestrator | ++ semver latest 9.0.0
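The `wait_for_container_healthy` calls traced above (xtrace output with `local max_attempts`, `local name`, `local attempt_num`, and a `docker inspect` health-status check) suggest a small bash helper. A minimal sketch reconstructed from that trace follows; the retry branch and the sleep interval are assumptions, since every traced run finds the container healthy on the first check, and plain `docker` is used here in place of the traced `/usr/bin/docker`:

```shell
# Sketch of the wait_for_container_healthy helper seen in the trace above.
# Assumption: on a non-healthy status the helper sleeps and retries up to
# max_attempts times; only the first-check-healthy path is visible in the log.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Query the container health status exactly as the traced command does.
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5  # assumed retry interval, not visible in the trace
    done
}
```

With a healthy container (as in this job), the function returns immediately on the first `docker inspect`, matching the single inspect call per container in the trace.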
2025-04-09 08:52:13.184772 | orchestrator | + [[ -1 -ge 0 ]]
2025-04-09 08:52:13.215144 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-04-09 08:52:13.215219 | orchestrator | + wait_for_container_healthy 60 netbox-netbox-1
2025-04-09 08:52:13.215236 | orchestrator | + local max_attempts=60
2025-04-09 08:52:13.215251 | orchestrator | + local name=netbox-netbox-1
2025-04-09 08:52:13.215265 | orchestrator | + local attempt_num=1
2025-04-09 08:52:13.215279 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' netbox-netbox-1
2025-04-09 08:52:13.215306 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-04-09 08:52:13.222740 | orchestrator | + /opt/configuration/scripts/bootstrap/000-netbox.sh
2025-04-09 08:52:13.222788 | orchestrator | + set -e
2025-04-09 08:52:15.028774 | orchestrator | + osism manage netbox --parallel 4
2025-04-09 08:52:15.028904 | orchestrator | 2025-04-09 08:52:15 | INFO  | It takes a moment until task 0f867def-90f0-46d6-a828-f3e4fd98f2df (netbox-manager) has been started and output is visible here.
2025-04-09 08:52:17.115149 | orchestrator | 2025-04-09 08:52:17 | INFO  | Wait for NetBox service
2025-04-09 08:52:19.265314 | orchestrator |
2025-04-09 08:52:19.266946 | orchestrator | PLAY [Wait for NetBox service] *************************************************
2025-04-09 08:52:19.348173 | orchestrator |
2025-04-09 08:52:19.348863 | orchestrator | TASK [Wait for NetBox service REST API] ****************************************
2025-04-09 08:52:20.614514 | orchestrator | ok: [localhost]
2025-04-09 08:52:20.614624 | orchestrator |
2025-04-09 08:52:20.614648 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:52:20.615105 | orchestrator | 2025-04-09 08:52:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 08:52:20.615364 | orchestrator | 2025-04-09 08:52:20 | INFO  | Please wait and do not abort execution.
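The "Wait for NetBox service REST API" step above polls the API until it answers before the NetBox resources are imported. A hypothetical shell equivalent of that wait is sketched below; the job itself performs it from an Ansible task, so the function name, URL, retry count, and interval here are all assumptions. The `-k` flag mirrors the `IGNORE_SSL_ERRORS is True` behaviour reported later in the log:

```shell
# Hypothetical equivalent of the "Wait for NetBox service REST API" step:
# poll the NetBox API root until it returns HTTP 200.
# URL, retry count, and poll interval are assumptions, not taken from the job.
wait_for_netbox_api() {
    local url="$1"
    local retries="${2:-30}"
    local i=1
    while [ "$i" -le "$retries" ]; do
        # -s silences progress, -k skips TLS verification (IGNORE_SSL_ERRORS),
        # -w '%{http_code}' prints only the response status code.
        code=$(curl -sk -o /dev/null -w '%{http_code}' "$url" || true)
        [ "$code" = "200" ] && return 0
        i=$((i + 1))
        sleep 2  # assumed poll interval
    done
    return 1
}
```

Once the API answers, the bootstrap can proceed, which is what the `ok: [localhost]` result of the wait task above indicates.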
2025-04-09 08:52:20.615435 | orchestrator | localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 08:52:21.189703 | orchestrator | 2025-04-09 08:52:21 | INFO  | Manage devicetypes
2025-04-09 08:52:24.432037 | orchestrator | 2025-04-09 08:52:24 | INFO  | Manage moduletypes
2025-04-09 08:52:24.613142 | orchestrator | 2025-04-09 08:52:24 | INFO  | Manage resources
2025-04-09 08:52:24.628462 | orchestrator | 2025-04-09 08:52:24 | INFO  | Handle file /netbox/resources/100-initialise.yml
2025-04-09 08:52:25.732480 | orchestrator | IGNORE_SSL_ERRORS is True, catching exception and disabling SSL verification.
2025-04-09 08:52:25.733790 | orchestrator | Manufacturer queued for addition: Arista
2025-04-09 08:52:25.735138 | orchestrator | Manufacturer queued for addition: Other
2025-04-09 08:52:25.737335 | orchestrator | Manufacturer Created: Arista - 2
2025-04-09 08:52:25.738821 | orchestrator | Manufacturer Created: Other - 3
2025-04-09 08:52:25.739845 | orchestrator | Device Type Created: Arista - DCS-7050TX3-48C8 - 2
2025-04-09 08:52:25.740642 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 2 - 1
2025-04-09 08:52:25.741701 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 2 - 2
2025-04-09 08:52:25.742727 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 2 - 3
2025-04-09 08:52:25.743874 | orchestrator | Interface Template Created: Ethernet4 - 10GBASE-T (10GE) - 2 - 4
2025-04-09 08:52:25.744333 | orchestrator | Interface Template Created: Ethernet5 - 10GBASE-T (10GE) - 2 - 5
2025-04-09 08:52:25.745167 | orchestrator | Interface Template Created: Ethernet6 - 10GBASE-T (10GE) - 2 - 6
2025-04-09 08:52:25.746595 | orchestrator | Interface Template Created: Ethernet7 - 10GBASE-T (10GE) - 2 - 7
2025-04-09 08:52:25.748918 | orchestrator | Interface Template Created: Ethernet8 - 10GBASE-T (10GE) - 2 - 8
2025-04-09 08:52:25.750874 | orchestrator | Interface Template Created: Ethernet9 - 10GBASE-T (10GE) - 2 - 9
2025-04-09 08:52:25.752322 | orchestrator | Interface Template Created: Ethernet10 - 10GBASE-T (10GE) - 2 - 10
2025-04-09 08:52:25.753885 | orchestrator | Interface Template Created: Ethernet11 - 10GBASE-T (10GE) - 2 - 11
2025-04-09 08:52:25.755482 | orchestrator | Interface Template Created: Ethernet12 - 10GBASE-T (10GE) - 2 - 12
2025-04-09 08:52:25.756657 | orchestrator | Interface Template Created: Ethernet13 - 10GBASE-T (10GE) - 2 - 13
2025-04-09 08:52:25.759291 | orchestrator | Interface Template Created: Ethernet14 - 10GBASE-T (10GE) - 2 - 14
2025-04-09 08:52:25.761441 | orchestrator | Interface Template Created: Ethernet15 - 10GBASE-T (10GE) - 2 - 15
2025-04-09 08:52:25.762461 | orchestrator | Interface Template Created: Ethernet16 - 10GBASE-T (10GE) - 2 - 16
2025-04-09 08:52:25.763131 | orchestrator | Interface Template Created: Ethernet17 - 10GBASE-T (10GE) - 2 - 17
2025-04-09 08:52:25.764046 | orchestrator | Interface Template Created: Ethernet18 - 10GBASE-T (10GE) - 2 - 18
2025-04-09 08:52:25.764883 | orchestrator | Interface Template Created: Ethernet19 - 10GBASE-T (10GE) - 2 - 19
2025-04-09 08:52:25.765559 | orchestrator | Interface Template Created: Ethernet20 - 10GBASE-T (10GE) - 2 - 20
2025-04-09 08:52:25.766247 | orchestrator | Interface Template Created: Ethernet21 - 10GBASE-T (10GE) - 2 - 21
2025-04-09 08:52:25.767150 | orchestrator | Interface Template Created: Ethernet22 - 10GBASE-T (10GE) - 2 - 22
2025-04-09 08:52:25.767690 | orchestrator | Interface Template Created: Ethernet23 - 10GBASE-T (10GE) - 2 - 23
2025-04-09 08:52:25.768169 | orchestrator | Interface Template Created: Ethernet24 - 10GBASE-T (10GE) - 2 - 24
2025-04-09 08:52:25.768691 | orchestrator | Interface Template Created: Ethernet25 - 10GBASE-T (10GE) - 2 - 25
2025-04-09 08:52:25.769227 | orchestrator | Interface Template Created: Ethernet26 - 10GBASE-T (10GE) - 2 - 26
2025-04-09 08:52:25.769677 | orchestrator | Interface Template Created: Ethernet27 - 10GBASE-T (10GE) - 2 - 27
2025-04-09 08:52:25.770318 | orchestrator | Interface Template Created: Ethernet28 - 10GBASE-T (10GE) - 2 - 28
2025-04-09 08:52:25.770578 | orchestrator | Interface Template Created: Ethernet29 - 10GBASE-T (10GE) - 2 - 29
2025-04-09 08:52:25.771056 | orchestrator | Interface Template Created: Ethernet30 - 10GBASE-T (10GE) - 2 - 30
2025-04-09 08:52:25.771496 | orchestrator | Interface Template Created: Ethernet31 - 10GBASE-T (10GE) - 2 - 31
2025-04-09 08:52:25.771852 | orchestrator | Interface Template Created: Ethernet32 - 10GBASE-T (10GE) - 2 - 32
2025-04-09 08:52:25.772275 | orchestrator | Interface Template Created: Ethernet33 - 10GBASE-T (10GE) - 2 - 33
2025-04-09 08:52:25.772703 | orchestrator | Interface Template Created: Ethernet34 - 10GBASE-T (10GE) - 2 - 34
2025-04-09 08:52:25.773075 | orchestrator | Interface Template Created: Ethernet35 - 10GBASE-T (10GE) - 2 - 35
2025-04-09 08:52:25.773524 | orchestrator | Interface Template Created: Ethernet36 - 10GBASE-T (10GE) - 2 - 36
2025-04-09 08:52:25.773850 | orchestrator | Interface Template Created: Ethernet37 - 10GBASE-T (10GE) - 2 - 37
2025-04-09 08:52:25.774360 | orchestrator | Interface Template Created: Ethernet38 - 10GBASE-T (10GE) - 2 - 38
2025-04-09 08:52:25.775012 | orchestrator | Interface Template Created: Ethernet39 - 10GBASE-T (10GE) - 2 - 39
2025-04-09 08:52:25.775451 | orchestrator | Interface Template Created: Ethernet40 - 10GBASE-T (10GE) - 2 - 40
2025-04-09 08:52:25.776009 | orchestrator | Interface Template Created: Ethernet41 - 10GBASE-T (10GE) - 2 - 41
2025-04-09 08:52:25.776040 | orchestrator | Interface Template Created: Ethernet42 - 10GBASE-T (10GE) - 2 - 42
2025-04-09 08:52:25.776286 | orchestrator | Interface Template Created: Ethernet43 - 10GBASE-T (10GE) - 2 - 43
2025-04-09 08:52:25.776741 | orchestrator | Interface Template Created: Ethernet44 - 10GBASE-T (10GE) - 2 - 44
2025-04-09 08:52:25.777162 | orchestrator | Interface Template Created: Ethernet45 - 10GBASE-T (10GE) - 2 - 45
2025-04-09 08:52:25.777414 | orchestrator | Interface Template Created: Ethernet46 - 10GBASE-T (10GE) - 2 - 46
2025-04-09 08:52:25.777764 | orchestrator | Interface Template Created: Ethernet47 - 10GBASE-T (10GE) - 2 - 47
2025-04-09 08:52:25.778143 | orchestrator | Interface Template Created: Ethernet48 - 10GBASE-T (10GE) - 2 - 48
2025-04-09 08:52:25.778500 | orchestrator | Interface Template Created: Ethernet49/1 - QSFP28 (100GE) - 2 - 49
2025-04-09 08:52:25.778759 | orchestrator | Interface Template Created: Ethernet50/1 - QSFP28 (100GE) - 2 - 50
2025-04-09 08:52:25.779165 | orchestrator | Interface Template Created: Ethernet51/1 - QSFP28 (100GE) - 2 - 51
2025-04-09 08:52:25.779553 | orchestrator | Interface Template Created: Ethernet52/1 - QSFP28 (100GE) - 2 - 52
2025-04-09 08:52:25.779913 | orchestrator | Interface Template Created: Ethernet53/1 - QSFP28 (100GE) - 2 - 53
2025-04-09 08:52:25.782212 | orchestrator | Interface Template Created: Ethernet54/1 - QSFP28 (100GE) - 2 - 54
2025-04-09 08:52:25.782442 | orchestrator | Interface Template Created: Ethernet55/1 - QSFP28 (100GE) - 2 - 55
2025-04-09 08:52:25.782468 | orchestrator | Interface Template Created: Ethernet56/1 - QSFP28 (100GE) - 2 - 56
2025-04-09 08:52:25.782484 | orchestrator | Interface Template Created: Management1 - 1000BASE-T (1GE) - 2 - 57
2025-04-09 08:52:25.782500 | orchestrator | Power Port Template Created: PS1 - C14 - 2 - 1
2025-04-09 08:52:25.782515 | orchestrator | Power Port Template Created: PS2 - C14 - 2 - 2
2025-04-09 08:52:25.782531 | orchestrator | Console Port Template Created: Console - RJ-45 - 2 - 1
2025-04-09 08:52:25.782546 | orchestrator | Device Type Created: Other - Baremetal-Device - 3
2025-04-09 08:52:25.782561 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 3 - 58
2025-04-09 08:52:25.782576 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 3 - 59
2025-04-09 08:52:25.782596 | orchestrator | Power Port Template Created: PS1 - C14 - 3 - 3
2025-04-09 08:52:25.782882 | orchestrator | Device Type Created: Other - Manager - 4
2025-04-09 08:52:25.783208 | orchestrator | Interface Template Created: Ethernet0 - 1000BASE-T (1GE) - 4 - 60
2025-04-09 08:52:25.783237 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 4 - 61
2025-04-09 08:52:25.783533 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 4 - 62
2025-04-09 08:52:25.783694 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 4 - 63
2025-04-09 08:52:25.786604 | orchestrator | Power Port Template Created: PS1 - C14 - 4 - 4
2025-04-09 08:52:25.786710 | orchestrator | Device Type Created: Other - Node - 5
2025-04-09 08:52:25.786795 | orchestrator | Interface Template Created: Ethernet0 - 1000BASE-T (1GE) - 5 - 64
2025-04-09 08:52:25.786813 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 5 - 65
2025-04-09 08:52:25.786829 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 5 - 66
2025-04-09 08:52:25.786843 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 5 - 67
2025-04-09 08:52:25.786858 | orchestrator | Power Port Template Created: PS1 - C14 - 5 - 5
2025-04-09 08:52:25.786874 | orchestrator | Device Type Created: Other - Baremetal-Housing - 6
2025-04-09 08:52:25.786889 | orchestrator | Interface Template Created: Ethernet0 - 1000BASE-T (1GE) - 6 - 68
2025-04-09 08:52:25.786909 | orchestrator | Interface Template Created: Ethernet1 - 10GBASE-T (10GE) - 6 - 69
2025-04-09 08:52:25.787097 | orchestrator | Interface Template Created: Ethernet2 - 10GBASE-T (10GE) - 6 - 70
2025-04-09 08:52:25.787127 | orchestrator | Interface Template Created: Ethernet3 - 10GBASE-T (10GE) - 6 - 71
2025-04-09 08:52:25.787390 | orchestrator | Power Port Template Created: PS1 - C14 - 6 - 6
2025-04-09 08:52:25.787633 | orchestrator | Manufacturer queued for addition: .gitkeep
2025-04-09 08:52:25.787780 | orchestrator | Manufacturer Created: .gitkeep - 4
2025-04-09 08:52:25.788041 | orchestrator |
2025-04-09 08:52:25.788337 | orchestrator | PLAY [Manage NetBox resources defined in 100-initialise.yml] *******************
2025-04-09 08:52:25.788506 | orchestrator |
2025-04-09 08:52:25.789355 | orchestrator | TASK [Manage NetBox resource Discworld of type site] ***************************
2025-04-09 08:52:27.043843 | orchestrator | changed: [localhost]
2025-04-09 08:52:27.044109 | orchestrator |
2025-04-09 08:52:27.044519 | orchestrator | TASK [Manage NetBox resource Ankh-Morpork of type location] ********************
2025-04-09 08:52:28.443090 | orchestrator | changed: [localhost]
2025-04-09 08:52:28.446687 | orchestrator |
2025-04-09 08:52:28.447873 | orchestrator | TASK [Manage NetBox resource of type prefix] ***********************************
2025-04-09 08:52:29.737831 | orchestrator | changed: [localhost]
2025-04-09 08:52:29.738693 | orchestrator |
2025-04-09 08:52:30.921899 | orchestrator | TASK [Manage NetBox resource of type prefix] ***********************************
2025-04-09 08:52:30.922088 | orchestrator | changed: [localhost]
2025-04-09 08:52:30.925169 | orchestrator |
2025-04-09 08:52:30.925800 | orchestrator | TASK [Manage NetBox resource of type prefix] ***********************************
2025-04-09 08:52:31.986483 | orchestrator | changed: [localhost]
2025-04-09 08:52:31.986845 | orchestrator |
2025-04-09 08:52:31.986890 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:52:33.231186 | orchestrator | changed: [localhost]
2025-04-09 08:52:33.231693 | orchestrator |
2025-04-09 08:52:33.234181 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:52:34.234495 | orchestrator | changed: [localhost]
2025-04-09 08:52:34.235242 | orchestrator |
2025-04-09 08:52:34.235705 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:52:34.236320 | orchestrator | 2025-04-09 08:52:34 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 08:52:34.236608 | orchestrator | 2025-04-09 08:52:34 | INFO  | Please wait and do not abort execution.
2025-04-09 08:52:34.238249 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 08:52:34.470202 | orchestrator | 2025-04-09 08:52:34 | INFO  | Handle file /netbox/resources/200-rack-1000.yml
2025-04-09 08:52:35.572281 | orchestrator |
2025-04-09 08:52:35.574675 | orchestrator | PLAY [Manage NetBox resources defined in 200-rack-1000.yml] ********************
2025-04-09 08:52:35.628796 | orchestrator |
2025-04-09 08:52:35.629239 | orchestrator | TASK [Manage NetBox resource 1000 of type rack] ********************************
2025-04-09 08:52:37.082375 | orchestrator | changed: [localhost]
2025-04-09 08:52:37.086932 | orchestrator |
2025-04-09 08:52:37.087429 | orchestrator | TASK [Manage NetBox resource testbed-switch-0 of type device] ******************
2025-04-09 08:52:43.869165 | orchestrator | changed: [localhost]
2025-04-09 08:52:43.871350 | orchestrator |
2025-04-09 08:52:43.872253 | orchestrator | TASK [Manage NetBox resource testbed-switch-1 of type device] ******************
2025-04-09 08:52:49.760774 | orchestrator | changed: [localhost]
2025-04-09 08:52:49.761348 | orchestrator |
2025-04-09 08:52:49.762220 | orchestrator | TASK [Manage NetBox resource testbed-switch-2 of type device] ******************
2025-04-09 08:52:55.636795 | orchestrator | changed: [localhost]
2025-04-09 08:52:55.641007 | orchestrator |
2025-04-09 08:52:55.641602 | orchestrator | TASK [Manage NetBox resource testbed-switch-oob of type device] ****************
2025-04-09 08:53:02.053154 | orchestrator | changed: [localhost]
2025-04-09 08:53:02.053937 | orchestrator |
2025-04-09 08:53:02.054188 | orchestrator | TASK [Manage NetBox resource testbed-manager of type device] *******************
2025-04-09 08:53:04.375215 | orchestrator | changed: [localhost]
2025-04-09 08:53:04.379271 | orchestrator |
2025-04-09 08:53:06.634791 | orchestrator | TASK [Manage NetBox resource testbed-node-0 of type device] ********************
2025-04-09 08:53:06.634909 | orchestrator | changed: [localhost]
2025-04-09 08:53:06.635485 | orchestrator |
2025-04-09 08:53:06.636129 | orchestrator | TASK [Manage NetBox resource testbed-node-1 of type device] ********************
2025-04-09 08:53:09.449846 | orchestrator | changed: [localhost]
2025-04-09 08:53:09.453478 | orchestrator |
2025-04-09 08:53:09.453925 | orchestrator | TASK [Manage NetBox resource testbed-node-2 of type device] ********************
2025-04-09 08:53:11.792474 | orchestrator | changed: [localhost]
2025-04-09 08:53:14.320606 | orchestrator |
2025-04-09 08:53:14.320714 | orchestrator | TASK [Manage NetBox resource testbed-node-3 of type device] ********************
2025-04-09 08:53:14.320744 | orchestrator | changed: [localhost]
2025-04-09 08:53:14.325633 | orchestrator |
2025-04-09 08:53:16.484997 | orchestrator | TASK [Manage NetBox resource testbed-node-4 of type device] ********************
2025-04-09 08:53:16.485114 | orchestrator | changed: [localhost]
2025-04-09 08:53:16.488933 | orchestrator |
2025-04-09 08:53:16.489879 | orchestrator | TASK [Manage NetBox resource testbed-node-5 of type device] ********************
2025-04-09 08:53:18.649846 | orchestrator | changed: [localhost]
2025-04-09 08:53:18.650552 | orchestrator |
2025-04-09 08:53:18.650800 | orchestrator | TASK [Manage NetBox resource testbed-node-6 of type device] ********************
2025-04-09 08:53:20.857790 | orchestrator | changed: [localhost]
2025-04-09 08:53:20.859592 | orchestrator |
2025-04-09 08:53:20.860988 | orchestrator | TASK [Manage NetBox resource testbed-node-7 of type device] ********************
2025-04-09 08:53:23.057671 | orchestrator | changed: [localhost]
2025-04-09 08:53:23.060227 | orchestrator |
2025-04-09 08:53:23.061583 | orchestrator | TASK [Manage NetBox resource testbed-node-8 of type device] ********************
2025-04-09 08:53:25.260963 | orchestrator | changed: [localhost]
2025-04-09 08:53:25.264498 | orchestrator |
2025-04-09 08:53:28.129097 | orchestrator | TASK [Manage NetBox resource testbed-node-9 of type device] ********************
2025-04-09 08:53:28.129420 | orchestrator | changed: [localhost]
2025-04-09 08:53:28.129462 | orchestrator |
2025-04-09 08:53:28.130240 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:53:28.130479 | orchestrator | 2025-04-09 08:53:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 08:53:28.130513 | orchestrator | 2025-04-09 08:53:28 | INFO  | Please wait and do not abort execution.
2025-04-09 08:53:28.132046 | orchestrator | localhost : ok=16 changed=16 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 08:53:28.369461 | orchestrator | 2025-04-09 08:53:28 | INFO  | Handle file /netbox/resources/300-testbed-switch-0.yml
2025-04-09 08:53:28.370938 | orchestrator | 2025-04-09 08:53:28 | INFO  | Handle file /netbox/resources/300-testbed-node-9.yml
2025-04-09 08:53:28.390503 | orchestrator | 2025-04-09 08:53:28 | INFO  | Handle file /netbox/resources/300-testbed-node-1.yml
2025-04-09 08:53:28.404819 | orchestrator | 2025-04-09 08:53:28 | INFO  | Handle file /netbox/resources/300-testbed-node-3.yml
2025-04-09 08:53:29.572430 | orchestrator |
2025-04-09 08:53:29.573579 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-3.yml] ***************
2025-04-09 08:53:29.573904 | orchestrator |
2025-04-09 08:53:29.574405 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-9.yml] ***************
2025-04-09 08:53:29.620899 | orchestrator |
2025-04-09 08:53:29.621496 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:29.622295 | orchestrator |
2025-04-09 08:53:29.622566 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:29.630739 | orchestrator |
2025-04-09 08:53:29.631401 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-0.yml] *************
2025-04-09 08:53:29.638869 | orchestrator |
2025-04-09 08:53:29.639075 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-1.yml] ***************
2025-04-09 08:53:29.692325 | orchestrator |
2025-04-09 08:53:29.692725 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:29.694456 | orchestrator |
2025-04-09 08:53:29.696430 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:32.878587 | orchestrator | changed: [localhost]
2025-04-09 08:53:32.887590 | orchestrator |
2025-04-09 08:53:33.178594 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:33.178704 | orchestrator | changed: [localhost]
2025-04-09 08:53:33.186864 | orchestrator |
2025-04-09 08:53:33.187171 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:33.606549 | orchestrator | changed: [localhost]
2025-04-09 08:53:33.611302 | orchestrator |
2025-04-09 08:53:33.612512 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:53:35.275936 | orchestrator | changed: [localhost]
2025-04-09 08:53:35.276143 | orchestrator |
2025-04-09 08:53:35.276178 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:35.618872 | orchestrator | changed: [localhost]
2025-04-09 08:53:35.627872 | orchestrator |
2025-04-09 08:53:35.628217 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:35.823812 | orchestrator | changed: [localhost]
2025-04-09 08:53:35.823966 | orchestrator |
2025-04-09 08:53:35.824454 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:53:35.824726 | orchestrator | 2025-04-09 08:53:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 08:53:35.832561 | orchestrator | 2025-04-09 08:53:35 | INFO  | Please wait and do not abort execution.
2025-04-09 08:53:35.832794 | orchestrator | localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 08:53:36.061526 | orchestrator | 2025-04-09 08:53:36 | INFO  | Handle file /netbox/resources/300-testbed-node-6.yml
2025-04-09 08:53:37.041292 | orchestrator |
2025-04-09 08:53:37.043268 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-6.yml] ***************
2025-04-09 08:53:37.085346 | orchestrator |
2025-04-09 08:53:37.085575 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:37.399782 | orchestrator | changed: [localhost]
2025-04-09 08:53:38.971167 | orchestrator |
2025-04-09 08:53:38.971283 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:38.971317 | orchestrator | changed: [localhost]
2025-04-09 08:53:39.129179 | orchestrator |
2025-04-09 08:53:39.129261 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:39.129290 | orchestrator | changed: [localhost]
2025-04-09 08:53:39.133508 | orchestrator |
2025-04-09 08:53:40.504635 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:53:40.504749 | orchestrator | changed:
[localhost] 2025-04-09 08:53:40.505308 | orchestrator | 2025-04-09 08:53:40.505752 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-09 08:53:40.552765 | orchestrator | changed: [localhost] 2025-04-09 08:53:40.560327 | orchestrator | 2025-04-09 08:53:40.832236 | orchestrator | TASK [Manage NetBox resource of type ip_address] ******************************* 2025-04-09 08:53:40.832304 | orchestrator | changed: [localhost] 2025-04-09 08:53:40.836553 | orchestrator | 2025-04-09 08:53:41.836191 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-09 08:53:41.836319 | orchestrator | changed: [localhost] 2025-04-09 08:53:41.837115 | orchestrator | 2025-04-09 08:53:42.079163 | orchestrator | TASK [Manage NetBox resource testbed-node-3 of type device] ******************** 2025-04-09 08:53:42.079249 | orchestrator | changed: [localhost] 2025-04-09 08:53:42.083350 | orchestrator | 2025-04-09 08:53:42.083848 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-09 08:53:42.531102 | orchestrator | changed: [localhost] 2025-04-09 08:53:42.533213 | orchestrator | 2025-04-09 08:53:42.533357 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-09 08:53:42.538452 | orchestrator | changed: [localhost] 2025-04-09 08:53:42.538744 | orchestrator | 2025-04-09 08:53:43.383915 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************ 2025-04-09 08:53:43.384033 | orchestrator | changed: [localhost] 2025-04-09 08:53:43.387068 | orchestrator | 2025-04-09 08:53:43.387276 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 08:53:43.387447 | orchestrator | 2025-04-09 08:53:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-04-09 08:53:43.387679 | orchestrator | 2025-04-09 08:53:43 | INFO  | Please wait and do not abort execution.
2025-04-09 08:53:43.388078 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 08:53:43.583746 | orchestrator | 2025-04-09 08:53:43 | INFO  | Handle file /netbox/resources/300-testbed-switch-2.yml
2025-04-09 08:53:43.901915 | orchestrator | changed: [localhost]
2025-04-09 08:53:44.415154 | orchestrator |
2025-04-09 08:53:44.415259 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:44.415292 | orchestrator | changed: [localhost]
2025-04-09 08:53:44.417613 | orchestrator |
2025-04-09 08:53:44.591846 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:53:44.591889 | orchestrator |
2025-04-09 08:53:44.636501 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-2.yml] *************
2025-04-09 08:53:44.636539 | orchestrator |
2025-04-09 08:53:44.770452 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:44.770512 | orchestrator | changed: [localhost]
2025-04-09 08:53:44.771326 | orchestrator |
2025-04-09 08:53:44.771553 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:53:45.682270 | orchestrator | changed: [localhost]
2025-04-09 08:53:45.683630 | orchestrator |
2025-04-09 08:53:45.683975 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:53:45.878706 | orchestrator | changed: [localhost]
2025-04-09 08:53:45.885126 | orchestrator |
2025-04-09 08:53:45.885543 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:53:46.300785 | orchestrator | changed: [localhost]
2025-04-09 08:53:46.301097 | orchestrator |
2025-04-09 08:53:46.302147 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:53:47.102234 | orchestrator | changed: [localhost]
2025-04-09 08:53:47.103121 | orchestrator |
2025-04-09 08:53:47.276462 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:53:47.276504 | orchestrator | changed: [localhost]
2025-04-09 08:53:47.280840 | orchestrator |
2025-04-09 08:53:47.281857 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:53:47.357741 | orchestrator | changed: [localhost]
2025-04-09 08:53:47.358224 | orchestrator |
2025-04-09 08:53:47.360405 | orchestrator | TASK [Manage NetBox resource testbed-node-1 of type device] ********************
2025-04-09 08:53:47.672145 | orchestrator | changed: [localhost]
2025-04-09 08:53:47.674155 | orchestrator |
2025-04-09 08:53:47.674729 | orchestrator | TASK [Manage NetBox resource testbed-node-6 of type device] ********************
2025-04-09 08:53:48.815218 | orchestrator | changed: [localhost]
2025-04-09 08:53:48.828071 | orchestrator |
2025-04-09 08:53:48.828201 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:49.229190 | orchestrator | changed: [localhost]
2025-04-09 08:53:49.232776 | orchestrator |
2025-04-09 08:53:49.233918 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:53:49.234172 | orchestrator | 2025-04-09 08:53:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 08:53:49.234517 | orchestrator | 2025-04-09 08:53:49 | INFO  | Please wait and do not abort execution.
2025-04-09 08:53:49.235759 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 08:53:49.466067 | orchestrator | changed: [localhost]
2025-04-09 08:53:49.474065 | orchestrator |
2025-04-09 08:53:49.475629 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:53:49.476314 | orchestrator | 2025-04-09 08:53:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 08:53:49.476331 | orchestrator | 2025-04-09 08:53:49 | INFO  | Please wait and do not abort execution.
2025-04-09 08:53:49.476342 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 08:53:49.494885 | orchestrator | 2025-04-09 08:53:49 | INFO  | Handle file /netbox/resources/300-testbed-node-5.yml
2025-04-09 08:53:49.551984 | orchestrator | changed: [localhost]
2025-04-09 08:53:49.553575 | orchestrator |
2025-04-09 08:53:49.553901 | orchestrator | TASK [Manage NetBox resource testbed-node-9 of type device] ********************
2025-04-09 08:53:49.710920 | orchestrator | 2025-04-09 08:53:49 | INFO  | Handle file /netbox/resources/300-testbed-node-8.yml
2025-04-09 08:53:50.645899 | orchestrator |
2025-04-09 08:53:50.647969 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-5.yml] ***************
2025-04-09 08:53:50.736437 | orchestrator |
2025-04-09 08:53:51.091692 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:51.091808 | orchestrator |
2025-04-09 08:53:51.092330 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-8.yml] ***************
2025-04-09 08:53:51.149339 | orchestrator | changed: [localhost]
2025-04-09 08:53:51.155986 | orchestrator |
2025-04-09 08:53:51.156329 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:51.156808 | orchestrator |
2025-04-09 08:53:51.157090 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:53:51.157501 | orchestrator | 2025-04-09 08:53:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 08:53:51.157580 | orchestrator | 2025-04-09 08:53:51 | INFO  | Please wait and do not abort execution.
2025-04-09 08:53:51.157996 | orchestrator | localhost : ok=3 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 08:53:51.425260 | orchestrator | 2025-04-09 08:53:51 | INFO  | Handle file /netbox/resources/300-testbed-node-0.yml
2025-04-09 08:53:51.639550 | orchestrator | changed: [localhost]
2025-04-09 08:53:51.639959 | orchestrator |
2025-04-09 08:53:51.640291 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:53:51.642211 | orchestrator | 2025-04-09 08:53:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 08:53:51.648455 | orchestrator | 2025-04-09 08:53:51 | INFO  | Please wait and do not abort execution.
2025-04-09 08:53:51.648490 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 08:53:52.010189 | orchestrator | 2025-04-09 08:53:52 | INFO  | Handle file /netbox/resources/300-testbed-manager.yml
2025-04-09 08:53:52.710645 | orchestrator |
2025-04-09 08:53:52.713527 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-0.yml] ***************
2025-04-09 08:53:52.770691 | orchestrator |
2025-04-09 08:53:52.772071 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:53.150798 | orchestrator |
2025-04-09 08:53:53.151884 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-manager.yml] **************
2025-04-09 08:53:53.224685 | orchestrator |
2025-04-09 08:53:54.324896 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:54.325756 | orchestrator | changed: [localhost]
2025-04-09 08:53:54.349250 | orchestrator |
2025-04-09 08:53:54.349329 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:54.349355 | orchestrator | changed: [localhost]
2025-04-09 08:53:54.351761 | orchestrator |
2025-04-09 08:53:54.352775 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:55.275545 | orchestrator | changed: [localhost]
2025-04-09 08:53:55.282504 | orchestrator |
2025-04-09 08:53:55.283480 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:55.811805 | orchestrator | changed: [localhost]
2025-04-09 08:53:55.813009 | orchestrator |
2025-04-09 08:53:55.813530 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:56.472295 | orchestrator | changed: [localhost]
2025-04-09 08:53:56.478545 | orchestrator |
2025-04-09 08:53:56.479325 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:56.583684 | orchestrator | changed: [localhost]
2025-04-09 08:53:56.586742 | orchestrator |
2025-04-09 08:53:57.852455 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:57.852578 | orchestrator | changed: [localhost]
2025-04-09 08:53:57.853281 | orchestrator |
2025-04-09 08:53:57.853810 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:58.069082 | orchestrator | changed: [localhost]
2025-04-09 08:53:58.074586 | orchestrator |
2025-04-09 08:53:58.075334 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:58.576343 | orchestrator | changed: [localhost]
2025-04-09 08:53:58.579300 | orchestrator |
2025-04-09 08:53:58.583147 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:59.048746 | orchestrator | changed: [localhost]
2025-04-09 08:53:59.051462 | orchestrator |
2025-04-09 08:53:59.051502 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:53:59.687734 | orchestrator | changed: [localhost]
2025-04-09 08:53:59.692248 | orchestrator |
2025-04-09 08:53:59.692387 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:54:00.138437 | orchestrator | changed: [localhost]
2025-04-09 08:54:00.138632 | orchestrator |
2025-04-09 08:54:00.138997 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:54:00.325485 | orchestrator | changed: [localhost]
2025-04-09 08:54:00.333576 | orchestrator |
2025-04-09 08:54:00.788830 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:54:00.788939 | orchestrator | changed: [localhost]
2025-04-09 08:54:00.793313 | orchestrator |
2025-04-09 08:54:00.794901 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:54:01.542819 | orchestrator | changed: [localhost]
2025-04-09 08:54:01.543330 | orchestrator |
2025-04-09 08:54:01.823916 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:54:01.823957 | orchestrator | changed: [localhost]
2025-04-09 08:54:01.827787 | orchestrator |
2025-04-09 08:54:01.828062 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:54:01.845955 | orchestrator | changed: [localhost]
2025-04-09 08:54:01.848906 | orchestrator |
2025-04-09 08:54:01.848996 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:54:02.216033 | orchestrator | changed: [localhost]
2025-04-09 08:54:02.219355 | orchestrator |
2025-04-09 08:54:02.219658 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:54:02.952258 | orchestrator | changed: [localhost]
2025-04-09 08:54:02.954724 | orchestrator |
2025-04-09 08:54:02.957029 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:54:03.114192 | orchestrator | changed: [localhost]
2025-04-09 08:54:03.119237 | orchestrator |
2025-04-09 08:54:03.328307 | orchestrator | TASK [Manage NetBox resource testbed-node-8 of type device] ********************
2025-04-09 08:54:03.328357 | orchestrator | changed: [localhost]
2025-04-09 08:54:03.329114 | orchestrator |
2025-04-09 08:54:03.329616 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:54:03.469632 | orchestrator | changed: [localhost]
2025-04-09 08:54:03.475328 | orchestrator |
2025-04-09 08:54:03.476379 | orchestrator | TASK [Manage NetBox resource testbed-node-5 of type device] ********************
2025-04-09 08:54:04.475317 | orchestrator | changed: [localhost]
2025-04-09 08:54:04.475876 | orchestrator |
2025-04-09 08:54:04.476028 | orchestrator | TASK [Manage NetBox resource testbed-node-0 of type device] ********************
2025-04-09 08:54:04.651970 | orchestrator | changed: [localhost]
2025-04-09 08:54:04.659621 | orchestrator |
2025-04-09 08:54:04.954660 | orchestrator | TASK [Manage NetBox resource testbed-manager of type device] *******************
2025-04-09 08:54:04.954786 | orchestrator | changed: [localhost]
2025-04-09 08:54:04.954849 | orchestrator |
2025-04-09 08:54:04.954870 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:54:04.955143 | orchestrator | 2025-04-09 08:54:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 08:54:04.958407 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 08:54:05.170856 | orchestrator | 2025-04-09 08:54:04 | INFO  | Please wait and do not abort execution.
2025-04-09 08:54:05.170964 | orchestrator | changed: [localhost]
2025-04-09 08:54:05.175870 | orchestrator |
2025-04-09 08:54:05.180500 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:54:05.180944 | orchestrator | 2025-04-09 08:54:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 08:54:05.181146 | orchestrator | 2025-04-09 08:54:05 | INFO  | Please wait and do not abort execution.
2025-04-09 08:54:05.181175 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 08:54:05.221891 | orchestrator | 2025-04-09 08:54:05 | INFO  | Handle file /netbox/resources/300-testbed-node-4.yml
2025-04-09 08:54:05.451559 | orchestrator | 2025-04-09 08:54:05 | INFO  | Handle file /netbox/resources/300-testbed-node-7.yml
2025-04-09 08:54:06.180575 | orchestrator | changed: [localhost]
2025-04-09 08:54:06.185264 | orchestrator |
2025-04-09 08:54:06.185403 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:54:06.185426 | orchestrator | 2025-04-09 08:54:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 08:54:06.185446 | orchestrator | 2025-04-09 08:54:06 | INFO  | Please wait and do not abort execution.
2025-04-09 08:54:06.186797 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 08:54:06.285969 | orchestrator |
2025-04-09 08:54:06.333107 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-4.yml] ***************
2025-04-09 08:54:06.333150 | orchestrator |
2025-04-09 08:54:06.333389 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:54:06.364391 | orchestrator | changed: [localhost]
2025-04-09 08:54:06.368105 | orchestrator |
2025-04-09 08:54:06.368450 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:54:06.368718 | orchestrator | 2025-04-09 08:54:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 08:54:06.369014 | orchestrator | 2025-04-09 08:54:06 | INFO  | Please wait and do not abort execution.
2025-04-09 08:54:06.369814 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 08:54:06.424789 | orchestrator | 2025-04-09 08:54:06 | INFO  | Handle file /netbox/resources/300-testbed-node-2.yml
2025-04-09 08:54:06.548936 | orchestrator |
2025-04-09 08:54:06.602176 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-7.yml] ***************
2025-04-09 08:54:06.602218 | orchestrator |
2025-04-09 08:54:06.602711 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:54:06.621204 | orchestrator | 2025-04-09 08:54:06 | INFO  | Handle file /netbox/resources/300-testbed-switch-1.yml
2025-04-09 08:54:07.534752 | orchestrator |
2025-04-09 08:54:07.581758 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-node-2.yml] ***************
2025-04-09 08:54:07.581803 | orchestrator |
2025-04-09 08:54:07.582978 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:54:07.789349 | orchestrator |
2025-04-09 08:54:07.844092 | orchestrator | PLAY [Manage NetBox resources defined in 300-testbed-switch-1.yml] *************
2025-04-09 08:54:07.844148 | orchestrator |
2025-04-09 08:54:07.845093 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:54:08.593979 | orchestrator | changed: [localhost]
2025-04-09 08:54:08.596509 | orchestrator |
2025-04-09 08:54:08.596962 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:54:09.390124 | orchestrator | changed: [localhost]
2025-04-09 08:54:09.392429 | orchestrator |
2025-04-09 08:54:09.793793 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:54:09.793899 | orchestrator | changed: [localhost]
2025-04-09 08:54:09.802422 | orchestrator |
2025-04-09 08:54:10.177594 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:54:10.177743 | orchestrator | changed: [localhost]
2025-04-09 08:54:10.183035 | orchestrator |
2025-04-09 08:54:10.184090 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:54:10.760791 | orchestrator | changed: [localhost]
2025-04-09 08:54:10.769411 | orchestrator |
2025-04-09 08:54:10.774528 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:54:11.840093 | orchestrator | changed: [localhost]
2025-04-09 08:54:11.840316 | orchestrator |
2025-04-09 08:54:11.840926 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:54:11.985347 | orchestrator | 2025-04-09 08:54:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 08:54:11.985480 | orchestrator | 2025-04-09 08:54:11 | INFO  | Please wait and do not abort execution.
2025-04-09 08:54:11.985499 | orchestrator | localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 08:54:11.985528 | orchestrator | changed: [localhost]
2025-04-09 08:54:11.991579 | orchestrator |
2025-04-09 08:54:11.993045 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:54:12.502512 | orchestrator | changed: [localhost]
2025-04-09 08:54:12.505286 | orchestrator |
2025-04-09 08:54:12.506070 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:54:12.832820 | orchestrator | changed: [localhost]
2025-04-09 08:54:12.833899 | orchestrator |
2025-04-09 08:54:12.833934 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:54:14.049778 | orchestrator | changed: [localhost]
2025-04-09 08:54:14.053742 | orchestrator |
2025-04-09 08:54:14.054119 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:54:14.511632 | orchestrator | changed: [localhost]
2025-04-09 08:54:14.519215 | orchestrator |
2025-04-09 08:54:14.519609 | orchestrator | TASK [Manage NetBox resource of type cable] ************************************
2025-04-09 08:54:14.939842 | orchestrator | changed: [localhost]
2025-04-09 08:54:14.944250 | orchestrator |
2025-04-09 08:54:14.944901 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:54:16.543169 | orchestrator | changed: [localhost]
2025-04-09 08:54:16.546702 | orchestrator |
2025-04-09 08:54:16.547427 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:54:16.556989 | orchestrator | changed: [localhost]
2025-04-09 08:54:16.561256 | orchestrator |
2025-04-09 08:54:16.561775 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:54:16.571088 | orchestrator | changed: [localhost]
2025-04-09 08:54:16.572333 | orchestrator |
2025-04-09 08:54:16.573425 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:54:18.139941 | orchestrator | changed: [localhost]
2025-04-09 08:54:18.142801 | orchestrator |
2025-04-09 08:54:18.143913 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:54:18.230852 | orchestrator | changed: [localhost]
2025-04-09 08:54:18.232275 | orchestrator |
2025-04-09 08:54:18.232307 | orchestrator | TASK [Manage NetBox resource of type ip_address] *******************************
2025-04-09 08:54:18.381857 | orchestrator | changed: [localhost]
2025-04-09 08:54:18.386310 | orchestrator |
2025-04-09 08:54:18.386765 | orchestrator | TASK [Manage NetBox resource testbed-node-4 of type device] ********************
2025-04-09 08:54:19.624704 | orchestrator | changed: [localhost]
2025-04-09 08:54:19.627803 | orchestrator |
2025-04-09 08:54:19.628960 | orchestrator | TASK [Manage NetBox resource testbed-node-7 of type device] ********************
2025-04-09 08:54:20.076695 | orchestrator | changed: [localhost]
2025-04-09 08:54:20.080885 | orchestrator |
2025-04-09 08:54:20.153191 | orchestrator | TASK [Manage NetBox resource testbed-node-2 of type device] ********************
2025-04-09 08:54:20.153233 | orchestrator | changed: [localhost]
2025-04-09 08:54:20.154495 | orchestrator |
2025-04-09 08:54:20.154742 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:54:20.155311 | orchestrator | 2025-04-09 08:54:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 08:54:20.155442 | orchestrator | 2025-04-09 08:54:20 | INFO  | Please wait and do not abort execution.
2025-04-09 08:54:20.156217 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 08:54:21.419945 | orchestrator | changed: [localhost]
2025-04-09 08:54:21.422417 | orchestrator |
2025-04-09 08:54:21.422464 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:54:21.423225 | orchestrator | 2025-04-09 08:54:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 08:54:21.423309 | orchestrator | 2025-04-09 08:54:21 | INFO  | Please wait and do not abort execution.
2025-04-09 08:54:21.423497 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 08:54:21.919128 | orchestrator | changed: [localhost]
2025-04-09 08:54:21.919316 | orchestrator |
2025-04-09 08:54:21.919782 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:54:21.920104 | orchestrator | 2025-04-09 08:54:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 08:54:21.920204 | orchestrator | 2025-04-09 08:54:21 | INFO  | Please wait and do not abort execution.
2025-04-09 08:54:21.920915 | orchestrator | localhost : ok=7 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 08:54:22.142591 | orchestrator | 2025-04-09 08:54:22 | INFO  | Runtime: 125.0301s
2025-04-09 08:54:22.539666 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-04-09 08:54:22.766119 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-04-09 08:54:22.772347 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:quincy "/entrypoint.sh osis…" ceph-ansible 4 minutes ago Up 3 minutes (healthy)
2025-04-09 08:54:22.772446 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.1 "/entrypoint.sh osis…" kolla-ansible 4 minutes ago Up 3 minutes (healthy)
2025-04-09 08:54:22.772463 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 4 minutes ago Up 4 minutes (healthy) 192.168.16.5:8000->8000/tcp
2025-04-09 08:54:22.772507 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server 4 minutes ago Up 3 minutes (healthy) 8000/tcp
2025-04-09 08:54:22.772533 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 4 minutes ago Up 4 minutes (healthy)
2025-04-09 08:54:22.772548 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" conductor 4 minutes ago Up 4 minutes (healthy)
2025-04-09 08:54:22.772562 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 4 minutes ago Up 4 minutes (healthy)
2025-04-09 08:54:22.772576 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 4 minutes ago Up 3 minutes (healthy)
2025-04-09 08:54:22.772590 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 4 minutes ago Up 4 minutes (healthy)
2025-04-09 08:54:22.772604 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb 4 minutes ago Up 4 minutes (healthy) 3306/tcp
2025-04-09 08:54:22.772618 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" netbox 4 minutes ago Up 4 minutes (healthy)
2025-04-09 08:54:22.772632 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 4 minutes ago Up 4 minutes (healthy)
2025-04-09 08:54:22.772649 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 4 minutes ago Up 4 minutes (healthy) 6379/tcp
2025-04-09 08:54:22.772663 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" watchdog 4 minutes ago Up 4 minutes (healthy)
2025-04-09 08:54:22.772677 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 4 minutes ago Up 3 minutes (healthy)
2025-04-09 08:54:22.772691 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 4 minutes ago Up 3 minutes (healthy)
2025-04-09 08:54:22.772705 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 4 minutes ago Up 4 minutes (healthy)
2025-04-09 08:54:22.772727 | orchestrator | + docker compose --project-directory /opt/netbox ps
2025-04-09 08:54:22.922226 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-04-09 08:54:22.930518 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.2.2 "/usr/bin/tini -- /o…" netbox 10 minutes ago Up 9 minutes (healthy)
2025-04-09 08:54:22.930567 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.2.2 "/opt/netbox/venv/bi…" netbox-worker 10 minutes ago Up 5 minutes (healthy)
2025-04-09 08:54:22.930582 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.8-alpine "docker-entrypoint.s…" postgres 10 minutes ago Up 10 minutes (healthy) 5432/tcp
2025-04-09 08:54:22.930598 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 10 minutes ago Up 10 minutes (healthy) 6379/tcp
2025-04-09 08:54:22.930642 | orchestrator | ++ semver latest 7.0.0
2025-04-09 08:54:22.985133 | orchestrator | + [[ -1 -ge 0 ]]
2025-04-09 08:54:22.990059 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-04-09 08:54:22.990095 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-04-09 08:54:22.990117 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-04-09 08:54:24.610847 | orchestrator | 2025-04-09 08:54:24 | INFO  | Task 906333e9-ba1e-4066-8771-2c5ab5daa108 (resolvconf) was prepared for execution.
2025-04-09 08:54:28.468915 | orchestrator | 2025-04-09 08:54:24 | INFO  | It takes a moment until task 906333e9-ba1e-4066-8771-2c5ab5daa108 (resolvconf) has been started and output is visible here.
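The shell trace above shows a version gate: `semver latest 7.0.0` returns `-1`, the `[[ -1 -ge 0 ]]` test fails, and the script falls back to the explicit `latest` check before patching `ansible.cfg`. The testbed's actual `semver` helper is not shown in this log; the following is a minimal sketch under the assumption that it prints `-1`/`0`/`1` for older/equal/newer, treating the literal tag `latest` as a special case, with `sort -V` standing in for real semantic-version parsing.

```shell
#!/bin/sh
# Hypothetical stand-in for the testbed's `semver` helper (not the real one).
semver_cmp() {
    a="$1"; b="$2"
    # Observed in the log: `semver latest 7.0.0` printed -1.
    [ "$a" = "latest" ] && { printf '%s\n' -1; return; }
    [ "$a" = "$b" ] && { printf '%s\n' 0; return; }
    # sort -V orders version strings; the smaller one sorts first.
    lower=$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n1)
    if [ "$lower" = "$a" ]; then printf '%s\n' -1; else printf '%s\n' 1; fi
}

# The gate from the trace: patch the config if the manager version is
# at least 7.0.0 OR is the moving "latest" tag.
MANAGER_VERSION="latest"
if [ "$(semver_cmp "$MANAGER_VERSION" 7.0.0)" -ge 0 ] || [ "$MANAGER_VERSION" = "latest" ]; then
    echo "apply still_alive callback"   # prints: apply still_alive callback
fi
```

With `MANAGER_VERSION=latest` this takes the same path as the trace: the numeric comparison fails (`-1 -ge 0`), the string comparison succeeds, and the patch step runs.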
2025-04-09 08:54:28.469061 | orchestrator |
2025-04-09 08:54:28.469830 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-04-09 08:54:28.470123 | orchestrator |
2025-04-09 08:54:28.471190 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-04-09 08:54:28.472970 | orchestrator | Wednesday 09 April 2025 08:54:28 +0000 (0:00:00.147) 0:00:00.147 *******
2025-04-09 08:54:32.071545 | orchestrator | ok: [testbed-manager]
2025-04-09 08:54:32.072448 | orchestrator |
2025-04-09 08:54:32.072498 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-04-09 08:54:32.073533 | orchestrator | Wednesday 09 April 2025 08:54:32 +0000 (0:00:03.606) 0:00:03.753 *******
2025-04-09 08:54:32.143061 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:54:32.143411 | orchestrator |
2025-04-09 08:54:32.143846 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-04-09 08:54:32.144291 | orchestrator | Wednesday 09 April 2025 08:54:32 +0000 (0:00:00.072) 0:00:03.825 *******
2025-04-09 08:54:32.235606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-04-09 08:54:32.236316 | orchestrator |
2025-04-09 08:54:32.236380 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-04-09 08:54:32.237325 | orchestrator | Wednesday 09 April 2025 08:54:32 +0000 (0:00:00.091) 0:00:03.916 *******
2025-04-09 08:54:32.323661 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-04-09 08:54:32.324452 | orchestrator |
2025-04-09 08:54:32.325919 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-04-09 08:54:32.326592 | orchestrator | Wednesday 09 April 2025 08:54:32 +0000 (0:00:00.087) 0:00:04.004 *******
2025-04-09 08:54:33.403389 | orchestrator | ok: [testbed-manager]
2025-04-09 08:54:33.403632 | orchestrator |
2025-04-09 08:54:33.404672 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-04-09 08:54:33.405771 | orchestrator | Wednesday 09 April 2025 08:54:33 +0000 (0:00:01.078) 0:00:05.082 *******
2025-04-09 08:54:33.473953 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:54:33.475128 | orchestrator |
2025-04-09 08:54:33.475509 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-04-09 08:54:33.476849 | orchestrator | Wednesday 09 April 2025 08:54:33 +0000 (0:00:00.072) 0:00:05.155 *******
2025-04-09 08:54:33.953457 | orchestrator | ok: [testbed-manager]
2025-04-09 08:54:33.954942 | orchestrator |
2025-04-09 08:54:33.955967 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-04-09 08:54:33.957562 | orchestrator | Wednesday 09 April 2025 08:54:33 +0000 (0:00:00.478) 0:00:05.634 *******
2025-04-09 08:54:34.041269 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:54:34.042679 | orchestrator |
2025-04-09 08:54:34.042718 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-04-09 08:54:34.043034 | orchestrator | Wednesday 09 April 2025 08:54:34 +0000 (0:00:00.085) 0:00:05.720 *******
2025-04-09 08:54:34.582344 | orchestrator | changed: [testbed-manager]
2025-04-09 08:54:34.583306 | orchestrator |
2025-04-09 08:54:34.584438 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-04-09 08:54:34.585109 | orchestrator | Wednesday 09 April 2025 08:54:34 +0000 (0:00:00.543) 0:00:06.263 *******
2025-04-09 08:54:35.709669 | orchestrator | changed: [testbed-manager]
2025-04-09 08:54:35.709846 | orchestrator |
2025-04-09 08:54:35.711259 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-04-09 08:54:35.711695 | orchestrator | Wednesday 09 April 2025 08:54:35 +0000 (0:00:01.125) 0:00:07.389 *******
2025-04-09 08:54:36.661872 | orchestrator | ok: [testbed-manager]
2025-04-09 08:54:36.663804 | orchestrator |
2025-04-09 08:54:36.664681 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-04-09 08:54:36.664731 | orchestrator | Wednesday 09 April 2025 08:54:36 +0000 (0:00:00.952) 0:00:08.341 *******
2025-04-09 08:54:36.740694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-04-09 08:54:36.742161 | orchestrator |
2025-04-09 08:54:36.742322 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-04-09 08:54:36.743477 | orchestrator | Wednesday 09 April 2025 08:54:36 +0000 (0:00:00.080) 0:00:08.422 *******
2025-04-09 08:54:37.948926 | orchestrator | changed: [testbed-manager]
2025-04-09 08:54:37.949692 | orchestrator |
2025-04-09 08:54:37.950872 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:54:37.951673 | orchestrator | 2025-04-09 08:54:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 08:54:37.951984 | orchestrator | 2025-04-09 08:54:37 | INFO  | Please wait and do not abort execution.
2025-04-09 08:54:37.953471 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-09 08:54:37.953954 | orchestrator |
2025-04-09 08:54:37.954894 | orchestrator |
2025-04-09 08:54:37.955963 | orchestrator | TASKS RECAP ********************************************************************
2025-04-09 08:54:37.956666 | orchestrator | Wednesday 09 April 2025 08:54:37 +0000 (0:00:01.207) 0:00:09.629 *******
2025-04-09 08:54:37.957228 | orchestrator | ===============================================================================
2025-04-09 08:54:37.957981 | orchestrator | Gathering Facts --------------------------------------------------------- 3.61s
2025-04-09 08:54:37.958872 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.21s
2025-04-09 08:54:37.959604 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.13s
2025-04-09 08:54:37.960823 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.08s
2025-04-09 08:54:37.961090 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.95s
2025-04-09 08:54:37.961685 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.54s
2025-04-09 08:54:37.962461 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s
2025-04-09 08:54:37.963177 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2025-04-09 08:54:37.964024 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2025-04-09 08:54:37.964956 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2025-04-09 08:54:37.965846 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-04-09 08:54:37.966130 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2025-04-09 08:54:37.966632 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-04-09 08:54:38.388395 | orchestrator | + osism apply sshconfig
2025-04-09 08:54:40.024233 | orchestrator | 2025-04-09 08:54:40 | INFO  | Task 852f7421-0b4b-4a2f-9874-aba5bd5e1eaa (sshconfig) was prepared for execution.
2025-04-09 08:54:43.928273 | orchestrator | 2025-04-09 08:54:40 | INFO  | It takes a moment until task 852f7421-0b4b-4a2f-9874-aba5bd5e1eaa (sshconfig) has been started and output is visible here.
2025-04-09 08:54:43.928497 | orchestrator |
2025-04-09 08:54:43.930761 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-04-09 08:54:43.930853 | orchestrator |
2025-04-09 08:54:43.931874 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-04-09 08:54:43.932938 | orchestrator | Wednesday 09 April 2025 08:54:43 +0000 (0:00:00.165) 0:00:00.165 *******
2025-04-09 08:54:44.505147 | orchestrator | ok: [testbed-manager]
2025-04-09 08:54:44.506682 | orchestrator |
2025-04-09 08:54:44.994779 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-04-09 08:54:44.994894 | orchestrator | Wednesday 09 April 2025 08:54:44 +0000 (0:00:00.578) 0:00:00.744 *******
2025-04-09 08:54:44.994927 | orchestrator | changed: [testbed-manager]
2025-04-09 08:54:44.996323 | orchestrator |
2025-04-09 08:54:44.996864 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-04-09 08:54:44.997693 | orchestrator | Wednesday 09 April 2025 08:54:44 +0000 (0:00:00.490) 0:00:01.234 *******
2025-04-09 08:54:50.715583 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-04-09 08:54:50.715768 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-04-09 08:54:50.716186 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-04-09 08:54:50.718841 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-04-09 08:54:50.719791 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-04-09 08:54:50.720776 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-04-09 08:54:50.721895 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-04-09 08:54:50.722705 | orchestrator |
2025-04-09 08:54:50.723837 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-04-09 08:54:50.724560 | orchestrator | Wednesday 09 April 2025 08:54:50 +0000 (0:00:05.719) 0:00:06.954 *******
2025-04-09 08:54:50.783775 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:54:50.784395 | orchestrator |
2025-04-09 08:54:50.785138 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-04-09 08:54:50.785955 | orchestrator | Wednesday 09 April 2025 08:54:50 +0000 (0:00:00.071) 0:00:07.025 *******
2025-04-09 08:54:51.339092 | orchestrator | changed: [testbed-manager]
2025-04-09 08:54:51.340211 | orchestrator |
2025-04-09 08:54:51.340885 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 08:54:51.342392 | orchestrator | 2025-04-09 08:54:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 08:54:51.343441 | orchestrator | 2025-04-09 08:54:51 | INFO  | Please wait and do not abort execution.
2025-04-09 08:54:51.343477 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-09 08:54:51.343977 | orchestrator |
2025-04-09 08:54:51.344655 | orchestrator |
2025-04-09 08:54:51.345052 | orchestrator | TASKS RECAP ********************************************************************
2025-04-09 08:54:51.345821 | orchestrator | Wednesday 09 April 2025 08:54:51 +0000 (0:00:00.555) 0:00:07.581 *******
2025-04-09 08:54:51.346240 | orchestrator | ===============================================================================
2025-04-09 08:54:51.346704 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.72s
2025-04-09 08:54:51.347002 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.58s
2025-04-09 08:54:51.347485 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s
2025-04-09 08:54:51.347953 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s
2025-04-09 08:54:51.348404 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2025-04-09 08:54:51.769371 | orchestrator | + osism apply known-hosts
2025-04-09 08:54:53.414617 | orchestrator | 2025-04-09 08:54:53 | INFO  | Task b96240ea-a67d-4f5b-a829-4e27f7c9ccb4 (known-hosts) was prepared for execution.
2025-04-09 08:54:57.248681 | orchestrator | 2025-04-09 08:54:53 | INFO  | It takes a moment until task b96240ea-a67d-4f5b-a829-4e27f7c9ccb4 (known-hosts) has been started and output is visible here.
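The `osism apply known-hosts` task that follows scans every testbed host (by hostname and by `ansible_host` address) and writes the collected host keys into a known_hosts file, so later SSH connections are not prompted to trust unknown keys. A minimal sketch of that idea, assuming the usual `ssh-keyscan` behavior; the real logic lives in the osism.commons.known_hosts role, and the target file path here is illustrative:

```shell
#!/bin/sh
# Sketch only: collect rsa/ecdsa/ed25519 host keys for each testbed host
# into one file, roughly what the known_hosts role below automates.
KNOWN_HOSTS=./known_hosts.scanned
: > "$KNOWN_HOSTS"   # start from an empty file
for host in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 \
            testbed-node-3 testbed-node-4 testbed-node-5; do
    # One entry per key type, matching the three lines per host in the play output.
    # Unreachable hosts are simply skipped.
    ssh-keyscan -t rsa,ecdsa,ed25519 "$host" >> "$KNOWN_HOSTS" 2>/dev/null || true
done
sort -u -o "$KNOWN_HOSTS" "$KNOWN_HOSTS"   # drop duplicate entries
```

In the play output below you can see exactly this shape: three `changed` items per host, one each for the ssh-rsa, ecdsa-sha2-nistp256, and ssh-ed25519 keys.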
2025-04-09 08:54:57.248807 | orchestrator |
2025-04-09 08:54:57.249222 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-04-09 08:54:57.250988 | orchestrator |
2025-04-09 08:54:57.251620 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-04-09 08:54:57.252214 | orchestrator | Wednesday 09 April 2025 08:54:57 +0000 (0:00:00.170) 0:00:00.170 *******
2025-04-09 08:55:03.195143 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-04-09 08:55:03.195879 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-04-09 08:55:03.195915 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-04-09 08:55:03.195939 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-04-09 08:55:03.196393 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-04-09 08:55:03.198508 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-04-09 08:55:03.199202 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-04-09 08:55:03.199722 | orchestrator |
2025-04-09 08:55:03.200185 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-04-09 08:55:03.201258 | orchestrator | Wednesday 09 April 2025 08:55:03 +0000 (0:00:05.949) 0:00:06.119 *******
2025-04-09 08:55:03.380434 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-04-09 08:55:03.382922 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-04-09 08:55:03.384302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-04-09 08:55:03.385473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-04-09 08:55:03.386305 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-04-09 08:55:03.386960 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-04-09 08:55:03.388780 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-04-09 08:55:03.389323 | orchestrator |
2025-04-09 08:55:03.390014 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-09 08:55:03.390481 | orchestrator | Wednesday 09 April 2025 08:55:03 +0000 (0:00:00.185) 0:00:06.305 *******
2025-04-09 08:55:04.547986 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDWCpvYUQcR8sPFsGKyLumjsJg2ABAbBYGeDHZGWf0dHekI4vZHdevSCz9FgUpfa7jJT+QzA55AVcJHNv7+qFHdFthoHoJ36TkOp7VPPGvBzYOJ0vlscSyC6DfKEClVnMAYQYs7gBzcEWOnWyOYwrkn/6sKAmtLNMMg7FcxeZqGL8lPMXoFA0wolhIewBwhBWLBvCbv8/9zTGym9E4DTg4W3Oat5ovQxfj90uVleRYmOs6AEncT9xJBqD0qLFqLJh4zlr1KWjUvzAsV6f05PWmcheB0WN3rCKRmwu3j2IOrtuYNZZHzITJ4a/qWRabksxWQtAqp+lZj0B0lDg7G9ShDAzuPL8Gy+AfldAuKNIKAtjS+jqKK66D9j45KSoBk8w10nc7ZlYmYZWIx8M46ps2QWKPgRrtV6IjWwwd3traaqkF+5FqWFEBwgDM5Y2ooRdTobIsVFEi9HnYN8nqqZjLkInWTLpK/7f72Mo11gO0xmzzJtOMS1XAioLtT7Nn+rCM=)
2025-04-09 08:55:04.549757 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL0BQUUJmy27IUDZ4Ydd+lpNBEbhoZq/jPs/YKTX3Q8MoToa4w1yJskyYMwk9yz7GE/BFkNl6nmgxj3+Ma1IgXA=)
2025-04-09 08:55:04.550435 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILS08a9siJcZ9fQ1M4IC2kY9TpSasjytrG5Qkk3wLQd+)
2025-04-09 08:55:04.550477 | orchestrator |
2025-04-09 08:55:04.551136 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-09 08:55:04.551486 | orchestrator | Wednesday 09 April 2025 08:55:04 +0000 (0:00:01.168) 0:00:07.473 *******
2025-04-09 08:55:05.603681 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKE82RFmfzyRBpGC+lrD5HAmILA9TWZipcALNHbu7L1wYAzayMB6Pnye5X0gpb+p1oChq9RyrGQhFxFdHAqR7HRIZnkID1HSKtCg/8zqOnlipnVReKKR8fMjm5RR4lL8Fz6SjG2h9B2ywdXZX89r773oPAJPAIHZlK1Uf0aJUqs6oQTD5QsyvriHb4b0ngxZONYAZji/84ukfj/0mLro88DOlGu7y5cT3am3qOXi223UiAmn+ryJ0KOGxtUvx2E1YzQADm8EBSo+MDzkXnLBBJzhUWJ2Z63Qhf/UfOfHYZjMhM1/zbre0FCkWk+5hNj3ocylLdUp6Ev5nxaOJjkIYqVLvusOQS2PFfjFOVPCOm54kfqyIPzlxrVM304Vk/V+0TUe5toHw7cSH2xM9hodsU06+XVYwEVSh74Gz7J/yb/rD/878mNxRmeggJ65vcx+k8UijYlFlY/fHkA6RqBezU76KvGFgNESKvUn0AMASX5segKsfTXi68mLKQMMDMC4s=)
2025-04-09 08:55:05.604193 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCbA5pJRkr1W2uRIDZ/qlqgZG5zRmIBuh9Mv/v5B4nOpPApOpFcOO4IVWbJg0TSPlxBN031DqiM4+LW1qTkTIUQ=)
2025-04-09 08:55:05.605777 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCC7yKnjsBnmxJVuDp5f9CpmUBgUNMTuF9IT3JFfc6N)
2025-04-09 08:55:05.606449 | orchestrator |
2025-04-09 08:55:05.606488 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-09 08:55:05.607073 | orchestrator | Wednesday 09 April 2025 08:55:05 +0000 (0:00:01.055) 0:00:08.528 *******
2025-04-09 08:55:06.673722 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBEfCT7UsQt8wVCsALpkqZqVM3J4xvSJGVhSQHR2Otz2y1Loz4HA77zUdGLG4LUxGVYVSZfNdyMVIP1m/xL9KJE1aVHR64Db5Shf9xPqNlRjtX73uPPEonbZhytFwt6HCUWvK/6tAaUmH5oAl8/QcEQQpv5oR/SKT3v8XSlOa1efF/m3zHckiM6j4s75q9Hy440avlBpYZHXAFpqNg9WophUAAWa9yCetQ927jaBCxYXFPyNbjE/a28laRJEYAwolxnGZJDj1fwT8+WXLPGGLy3qvvyQDsU0DvvBbuXhpK/e1wkvK9yWSdY4a9Kt2HsPbOBnlBHMcvqOwBVAyGjq0uohrWsFF9iBdMNQIMGhAev/cRTuyabbWs19DgxLjNxRBOPGMG0EhNBvJl0ON3FpNL6PWWBQ/TTpiBWQFTQN/ZEbEijl2+AGKgKjLSudUxPieAuZETtu46ueSh+Kcfe+ulXbIgeA9TKDKxvGki9Pl+DEG41zvzvZ0rToyfradzl90=)
2025-04-09 08:55:06.674097 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMdxSyDTp5OWyk3ptxXtoKTOBaX4d0gD9CijIMdohQrcWmuZD2Cu7Hhm4gbh9V0sJXQJjmaqZXigyPamZVGFmF8=)
2025-04-09 08:55:06.674745 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB3lW6tYTZUFJiXtvdNJOqmvsQG4Qf1nq1sIZQXrGnwd)
2025-04-09 08:55:06.675503 | orchestrator |
2025-04-09 08:55:06.676193 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-09 08:55:06.676889 | orchestrator | Wednesday 09 April 2025 08:55:06 +0000 (0:00:01.070) 0:00:09.599 *******
2025-04-09 08:55:07.726391 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICVsbNAOx8Ue3twRXQowBAxvUDuPcmahDxJVBEPN9tbo)
2025-04-09 08:55:07.727187 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDkOmArDkYWZ/kSqLVEX3/UdWJENZtyPUBtGASXjCjiXdehmk6GpRTZrmTWwwe9bfRsg06KkOHOclVAfKxqAQS3TT8cnS9nK/kFaLj89pAhTG9lK+D/XynzcXeEOirAgsaVn2k7SFUNe3PSS2PwReu6A8iSnyd6GmgUzS3kspTiiVQ3SerY2VI2QOJq9QwJKh3oEHV3yxGZzGMZBn917TcqkTb0DPDPdd8ZSbe/eq7jCAQYplAvnPtOpG81g8JSyesJKoWkIntfSXdkAzphUHvJJm/qjehQcrFw8+sM5oPPO3EHEl/1yG0q91Zp06Rp+omORz1l2r0l0pBvBID9Gx14isGtCFENOQ1sbudEOWRW4j0cvbfY3pLVlNRhOVBoK++P7D48Zr6xE3Ko4FzeKC+5XCA9cUNI+9otaHjmMAXi9cVS/1GS/VJu2Vt/OvddYiug3cjrDpQcPfFjksygMpKoynS7I7jG8mBLLua8Qlaq1s1mcXFpQM88fdW0Qux0TTc=)
2025-04-09 08:55:07.727443 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHCZREGH32jicTR00eLizwPU5YIlHZkOfBQj7LCP9vmzbdktIjs2vujiKW/npt6yvAs3P5CUUaPfIsh8TWsHcoI=)
2025-04-09 08:55:07.728587 | orchestrator |
2025-04-09 08:55:07.729599 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-09 08:55:07.731045 | orchestrator | Wednesday 09 April 2025 08:55:07 +0000 (0:00:01.052) 0:00:10.652 *******
2025-04-09 08:55:08.777170 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfcU5NrRR94sATwNFxwAJUV0vZxm9Wj5z+3cwmgxudeZSSYlms9M8cHlFLRepd5JocV4MQ9QZbDKpclvoHUN+Mjie2/DGa54DprZn7UYiMdbl/3fGs5YugtKFAU2ZoMWbvTgSKLrlISQ+ZsiuMjgG8kY7kQ7xmyIFT2EbMxvrU2Bse0LHj3xTk7V3BxCQQVZ7hnwNmQNZ+S2jpRTHkfekeXPIZsMXPpnID89+7XKtAECIffGP7Crq0DNYaOV0O3O6eBHoZHMTLdliK+MZ2936Hr/l07q692tLCnOQVBroIRUPpyTt5fTtrkB8i0LfcEDvogiaHuEWiviWFiV6TNiCUsT4PWT7VH5CLV3x1eC5m3oqnw0k/JOsf9COvEiPtgOQmiWh9iNh9t1E0Umxu43jzABelcXR1uOcuJQy/ixcq0phF8mK5ebyu8WU2LNPpQslQy6o8xdCjHALQ1FdKczq0lfooB/7FRUG01rSJ5lUjDO7CmWxSE/9Fm4NRSsFOzrE=)
2025-04-09 08:55:08.778742 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJDan8VdWWZV+x4LaEWdWlDS9VvKAuosT3TlnYv4ic/W9eSg5gKFuebSbxwPTH2z/ovOrvDXPxtZAvs4bSb/iOE=)
2025-04-09 08:55:08.779798 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILFORW/R70OCPFt0l5F0nfdYxfmudTMNpmo9vFoSsF/s)
2025-04-09 08:55:08.780524 | orchestrator |
2025-04-09 08:55:08.781325 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-09 08:55:08.782307 | orchestrator | Wednesday 09 April 2025 08:55:08 +0000 (0:00:01.050) 0:00:11.702 *******
2025-04-09 08:55:09.838318 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIC1RXflz9NX0COJA4P0IGpSyasy6Al6MtR52yWLCT/s7gZ59fY7hyzv/Qqlc03aZPkianrJnZkt5H+jJFrv1VXSD/vmC3CPHbkgVaN2Z0acvPHhVHQGyiR+ZDGEsEV+CdljdRQeoSEM/YD/3e+37rjSg1tj8+3c9jK6fBwMRz7CMuX0vxBB4859TFWwWVpYivck801xWncYuebi7C9Hk/VcU9YwEXS6X6rPmEFW4hC/HrkoOV7uMAwNpWieggMSIvDNCWPePD8cRpT3D1fgWMpYxxckeRmtf+6UYn6+CAarTF8RT7nbeUT2ibVd+hFvj2D5FIRF5BBHf/jp/bTPjPPCin8AFL6pX1VjIaPSnNrxGwihLrJHMKqK8qnU7nGOe2rFsNL67iUVKX++8Akfcf9w6aFdMVkJnw+kxPN2ssNzogbsONHkZvvXjntgkSoI2q/YA50DFx025lBA2qafSMy5PIGrYIK28AP9U358GDwMlhosZ30n5Os4W54/08eW0=)
2025-04-09 08:55:09.838663 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK1ef8sbjGF7xbsuJzrP61NRWYiJfMdUPJ9uclyGhACGGDF9zs7V+P63OEoPBqr/A+qWY+7Ofa5KeWx19Ash+MI=)
2025-04-09 08:55:09.839378 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFX7Kw98NtognVEMbFMbVR9MzxYCZAfyYMPBb+YMipAj)
2025-04-09 08:55:09.840197 | orchestrator |
2025-04-09 08:55:09.840734 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-09 08:55:09.841675 | orchestrator | Wednesday 09 April 2025 08:55:09 +0000 (0:00:01.061) 0:00:12.763 *******
2025-04-09 08:55:10.927692 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCe6CNFri8elDx3dDIebxwaluIjlofQFif5p+yxipPEWtKdiUByEYPIlB2Ah1r+K2jVpmvhqodGOUc6tbTdzvk1cvjE800tEqncZDH1i/wfPnR9iEVzzukEEckAWXTFtyHo1J4ksKXiW76esz2rP7of9ImiDamYAUbLQ4KU5oJR843qDqdWtVG/B5AaT7vUaxbb65t0FqMS9hYLkTcMthneU17Ww0gmzLbbcFwLWqWadcDC8EwNADatyKAiRFV/PMhyoJLDAPXbwnLBTnStQfTtKLCm1eSqqspSTNcAD06Pig+UV69oU00D0Dj3Brw3zo8rpuBwp+vdHdHIpcVtW4ZpZR4GgZwN3yt1t1ASM1vl+E7DA2sTed+kCRPrO2DIEd4RUc0dZqvSKp/Y+BAc/u5eaVU5G1qnpdXV+zJyghwCmnzrRTvGLVE1G5U3jQVjJ6I3gTdr1WTmBvAmSHh4YIkRnvT3I1UqdmtclsQLn0mUp1N9Zd9dJJy2fZLZXO2q1Ek=)
2025-04-09 08:55:10.928065 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF8XJaqJGDBJq8fVaIn6VoH37fu/9O/MmsEx73foZENMDOvURFwQYbuwr9+mkqTx2OA9BzC24wLsVT0RRGg1uAE=)
2025-04-09 08:55:10.929367 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIsQAVraJorxI2JRbQDZl10EP+1bVGdFFO7KWECPuu+1)
2025-04-09 08:55:10.930237 | orchestrator |
2025-04-09 08:55:10.930960 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-04-09 08:55:10.931446 | orchestrator | Wednesday 09 April 2025 08:55:10 +0000 (0:00:01.089) 0:00:13.852 *******
2025-04-09 08:55:16.372551 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-04-09 08:55:16.372845 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-04-09 08:55:16.372886 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-04-09 08:55:16.373573 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-04-09 08:55:16.374585 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-04-09 08:55:16.376104 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-04-09 08:55:16.376752 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-04-09 08:55:16.377383 | orchestrator |
2025-04-09 08:55:16.378106 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-04-09 08:55:16.378763 | orchestrator | Wednesday 09 April 2025 08:55:16 +0000 (0:00:05.443) 0:00:19.296 *******
2025-04-09 08:55:16.538983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-04-09 08:55:16.539763 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-04-09 08:55:16.540560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-04-09 08:55:16.541376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-04-09 08:55:16.541782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-04-09 08:55:16.542541 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-04-09 08:55:16.543107 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-04-09 08:55:16.543959 | orchestrator |
2025-04-09 08:55:16.544212 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-09 08:55:16.544929 | orchestrator | Wednesday 09 April 2025 08:55:16 +0000 (0:00:00.168) 0:00:19.465 *******
2025-04-09 08:55:17.598147 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDWCpvYUQcR8sPFsGKyLumjsJg2ABAbBYGeDHZGWf0dHekI4vZHdevSCz9FgUpfa7jJT+QzA55AVcJHNv7+qFHdFthoHoJ36TkOp7VPPGvBzYOJ0vlscSyC6DfKEClVnMAYQYs7gBzcEWOnWyOYwrkn/6sKAmtLNMMg7FcxeZqGL8lPMXoFA0wolhIewBwhBWLBvCbv8/9zTGym9E4DTg4W3Oat5ovQxfj90uVleRYmOs6AEncT9xJBqD0qLFqLJh4zlr1KWjUvzAsV6f05PWmcheB0WN3rCKRmwu3j2IOrtuYNZZHzITJ4a/qWRabksxWQtAqp+lZj0B0lDg7G9ShDAzuPL8Gy+AfldAuKNIKAtjS+jqKK66D9j45KSoBk8w10nc7ZlYmYZWIx8M46ps2QWKPgRrtV6IjWwwd3traaqkF+5FqWFEBwgDM5Y2ooRdTobIsVFEi9HnYN8nqqZjLkInWTLpK/7f72Mo11gO0xmzzJtOMS1XAioLtT7Nn+rCM=)
2025-04-09 08:55:17.598613 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL0BQUUJmy27IUDZ4Ydd+lpNBEbhoZq/jPs/YKTX3Q8MoToa4w1yJskyYMwk9yz7GE/BFkNl6nmgxj3+Ma1IgXA=)
2025-04-09 08:55:17.599204 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILS08a9siJcZ9fQ1M4IC2kY9TpSasjytrG5Qkk3wLQd+)
2025-04-09 08:55:17.599661 | orchestrator |
2025-04-09 08:55:17.600301 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-09 08:55:17.600758 | orchestrator | Wednesday 09 April 2025 08:55:17 +0000 (0:00:01.059) 0:00:20.524 *******
2025-04-09 08:55:18.664094 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKE82RFmfzyRBpGC+lrD5HAmILA9TWZipcALNHbu7L1wYAzayMB6Pnye5X0gpb+p1oChq9RyrGQhFxFdHAqR7HRIZnkID1HSKtCg/8zqOnlipnVReKKR8fMjm5RR4lL8Fz6SjG2h9B2ywdXZX89r773oPAJPAIHZlK1Uf0aJUqs6oQTD5QsyvriHb4b0ngxZONYAZji/84ukfj/0mLro88DOlGu7y5cT3am3qOXi223UiAmn+ryJ0KOGxtUvx2E1YzQADm8EBSo+MDzkXnLBBJzhUWJ2Z63Qhf/UfOfHYZjMhM1/zbre0FCkWk+5hNj3ocylLdUp6Ev5nxaOJjkIYqVLvusOQS2PFfjFOVPCOm54kfqyIPzlxrVM304Vk/V+0TUe5toHw7cSH2xM9hodsU06+XVYwEVSh74Gz7J/yb/rD/878mNxRmeggJ65vcx+k8UijYlFlY/fHkA6RqBezU76KvGFgNESKvUn0AMASX5segKsfTXi68mLKQMMDMC4s=)
2025-04-09 08:55:18.664290 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCbA5pJRkr1W2uRIDZ/qlqgZG5zRmIBuh9Mv/v5B4nOpPApOpFcOO4IVWbJg0TSPlxBN031DqiM4+LW1qTkTIUQ=)
2025-04-09 08:55:18.664402 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCC7yKnjsBnmxJVuDp5f9CpmUBgUNMTuF9IT3JFfc6N)
2025-04-09 08:55:18.664776 | orchestrator |
2025-04-09 08:55:18.665219 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-04-09 08:55:18.665714 | orchestrator | Wednesday 09 April 2025 08:55:18 +0000 (0:00:01.065) 0:00:21.589 *******
2025-04-09 08:55:19.732624 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMdxSyDTp5OWyk3ptxXtoKTOBaX4d0gD9CijIMdohQrcWmuZD2Cu7Hhm4gbh9V0sJXQJjmaqZXigyPamZVGFmF8=)
2025-04-09 08:55:19.733176 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQDBEfCT7UsQt8wVCsALpkqZqVM3J4xvSJGVhSQHR2Otz2y1Loz4HA77zUdGLG4LUxGVYVSZfNdyMVIP1m/xL9KJE1aVHR64Db5Shf9xPqNlRjtX73uPPEonbZhytFwt6HCUWvK/6tAaUmH5oAl8/QcEQQpv5oR/SKT3v8XSlOa1efF/m3zHckiM6j4s75q9Hy440avlBpYZHXAFpqNg9WophUAAWa9yCetQ927jaBCxYXFPyNbjE/a28laRJEYAwolxnGZJDj1fwT8+WXLPGGLy3qvvyQDsU0DvvBbuXhpK/e1wkvK9yWSdY4a9Kt2HsPbOBnlBHMcvqOwBVAyGjq0uohrWsFF9iBdMNQIMGhAev/cRTuyabbWs19DgxLjNxRBOPGMG0EhNBvJl0ON3FpNL6PWWBQ/TTpiBWQFTQN/ZEbEijl2+AGKgKjLSudUxPieAuZETtu46ueSh+Kcfe+ulXbIgeA9TKDKxvGki9Pl+DEG41zvzvZ0rToyfradzl90=) 2025-04-09 08:55:19.735492 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB3lW6tYTZUFJiXtvdNJOqmvsQG4Qf1nq1sIZQXrGnwd) 2025-04-09 08:55:19.736722 | orchestrator | 2025-04-09 08:55:19.739436 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-09 08:55:19.740272 | orchestrator | Wednesday 09 April 2025 08:55:19 +0000 (0:00:01.067) 0:00:22.657 ******* 2025-04-09 08:55:20.800977 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDkOmArDkYWZ/kSqLVEX3/UdWJENZtyPUBtGASXjCjiXdehmk6GpRTZrmTWwwe9bfRsg06KkOHOclVAfKxqAQS3TT8cnS9nK/kFaLj89pAhTG9lK+D/XynzcXeEOirAgsaVn2k7SFUNe3PSS2PwReu6A8iSnyd6GmgUzS3kspTiiVQ3SerY2VI2QOJq9QwJKh3oEHV3yxGZzGMZBn917TcqkTb0DPDPdd8ZSbe/eq7jCAQYplAvnPtOpG81g8JSyesJKoWkIntfSXdkAzphUHvJJm/qjehQcrFw8+sM5oPPO3EHEl/1yG0q91Zp06Rp+omORz1l2r0l0pBvBID9Gx14isGtCFENOQ1sbudEOWRW4j0cvbfY3pLVlNRhOVBoK++P7D48Zr6xE3Ko4FzeKC+5XCA9cUNI+9otaHjmMAXi9cVS/1GS/VJu2Vt/OvddYiug3cjrDpQcPfFjksygMpKoynS7I7jG8mBLLua8Qlaq1s1mcXFpQM88fdW0Qux0TTc=) 2025-04-09 08:55:20.802067 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHCZREGH32jicTR00eLizwPU5YIlHZkOfBQj7LCP9vmzbdktIjs2vujiKW/npt6yvAs3P5CUUaPfIsh8TWsHcoI=) 2025-04-09 08:55:20.802429 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICVsbNAOx8Ue3twRXQowBAxvUDuPcmahDxJVBEPN9tbo) 2025-04-09 08:55:20.803214 | orchestrator | 2025-04-09 08:55:20.803767 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-09 08:55:20.804533 | orchestrator | Wednesday 09 April 2025 08:55:20 +0000 (0:00:01.069) 0:00:23.726 ******* 2025-04-09 08:55:21.882163 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfcU5NrRR94sATwNFxwAJUV0vZxm9Wj5z+3cwmgxudeZSSYlms9M8cHlFLRepd5JocV4MQ9QZbDKpclvoHUN+Mjie2/DGa54DprZn7UYiMdbl/3fGs5YugtKFAU2ZoMWbvTgSKLrlISQ+ZsiuMjgG8kY7kQ7xmyIFT2EbMxvrU2Bse0LHj3xTk7V3BxCQQVZ7hnwNmQNZ+S2jpRTHkfekeXPIZsMXPpnID89+7XKtAECIffGP7Crq0DNYaOV0O3O6eBHoZHMTLdliK+MZ2936Hr/l07q692tLCnOQVBroIRUPpyTt5fTtrkB8i0LfcEDvogiaHuEWiviWFiV6TNiCUsT4PWT7VH5CLV3x1eC5m3oqnw0k/JOsf9COvEiPtgOQmiWh9iNh9t1E0Umxu43jzABelcXR1uOcuJQy/ixcq0phF8mK5ebyu8WU2LNPpQslQy6o8xdCjHALQ1FdKczq0lfooB/7FRUG01rSJ5lUjDO7CmWxSE/9Fm4NRSsFOzrE=) 2025-04-09 08:55:21.883273 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJDan8VdWWZV+x4LaEWdWlDS9VvKAuosT3TlnYv4ic/W9eSg5gKFuebSbxwPTH2z/ovOrvDXPxtZAvs4bSb/iOE=) 2025-04-09 08:55:21.883317 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILFORW/R70OCPFt0l5F0nfdYxfmudTMNpmo9vFoSsF/s) 2025-04-09 08:55:21.884226 | orchestrator | 2025-04-09 08:55:21.884839 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-09 08:55:21.886358 | orchestrator | Wednesday 09 April 2025 08:55:21 +0000 (0:00:01.079) 0:00:24.806 ******* 2025-04-09 08:55:22.971213 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDIC1RXflz9NX0COJA4P0IGpSyasy6Al6MtR52yWLCT/s7gZ59fY7hyzv/Qqlc03aZPkianrJnZkt5H+jJFrv1VXSD/vmC3CPHbkgVaN2Z0acvPHhVHQGyiR+ZDGEsEV+CdljdRQeoSEM/YD/3e+37rjSg1tj8+3c9jK6fBwMRz7CMuX0vxBB4859TFWwWVpYivck801xWncYuebi7C9Hk/VcU9YwEXS6X6rPmEFW4hC/HrkoOV7uMAwNpWieggMSIvDNCWPePD8cRpT3D1fgWMpYxxckeRmtf+6UYn6+CAarTF8RT7nbeUT2ibVd+hFvj2D5FIRF5BBHf/jp/bTPjPPCin8AFL6pX1VjIaPSnNrxGwihLrJHMKqK8qnU7nGOe2rFsNL67iUVKX++8Akfcf9w6aFdMVkJnw+kxPN2ssNzogbsONHkZvvXjntgkSoI2q/YA50DFx025lBA2qafSMy5PIGrYIK28AP9U358GDwMlhosZ30n5Os4W54/08eW0=) 2025-04-09 08:55:22.971498 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK1ef8sbjGF7xbsuJzrP61NRWYiJfMdUPJ9uclyGhACGGDF9zs7V+P63OEoPBqr/A+qWY+7Ofa5KeWx19Ash+MI=) 2025-04-09 08:55:22.972439 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFX7Kw98NtognVEMbFMbVR9MzxYCZAfyYMPBb+YMipAj) 2025-04-09 08:55:22.973591 | orchestrator | 2025-04-09 08:55:22.974378 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-09 08:55:22.975407 | orchestrator | Wednesday 09 April 2025 08:55:22 +0000 (0:00:01.089) 0:00:25.896 ******* 2025-04-09 08:55:24.085746 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCe6CNFri8elDx3dDIebxwaluIjlofQFif5p+yxipPEWtKdiUByEYPIlB2Ah1r+K2jVpmvhqodGOUc6tbTdzvk1cvjE800tEqncZDH1i/wfPnR9iEVzzukEEckAWXTFtyHo1J4ksKXiW76esz2rP7of9ImiDamYAUbLQ4KU5oJR843qDqdWtVG/B5AaT7vUaxbb65t0FqMS9hYLkTcMthneU17Ww0gmzLbbcFwLWqWadcDC8EwNADatyKAiRFV/PMhyoJLDAPXbwnLBTnStQfTtKLCm1eSqqspSTNcAD06Pig+UV69oU00D0Dj3Brw3zo8rpuBwp+vdHdHIpcVtW4ZpZR4GgZwN3yt1t1ASM1vl+E7DA2sTed+kCRPrO2DIEd4RUc0dZqvSKp/Y+BAc/u5eaVU5G1qnpdXV+zJyghwCmnzrRTvGLVE1G5U3jQVjJ6I3gTdr1WTmBvAmSHh4YIkRnvT3I1UqdmtclsQLn0mUp1N9Zd9dJJy2fZLZXO2q1Ek=) 2025-04-09 08:55:24.086730 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF8XJaqJGDBJq8fVaIn6VoH37fu/9O/MmsEx73foZENMDOvURFwQYbuwr9+mkqTx2OA9BzC24wLsVT0RRGg1uAE=) 2025-04-09 08:55:24.087099 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIsQAVraJorxI2JRbQDZl10EP+1bVGdFFO7KWECPuu+1) 2025-04-09 08:55:24.087663 | orchestrator | 2025-04-09 08:55:24.088283 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-04-09 08:55:24.088932 | orchestrator | Wednesday 09 April 2025 08:55:24 +0000 (0:00:01.111) 0:00:27.008 ******* 2025-04-09 08:55:24.486460 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-04-09 08:55:24.486841 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-04-09 08:55:24.487509 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-04-09 08:55:24.488208 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-04-09 08:55:24.489968 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-04-09 08:55:24.490197 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-04-09 08:55:24.490225 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-04-09 08:55:24.490244 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:55:24.490703 | orchestrator | 2025-04-09 08:55:24.491016 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-04-09 08:55:24.491436 | orchestrator | Wednesday 09 April 2025 08:55:24 +0000 (0:00:00.405) 0:00:27.413 ******* 2025-04-09 08:55:24.556050 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:55:24.556309 | orchestrator | 2025-04-09 08:55:24.557516 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-04-09 08:55:24.558323 | orchestrator | Wednesday 09 April 2025 
08:55:24 +0000 (0:00:00.068) 0:00:27.481 ******* 2025-04-09 08:55:24.624451 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:55:24.624565 | orchestrator | 2025-04-09 08:55:24.624590 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-04-09 08:55:24.625591 | orchestrator | Wednesday 09 April 2025 08:55:24 +0000 (0:00:00.067) 0:00:27.549 ******* 2025-04-09 08:55:25.159600 | orchestrator | changed: [testbed-manager] 2025-04-09 08:55:25.160259 | orchestrator | 2025-04-09 08:55:25.162417 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 08:55:25.163087 | orchestrator | 2025-04-09 08:55:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-09 08:55:25.166188 | orchestrator | 2025-04-09 08:55:25 | INFO  | Please wait and do not abort execution. 2025-04-09 08:55:25.166324 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-09 08:55:25.166424 | orchestrator | 2025-04-09 08:55:25.166486 | orchestrator | 2025-04-09 08:55:25.166543 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-09 08:55:25.166627 | orchestrator | Wednesday 09 April 2025 08:55:25 +0000 (0:00:00.538) 0:00:28.087 ******* 2025-04-09 08:55:25.170403 | orchestrator | =============================================================================== 2025-04-09 08:55:25.170479 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.95s 2025-04-09 08:55:25.170497 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.44s 2025-04-09 08:55:25.170512 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-04-09 08:55:25.170526 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries 
----------- 1.11s 2025-04-09 08:55:25.170540 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-04-09 08:55:25.170572 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-04-09 08:55:25.170587 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-04-09 08:55:25.170625 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-04-09 08:55:25.170640 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-04-09 08:55:25.170654 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-04-09 08:55:25.170668 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-04-09 08:55:25.170682 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-04-09 08:55:25.170696 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-04-09 08:55:25.170730 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-04-09 08:55:25.170749 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-04-09 08:55:25.170972 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-04-09 08:55:25.171003 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.54s 2025-04-09 08:55:25.171201 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.41s 2025-04-09 08:55:25.171563 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s 2025-04-09 08:55:25.171756 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts 
entries for all hosts with ansible_host --- 0.17s 2025-04-09 08:55:25.602595 | orchestrator | + osism apply squid 2025-04-09 08:55:27.223254 | orchestrator | 2025-04-09 08:55:27 | INFO  | Task a044b26a-0a0a-4e56-b7b2-f0b804560c4b (squid) was prepared for execution. 2025-04-09 08:55:31.039410 | orchestrator | 2025-04-09 08:55:27 | INFO  | It takes a moment until task a044b26a-0a0a-4e56-b7b2-f0b804560c4b (squid) has been started and output is visible here. 2025-04-09 08:55:31.039544 | orchestrator | 2025-04-09 08:55:31.041452 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-04-09 08:55:31.041617 | orchestrator | 2025-04-09 08:55:31.041647 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-04-09 08:55:31.043129 | orchestrator | Wednesday 09 April 2025 08:55:31 +0000 (0:00:00.174) 0:00:00.174 ******* 2025-04-09 08:55:31.140695 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-04-09 08:55:31.143558 | orchestrator | 2025-04-09 08:55:31.143593 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-04-09 08:55:31.144509 | orchestrator | Wednesday 09 April 2025 08:55:31 +0000 (0:00:00.103) 0:00:00.278 ******* 2025-04-09 08:55:32.549435 | orchestrator | ok: [testbed-manager] 2025-04-09 08:55:32.551917 | orchestrator | 2025-04-09 08:55:32.552276 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-04-09 08:55:32.552308 | orchestrator | Wednesday 09 April 2025 08:55:32 +0000 (0:00:01.407) 0:00:01.686 ******* 2025-04-09 08:55:33.757072 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-04-09 08:55:33.757447 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-04-09 08:55:33.758495 | orchestrator | 
ok: [testbed-manager] => (item=/opt/squid) 2025-04-09 08:55:33.759501 | orchestrator | 2025-04-09 08:55:33.760103 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-04-09 08:55:33.760829 | orchestrator | Wednesday 09 April 2025 08:55:33 +0000 (0:00:01.206) 0:00:02.892 ******* 2025-04-09 08:55:34.818669 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-04-09 08:55:34.819134 | orchestrator | 2025-04-09 08:55:34.820024 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-04-09 08:55:34.820786 | orchestrator | Wednesday 09 April 2025 08:55:34 +0000 (0:00:01.060) 0:00:03.953 ******* 2025-04-09 08:55:35.162725 | orchestrator | ok: [testbed-manager] 2025-04-09 08:55:35.163171 | orchestrator | 2025-04-09 08:55:36.066238 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-04-09 08:55:36.066395 | orchestrator | Wednesday 09 April 2025 08:55:35 +0000 (0:00:00.347) 0:00:04.300 ******* 2025-04-09 08:55:36.066434 | orchestrator | changed: [testbed-manager] 2025-04-09 08:55:36.066558 | orchestrator | 2025-04-09 08:55:36.066576 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-04-09 08:55:36.066806 | orchestrator | Wednesday 09 April 2025 08:55:36 +0000 (0:00:00.902) 0:00:05.203 ******* 2025-04-09 08:56:08.094364 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-04-09 08:56:08.094609 | orchestrator | ok: [testbed-manager] 2025-04-09 08:56:08.094670 | orchestrator | 2025-04-09 08:56:08.094688 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-04-09 08:56:08.094727 | orchestrator | Wednesday 09 April 2025 08:56:08 +0000 (0:00:32.021) 0:00:37.225 ******* 2025-04-09 08:56:20.448568 | orchestrator | changed: [testbed-manager] 2025-04-09 08:57:20.531590 | orchestrator | 2025-04-09 08:57:20.531728 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-04-09 08:57:20.531749 | orchestrator | Wednesday 09 April 2025 08:56:20 +0000 (0:00:12.356) 0:00:49.582 ******* 2025-04-09 08:57:20.531782 | orchestrator | Pausing for 60 seconds 2025-04-09 08:57:20.599401 | orchestrator | changed: [testbed-manager] 2025-04-09 08:57:20.599450 | orchestrator | 2025-04-09 08:57:20.599466 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-04-09 08:57:20.599481 | orchestrator | Wednesday 09 April 2025 08:57:20 +0000 (0:01:00.079) 0:01:49.661 ******* 2025-04-09 08:57:20.599506 | orchestrator | ok: [testbed-manager] 2025-04-09 08:57:20.599727 | orchestrator | 2025-04-09 08:57:20.600856 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-04-09 08:57:20.601354 | orchestrator | Wednesday 09 April 2025 08:57:20 +0000 (0:00:00.072) 0:01:49.734 ******* 2025-04-09 08:57:21.213897 | orchestrator | changed: [testbed-manager] 2025-04-09 08:57:21.214391 | orchestrator | 2025-04-09 08:57:21.214442 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 08:57:21.214825 | orchestrator | 2025-04-09 08:57:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-04-09 08:57:21.215400 | orchestrator | 2025-04-09 08:57:21 | INFO  | Please wait and do not abort execution. 2025-04-09 08:57:21.216755 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 08:57:21.217135 | orchestrator | 2025-04-09 08:57:21.217937 | orchestrator | 2025-04-09 08:57:21.218415 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-09 08:57:21.218987 | orchestrator | Wednesday 09 April 2025 08:57:21 +0000 (0:00:00.613) 0:01:50.347 ******* 2025-04-09 08:57:21.220037 | orchestrator | =============================================================================== 2025-04-09 08:57:21.220616 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-04-09 08:57:21.221121 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.02s 2025-04-09 08:57:21.221762 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.36s 2025-04-09 08:57:21.221894 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.41s 2025-04-09 08:57:21.222203 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.21s 2025-04-09 08:57:21.222755 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.06s 2025-04-09 08:57:21.223163 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.90s 2025-04-09 08:57:21.223475 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.61s 2025-04-09 08:57:21.223842 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2025-04-09 08:57:21.224200 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-04-09 08:57:21.224626 | orchestrator | 
osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-04-09 08:57:21.719556 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-04-09 08:57:21.720342 | orchestrator | ++ semver latest 9.0.0 2025-04-09 08:57:21.781025 | orchestrator | + [[ -1 -lt 0 ]] 2025-04-09 08:57:21.781828 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-04-09 08:57:21.781861 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-04-09 08:57:23.436922 | orchestrator | 2025-04-09 08:57:23 | INFO  | Task 8705bbca-08c9-463e-8104-af82348eec5d (operator) was prepared for execution. 2025-04-09 08:57:27.274419 | orchestrator | 2025-04-09 08:57:23 | INFO  | It takes a moment until task 8705bbca-08c9-463e-8104-af82348eec5d (operator) has been started and output is visible here. 2025-04-09 08:57:27.274598 | orchestrator | 2025-04-09 08:57:27.274676 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-04-09 08:57:27.275443 | orchestrator | 2025-04-09 08:57:27.277367 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-09 08:57:31.574771 | orchestrator | Wednesday 09 April 2025 08:57:27 +0000 (0:00:00.147) 0:00:00.147 ******* 2025-04-09 08:57:31.574901 | orchestrator | ok: [testbed-node-0] 2025-04-09 08:57:31.575656 | orchestrator | ok: [testbed-node-2] 2025-04-09 08:57:31.575684 | orchestrator | ok: [testbed-node-5] 2025-04-09 08:57:31.575702 | orchestrator | ok: [testbed-node-3] 2025-04-09 08:57:31.575756 | orchestrator | ok: [testbed-node-4] 2025-04-09 08:57:31.575771 | orchestrator | ok: [testbed-node-1] 2025-04-09 08:57:31.575786 | orchestrator | 2025-04-09 08:57:31.576113 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-04-09 08:57:31.576604 | orchestrator | Wednesday 09 April 2025 08:57:31 +0000 (0:00:04.295) 0:00:04.442 ******* 2025-04-09 08:57:32.346548 | orchestrator | ok: [testbed-node-2] 
2025-04-09 08:57:32.346719 | orchestrator | ok: [testbed-node-0] 2025-04-09 08:57:32.347370 | orchestrator | ok: [testbed-node-4] 2025-04-09 08:57:32.348680 | orchestrator | ok: [testbed-node-3] 2025-04-09 08:57:32.349990 | orchestrator | ok: [testbed-node-1] 2025-04-09 08:57:32.350352 | orchestrator | ok: [testbed-node-5] 2025-04-09 08:57:32.351031 | orchestrator | 2025-04-09 08:57:32.351465 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-04-09 08:57:32.351967 | orchestrator | 2025-04-09 08:57:32.352537 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-04-09 08:57:32.353063 | orchestrator | Wednesday 09 April 2025 08:57:32 +0000 (0:00:00.777) 0:00:05.220 ******* 2025-04-09 08:57:32.447592 | orchestrator | ok: [testbed-node-0] 2025-04-09 08:57:32.473970 | orchestrator | ok: [testbed-node-1] 2025-04-09 08:57:32.511833 | orchestrator | ok: [testbed-node-2] 2025-04-09 08:57:32.571130 | orchestrator | ok: [testbed-node-3] 2025-04-09 08:57:32.575725 | orchestrator | ok: [testbed-node-4] 2025-04-09 08:57:32.577001 | orchestrator | ok: [testbed-node-5] 2025-04-09 08:57:32.577025 | orchestrator | 2025-04-09 08:57:32.577040 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-04-09 08:57:32.577058 | orchestrator | Wednesday 09 April 2025 08:57:32 +0000 (0:00:00.226) 0:00:05.446 ******* 2025-04-09 08:57:32.639313 | orchestrator | ok: [testbed-node-0] 2025-04-09 08:57:32.664229 | orchestrator | ok: [testbed-node-1] 2025-04-09 08:57:32.693248 | orchestrator | ok: [testbed-node-2] 2025-04-09 08:57:32.759456 | orchestrator | ok: [testbed-node-3] 2025-04-09 08:57:32.760769 | orchestrator | ok: [testbed-node-4] 2025-04-09 08:57:32.767234 | orchestrator | ok: [testbed-node-5] 2025-04-09 08:57:33.408983 | orchestrator | 2025-04-09 08:57:33.409065 | orchestrator | TASK [osism.commons.operator : Create operator group] 
************************** 2025-04-09 08:57:33.409081 | orchestrator | Wednesday 09 April 2025 08:57:32 +0000 (0:00:00.187) 0:00:05.633 ******* 2025-04-09 08:57:33.409106 | orchestrator | changed: [testbed-node-0] 2025-04-09 08:57:33.409648 | orchestrator | changed: [testbed-node-4] 2025-04-09 08:57:33.410599 | orchestrator | changed: [testbed-node-1] 2025-04-09 08:57:33.411474 | orchestrator | changed: [testbed-node-2] 2025-04-09 08:57:33.412019 | orchestrator | changed: [testbed-node-3] 2025-04-09 08:57:33.412687 | orchestrator | changed: [testbed-node-5] 2025-04-09 08:57:33.413146 | orchestrator | 2025-04-09 08:57:33.413704 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-04-09 08:57:33.414172 | orchestrator | Wednesday 09 April 2025 08:57:33 +0000 (0:00:00.648) 0:00:06.282 ******* 2025-04-09 08:57:34.235176 | orchestrator | changed: [testbed-node-5] 2025-04-09 08:57:34.235753 | orchestrator | changed: [testbed-node-4] 2025-04-09 08:57:34.237054 | orchestrator | changed: [testbed-node-2] 2025-04-09 08:57:34.237868 | orchestrator | changed: [testbed-node-3] 2025-04-09 08:57:34.238873 | orchestrator | changed: [testbed-node-1] 2025-04-09 08:57:34.239725 | orchestrator | changed: [testbed-node-0] 2025-04-09 08:57:34.240427 | orchestrator | 2025-04-09 08:57:34.241220 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-04-09 08:57:34.241871 | orchestrator | Wednesday 09 April 2025 08:57:34 +0000 (0:00:00.826) 0:00:07.109 ******* 2025-04-09 08:57:35.440724 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-04-09 08:57:35.444787 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-04-09 08:57:35.447026 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-04-09 08:57:35.447993 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-04-09 08:57:35.448809 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-04-09 
08:57:35.449760 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-04-09 08:57:35.450845 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-04-09 08:57:35.451406 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-04-09 08:57:35.452351 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-04-09 08:57:35.453459 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-04-09 08:57:35.454109 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-04-09 08:57:35.454938 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-04-09 08:57:35.455751 | orchestrator | 2025-04-09 08:57:35.456366 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-04-09 08:57:35.456842 | orchestrator | Wednesday 09 April 2025 08:57:35 +0000 (0:00:01.204) 0:00:08.313 ******* 2025-04-09 08:57:36.693512 | orchestrator | changed: [testbed-node-2] 2025-04-09 08:57:36.694535 | orchestrator | changed: [testbed-node-1] 2025-04-09 08:57:36.695448 | orchestrator | changed: [testbed-node-4] 2025-04-09 08:57:36.696591 | orchestrator | changed: [testbed-node-5] 2025-04-09 08:57:36.697763 | orchestrator | changed: [testbed-node-3] 2025-04-09 08:57:36.699015 | orchestrator | changed: [testbed-node-0] 2025-04-09 08:57:36.700796 | orchestrator | 2025-04-09 08:57:36.702229 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-04-09 08:57:36.703225 | orchestrator | Wednesday 09 April 2025 08:57:36 +0000 (0:00:01.251) 0:00:09.564 ******* 2025-04-09 08:57:37.924493 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-04-09 08:57:37.929389 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-04-09 08:57:37.930431 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-04-09 08:57:37.998101 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-04-09 08:57:37.998565 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-04-09 08:57:37.999469 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-04-09 08:57:38.000671 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-04-09 08:57:38.001763 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-04-09 08:57:38.002060 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-04-09 08:57:38.003773 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-04-09 08:57:38.003965 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-04-09 08:57:38.004450 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-04-09 08:57:38.004479 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-04-09 08:57:38.005087 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-04-09 08:57:38.005905 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-04-09 08:57:38.006547 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-04-09 08:57:38.007031 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-04-09 08:57:38.007783 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-04-09 08:57:38.009060 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-04-09 08:57:38.009465 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-04-09 08:57:38.009889 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-04-09 08:57:38.010419 | 
orchestrator | 2025-04-09 08:57:38.010982 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-04-09 08:57:38.011208 | orchestrator | Wednesday 09 April 2025 08:57:37 +0000 (0:00:01.308) 0:00:10.873 ******* 2025-04-09 08:57:38.579832 | orchestrator | changed: [testbed-node-0] 2025-04-09 08:57:38.580021 | orchestrator | changed: [testbed-node-2] 2025-04-09 08:57:38.581132 | orchestrator | changed: [testbed-node-5] 2025-04-09 08:57:38.581876 | orchestrator | changed: [testbed-node-3] 2025-04-09 08:57:38.582614 | orchestrator | changed: [testbed-node-1] 2025-04-09 08:57:38.583213 | orchestrator | changed: [testbed-node-4] 2025-04-09 08:57:38.584885 | orchestrator | 2025-04-09 08:57:38.586167 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-04-09 08:57:38.586771 | orchestrator | Wednesday 09 April 2025 08:57:38 +0000 (0:00:00.579) 0:00:11.453 ******* 2025-04-09 08:57:38.663117 | orchestrator | skipping: [testbed-node-0] 2025-04-09 08:57:38.693794 | orchestrator | skipping: [testbed-node-1] 2025-04-09 08:57:38.719682 | orchestrator | skipping: [testbed-node-2] 2025-04-09 08:57:38.770352 | orchestrator | skipping: [testbed-node-3] 2025-04-09 08:57:38.770467 | orchestrator | skipping: [testbed-node-4] 2025-04-09 08:57:38.770574 | orchestrator | skipping: [testbed-node-5] 2025-04-09 08:57:38.771252 | orchestrator | 2025-04-09 08:57:38.771617 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-04-09 08:57:38.772008 | orchestrator | Wednesday 09 April 2025 08:57:38 +0000 (0:00:00.190) 0:00:11.644 ******* 2025-04-09 08:57:39.494968 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-09 08:57:39.496100 | orchestrator | changed: [testbed-node-5] 2025-04-09 08:57:39.497451 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-04-09 08:57:39.498181 | orchestrator | changed: [testbed-node-1] 2025-04-09 
08:57:39.499070 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-09 08:57:39.500443 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-04-09 08:57:39.501219 | orchestrator | changed: [testbed-node-4] 2025-04-09 08:57:39.502539 | orchestrator | changed: [testbed-node-2] 2025-04-09 08:57:39.503058 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-09 08:57:39.503910 | orchestrator | changed: [testbed-node-0] 2025-04-09 08:57:39.504318 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-09 08:57:39.505080 | orchestrator | changed: [testbed-node-3] 2025-04-09 08:57:39.505657 | orchestrator | 2025-04-09 08:57:39.506315 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-04-09 08:57:39.506689 | orchestrator | Wednesday 09 April 2025 08:57:39 +0000 (0:00:00.723) 0:00:12.367 ******* 2025-04-09 08:57:39.550100 | orchestrator | skipping: [testbed-node-0] 2025-04-09 08:57:39.577977 | orchestrator | skipping: [testbed-node-1] 2025-04-09 08:57:39.616782 | orchestrator | skipping: [testbed-node-2] 2025-04-09 08:57:39.647388 | orchestrator | skipping: [testbed-node-3] 2025-04-09 08:57:39.684969 | orchestrator | skipping: [testbed-node-4] 2025-04-09 08:57:39.686350 | orchestrator | skipping: [testbed-node-5] 2025-04-09 08:57:39.686501 | orchestrator | 2025-04-09 08:57:39.687633 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-04-09 08:57:39.688333 | orchestrator | Wednesday 09 April 2025 08:57:39 +0000 (0:00:00.192) 0:00:12.560 ******* 2025-04-09 08:57:39.774162 | orchestrator | skipping: [testbed-node-0] 2025-04-09 08:57:39.801835 | orchestrator | skipping: [testbed-node-1] 2025-04-09 08:57:39.826681 | orchestrator | skipping: [testbed-node-2] 2025-04-09 08:57:39.875986 | orchestrator | skipping: [testbed-node-3] 2025-04-09 08:57:39.877068 | orchestrator | skipping: [testbed-node-4] 2025-04-09 08:57:39.878450 | 
orchestrator | skipping: [testbed-node-5] 2025-04-09 08:57:39.878485 | orchestrator | 2025-04-09 08:57:39.878507 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-04-09 08:57:39.878804 | orchestrator | Wednesday 09 April 2025 08:57:39 +0000 (0:00:00.189) 0:00:12.750 ******* 2025-04-09 08:57:39.932210 | orchestrator | skipping: [testbed-node-0] 2025-04-09 08:57:39.957558 | orchestrator | skipping: [testbed-node-1] 2025-04-09 08:57:39.981231 | orchestrator | skipping: [testbed-node-2] 2025-04-09 08:57:40.018405 | orchestrator | skipping: [testbed-node-3] 2025-04-09 08:57:40.073865 | orchestrator | skipping: [testbed-node-4] 2025-04-09 08:57:40.074919 | orchestrator | skipping: [testbed-node-5] 2025-04-09 08:57:40.075751 | orchestrator | 2025-04-09 08:57:40.076373 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-04-09 08:57:40.076995 | orchestrator | Wednesday 09 April 2025 08:57:40 +0000 (0:00:00.198) 0:00:12.948 ******* 2025-04-09 08:57:40.780442 | orchestrator | changed: [testbed-node-1] 2025-04-09 08:57:40.780840 | orchestrator | changed: [testbed-node-2] 2025-04-09 08:57:40.782141 | orchestrator | changed: [testbed-node-0] 2025-04-09 08:57:40.783246 | orchestrator | changed: [testbed-node-3] 2025-04-09 08:57:40.784052 | orchestrator | changed: [testbed-node-4] 2025-04-09 08:57:40.784866 | orchestrator | changed: [testbed-node-5] 2025-04-09 08:57:40.785545 | orchestrator | 2025-04-09 08:57:40.786593 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-04-09 08:57:40.787247 | orchestrator | Wednesday 09 April 2025 08:57:40 +0000 (0:00:00.704) 0:00:13.652 ******* 2025-04-09 08:57:40.887215 | orchestrator | skipping: [testbed-node-0] 2025-04-09 08:57:40.909378 | orchestrator | skipping: [testbed-node-1] 2025-04-09 08:57:41.019260 | orchestrator | skipping: [testbed-node-2] 2025-04-09 08:57:41.020336 | 
orchestrator | skipping: [testbed-node-3] 2025-04-09 08:57:41.021983 | orchestrator | skipping: [testbed-node-4] 2025-04-09 08:57:41.023264 | orchestrator | skipping: [testbed-node-5] 2025-04-09 08:57:41.024541 | orchestrator | 2025-04-09 08:57:41.025520 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 08:57:41.026765 | orchestrator | 2025-04-09 08:57:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-09 08:57:41.028338 | orchestrator | 2025-04-09 08:57:41 | INFO  | Please wait and do not abort execution. 2025-04-09 08:57:41.028372 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-09 08:57:41.028970 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-09 08:57:41.030215 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-09 08:57:41.031044 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-09 08:57:41.031866 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-09 08:57:41.032782 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-09 08:57:41.034151 | orchestrator | 2025-04-09 08:57:41.034707 | orchestrator | 2025-04-09 08:57:41.035639 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-09 08:57:41.036433 | orchestrator | Wednesday 09 April 2025 08:57:41 +0000 (0:00:00.242) 0:00:13.895 ******* 2025-04-09 08:57:41.037419 | orchestrator | =============================================================================== 2025-04-09 08:57:41.038116 | orchestrator | Gathering Facts --------------------------------------------------------- 
4.30s 2025-04-09 08:57:41.038947 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.31s 2025-04-09 08:57:41.039618 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.25s 2025-04-09 08:57:41.040554 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.20s 2025-04-09 08:57:41.041285 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.83s 2025-04-09 08:57:41.041879 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s 2025-04-09 08:57:41.042455 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.72s 2025-04-09 08:57:41.043740 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.70s 2025-04-09 08:57:41.044230 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.65s 2025-04-09 08:57:41.045014 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.58s 2025-04-09 08:57:41.045750 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s 2025-04-09 08:57:41.046187 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.23s 2025-04-09 08:57:41.046627 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.20s 2025-04-09 08:57:41.047079 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.19s 2025-04-09 08:57:41.047477 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s 2025-04-09 08:57:41.048083 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.19s 2025-04-09 08:57:41.048453 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s 
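The "Set language variables in .bashrc configuration file" task timed above loops over three export lines, which are visible as loop items earlier in this log. A minimal sketch of the resulting snippet in the operator user's shell configuration (the three exports are taken verbatim from the loop items; that they end up appended to `~/.bashrc` is an assumption based on the task name):

```shell
# Locale settings looped over by osism.commons.operator in the log above.
# Appended to the operator's ~/.bashrc (assumption inferred from the task name).
export LANGUAGE=C.UTF-8
export LANG=C.UTF-8
export LC_ALL=C.UTF-8
```

Forcing `C.UTF-8` keeps command output byte-identical across nodes regardless of the image's preinstalled locales, which matters for Ansible modules that parse tool output.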
2025-04-09 08:57:41.602898 | orchestrator | + osism apply --environment custom facts 2025-04-09 08:57:43.182702 | orchestrator | 2025-04-09 08:57:43 | INFO  | Trying to run play facts in environment custom 2025-04-09 08:57:43.247631 | orchestrator | 2025-04-09 08:57:43 | INFO  | Task 4dbbea31-855c-4824-b590-a6d497b0eca0 (facts) was prepared for execution. 2025-04-09 08:57:47.112772 | orchestrator | 2025-04-09 08:57:43 | INFO  | It takes a moment until task 4dbbea31-855c-4824-b590-a6d497b0eca0 (facts) has been started and output is visible here. 2025-04-09 08:57:47.112899 | orchestrator | 2025-04-09 08:57:47.112965 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-04-09 08:57:47.112990 | orchestrator | 2025-04-09 08:57:47.113169 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-04-09 08:57:47.114773 | orchestrator | Wednesday 09 April 2025 08:57:47 +0000 (0:00:00.094) 0:00:00.094 ******* 2025-04-09 08:57:48.478572 | orchestrator | ok: [testbed-manager] 2025-04-09 08:57:48.479702 | orchestrator | changed: [testbed-node-5] 2025-04-09 08:57:48.481727 | orchestrator | changed: [testbed-node-0] 2025-04-09 08:57:48.483588 | orchestrator | changed: [testbed-node-4] 2025-04-09 08:57:48.484880 | orchestrator | changed: [testbed-node-1] 2025-04-09 08:57:48.485758 | orchestrator | changed: [testbed-node-2] 2025-04-09 08:57:48.486846 | orchestrator | changed: [testbed-node-3] 2025-04-09 08:57:48.487361 | orchestrator | 2025-04-09 08:57:48.488358 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-04-09 08:57:48.489468 | orchestrator | Wednesday 09 April 2025 08:57:48 +0000 (0:00:01.366) 0:00:01.461 ******* 2025-04-09 08:57:49.760135 | orchestrator | ok: [testbed-manager] 2025-04-09 08:57:49.760365 | orchestrator | changed: [testbed-node-5] 2025-04-09 08:57:49.765096 | orchestrator | changed: [testbed-node-4] 2025-04-09 
08:57:49.765394 | orchestrator | changed: [testbed-node-2] 2025-04-09 08:57:49.765427 | orchestrator | changed: [testbed-node-3] 2025-04-09 08:57:49.767597 | orchestrator | changed: [testbed-node-1] 2025-04-09 08:57:49.768251 | orchestrator | changed: [testbed-node-0] 2025-04-09 08:57:49.768280 | orchestrator | 2025-04-09 08:57:49.768695 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-04-09 08:57:49.769027 | orchestrator | 2025-04-09 08:57:49.769415 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-04-09 08:57:49.770108 | orchestrator | Wednesday 09 April 2025 08:57:49 +0000 (0:00:01.282) 0:00:02.744 ******* 2025-04-09 08:57:49.885321 | orchestrator | ok: [testbed-node-3] 2025-04-09 08:57:49.885822 | orchestrator | ok: [testbed-node-4] 2025-04-09 08:57:49.886341 | orchestrator | ok: [testbed-node-5] 2025-04-09 08:57:49.886644 | orchestrator | 2025-04-09 08:57:49.887438 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-04-09 08:57:49.887705 | orchestrator | Wednesday 09 April 2025 08:57:49 +0000 (0:00:00.125) 0:00:02.869 ******* 2025-04-09 08:57:50.155807 | orchestrator | ok: [testbed-node-3] 2025-04-09 08:57:50.156322 | orchestrator | ok: [testbed-node-4] 2025-04-09 08:57:50.156661 | orchestrator | ok: [testbed-node-5] 2025-04-09 08:57:50.161732 | orchestrator | 2025-04-09 08:57:50.410767 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-04-09 08:57:50.410913 | orchestrator | Wednesday 09 April 2025 08:57:50 +0000 (0:00:00.271) 0:00:03.141 ******* 2025-04-09 08:57:50.410954 | orchestrator | ok: [testbed-node-3] 2025-04-09 08:57:50.411061 | orchestrator | ok: [testbed-node-4] 2025-04-09 08:57:50.411641 | orchestrator | ok: [testbed-node-5] 2025-04-09 08:57:50.412597 | orchestrator | 2025-04-09 08:57:50.413548 | orchestrator | TASK 
[osism.commons.repository : Include distribution specific repository tasks] *** 2025-04-09 08:57:50.413972 | orchestrator | Wednesday 09 April 2025 08:57:50 +0000 (0:00:00.255) 0:00:03.396 ******* 2025-04-09 08:57:50.579341 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 08:57:50.582355 | orchestrator | 2025-04-09 08:57:50.582965 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-04-09 08:57:50.583862 | orchestrator | Wednesday 09 April 2025 08:57:50 +0000 (0:00:00.168) 0:00:03.565 ******* 2025-04-09 08:57:51.032373 | orchestrator | ok: [testbed-node-3] 2025-04-09 08:57:51.033239 | orchestrator | ok: [testbed-node-4] 2025-04-09 08:57:51.034892 | orchestrator | ok: [testbed-node-5] 2025-04-09 08:57:51.036148 | orchestrator | 2025-04-09 08:57:51.038066 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-04-09 08:57:51.038265 | orchestrator | Wednesday 09 April 2025 08:57:51 +0000 (0:00:00.450) 0:00:04.015 ******* 2025-04-09 08:57:51.144370 | orchestrator | skipping: [testbed-node-3] 2025-04-09 08:57:51.144506 | orchestrator | skipping: [testbed-node-4] 2025-04-09 08:57:51.144599 | orchestrator | skipping: [testbed-node-5] 2025-04-09 08:57:51.145158 | orchestrator | 2025-04-09 08:57:51.145591 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-04-09 08:57:51.146220 | orchestrator | Wednesday 09 April 2025 08:57:51 +0000 (0:00:00.115) 0:00:04.130 ******* 2025-04-09 08:57:52.176206 | orchestrator | changed: [testbed-node-3] 2025-04-09 08:57:52.176416 | orchestrator | changed: [testbed-node-5] 2025-04-09 08:57:52.176709 | orchestrator | changed: [testbed-node-4] 2025-04-09 08:57:52.177407 | orchestrator | 2025-04-09 08:57:52.177738 | orchestrator | TASK [osism.commons.repository : 
Remove sources.list file] ********************* 2025-04-09 08:57:52.179648 | orchestrator | Wednesday 09 April 2025 08:57:52 +0000 (0:00:01.029) 0:00:05.160 ******* 2025-04-09 08:57:52.664241 | orchestrator | ok: [testbed-node-3] 2025-04-09 08:57:52.664908 | orchestrator | ok: [testbed-node-4] 2025-04-09 08:57:52.666175 | orchestrator | ok: [testbed-node-5] 2025-04-09 08:57:52.666889 | orchestrator | 2025-04-09 08:57:52.667690 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-04-09 08:57:52.668596 | orchestrator | Wednesday 09 April 2025 08:57:52 +0000 (0:00:00.489) 0:00:05.649 ******* 2025-04-09 08:57:53.725986 | orchestrator | changed: [testbed-node-5] 2025-04-09 08:57:53.726858 | orchestrator | changed: [testbed-node-4] 2025-04-09 08:57:53.726902 | orchestrator | changed: [testbed-node-3] 2025-04-09 08:57:53.728390 | orchestrator | 2025-04-09 08:57:53.729406 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-04-09 08:57:53.730495 | orchestrator | Wednesday 09 April 2025 08:57:53 +0000 (0:00:01.059) 0:00:06.708 ******* 2025-04-09 08:58:07.135948 | orchestrator | changed: [testbed-node-5] 2025-04-09 08:58:07.136630 | orchestrator | changed: [testbed-node-3] 2025-04-09 08:58:07.136658 | orchestrator | changed: [testbed-node-4] 2025-04-09 08:58:07.136682 | orchestrator | 2025-04-09 08:58:07.136698 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-04-09 08:58:07.137620 | orchestrator | Wednesday 09 April 2025 08:58:07 +0000 (0:00:13.407) 0:00:20.116 ******* 2025-04-09 08:58:07.214524 | orchestrator | skipping: [testbed-node-3] 2025-04-09 08:58:07.256384 | orchestrator | skipping: [testbed-node-4] 2025-04-09 08:58:07.256594 | orchestrator | skipping: [testbed-node-5] 2025-04-09 08:58:07.257174 | orchestrator | 2025-04-09 08:58:07.257913 | orchestrator | TASK [Install required packages (Debian)] 
************************************** 2025-04-09 08:58:07.258310 | orchestrator | Wednesday 09 April 2025 08:58:07 +0000 (0:00:00.127) 0:00:20.243 ******* 2025-04-09 08:58:14.477348 | orchestrator | changed: [testbed-node-5] 2025-04-09 08:58:14.478754 | orchestrator | changed: [testbed-node-4] 2025-04-09 08:58:14.478830 | orchestrator | changed: [testbed-node-3] 2025-04-09 08:58:14.481086 | orchestrator | 2025-04-09 08:58:14.482241 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-04-09 08:58:14.483090 | orchestrator | Wednesday 09 April 2025 08:58:14 +0000 (0:00:07.218) 0:00:27.461 ******* 2025-04-09 08:58:14.903814 | orchestrator | ok: [testbed-node-4] 2025-04-09 08:58:14.904828 | orchestrator | ok: [testbed-node-3] 2025-04-09 08:58:14.906177 | orchestrator | ok: [testbed-node-5] 2025-04-09 08:58:14.907001 | orchestrator | 2025-04-09 08:58:14.907721 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-04-09 08:58:14.908403 | orchestrator | Wednesday 09 April 2025 08:58:14 +0000 (0:00:00.428) 0:00:27.890 ******* 2025-04-09 08:58:18.457850 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-04-09 08:58:18.458065 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-04-09 08:58:18.459610 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-04-09 08:58:18.461993 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-04-09 08:58:18.462921 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-04-09 08:58:18.463612 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-04-09 08:58:18.465117 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-04-09 08:58:18.466263 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-04-09 08:58:18.466683 | 
orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-04-09 08:58:18.467775 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-04-09 08:58:18.468551 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-04-09 08:58:18.469490 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-04-09 08:58:18.469652 | orchestrator | 2025-04-09 08:58:18.470085 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-09 08:58:18.470751 | orchestrator | Wednesday 09 April 2025 08:58:18 +0000 (0:00:03.552) 0:00:31.442 ******* 2025-04-09 08:58:19.627859 | orchestrator | ok: [testbed-node-3] 2025-04-09 08:58:19.629365 | orchestrator | ok: [testbed-node-5] 2025-04-09 08:58:19.631111 | orchestrator | ok: [testbed-node-4] 2025-04-09 08:58:19.631743 | orchestrator | 2025-04-09 08:58:19.632978 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-09 08:58:19.634171 | orchestrator | 2025-04-09 08:58:19.635566 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-09 08:58:19.635735 | orchestrator | Wednesday 09 April 2025 08:58:19 +0000 (0:00:01.170) 0:00:32.612 ******* 2025-04-09 08:58:23.490260 | orchestrator | ok: [testbed-node-0] 2025-04-09 08:58:23.490528 | orchestrator | ok: [testbed-node-2] 2025-04-09 08:58:23.492085 | orchestrator | ok: [testbed-node-1] 2025-04-09 08:58:23.493199 | orchestrator | ok: [testbed-manager] 2025-04-09 08:58:23.494218 | orchestrator | ok: [testbed-node-5] 2025-04-09 08:58:23.495398 | orchestrator | ok: [testbed-node-4] 2025-04-09 08:58:23.495936 | orchestrator | ok: [testbed-node-3] 2025-04-09 08:58:23.496756 | orchestrator | 2025-04-09 08:58:23.498941 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 08:58:23.499061 | 
orchestrator | 2025-04-09 08:58:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-09 08:58:23.499159 | orchestrator | 2025-04-09 08:58:23 | INFO  | Please wait and do not abort execution. 2025-04-09 08:58:23.499975 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 08:58:23.500645 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 08:58:23.502439 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 08:58:23.502502 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 08:58:23.502522 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-09 08:58:23.502906 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-09 08:58:23.503411 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-09 08:58:23.503734 | orchestrator | 2025-04-09 08:58:23.504522 | orchestrator | 2025-04-09 08:58:23.504802 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-09 08:58:23.505022 | orchestrator | Wednesday 09 April 2025 08:58:23 +0000 (0:00:03.863) 0:00:36.475 ******* 2025-04-09 08:58:23.505660 | orchestrator | =============================================================================== 2025-04-09 08:58:23.505815 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.41s 2025-04-09 08:58:23.506716 | orchestrator | Install required packages (Debian) -------------------------------------- 7.22s 2025-04-09 08:58:23.507094 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.86s 
2025-04-09 08:58:23.507124 | orchestrator | Copy fact files --------------------------------------------------------- 3.55s 2025-04-09 08:58:23.507320 | orchestrator | Create custom facts directory ------------------------------------------- 1.37s 2025-04-09 08:58:23.507724 | orchestrator | Copy fact file ---------------------------------------------------------- 1.28s 2025-04-09 08:58:23.507942 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.17s 2025-04-09 08:58:23.508235 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s 2025-04-09 08:58:23.509033 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s 2025-04-09 08:58:23.509137 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.49s 2025-04-09 08:58:23.509844 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s 2025-04-09 08:58:23.510110 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s 2025-04-09 08:58:23.510488 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.27s 2025-04-09 08:58:23.510570 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.26s 2025-04-09 08:58:23.510892 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.17s 2025-04-09 08:58:23.511302 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.13s 2025-04-09 08:58:23.511608 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.13s 2025-04-09 08:58:23.512130 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2025-04-09 08:58:23.937345 | orchestrator | + osism apply bootstrap 2025-04-09 08:58:25.618380 | orchestrator | 2025-04-09 08:58:25 | 
INFO  | Task 1d967415-1f55-4aae-b1d9-1369e713a095 (bootstrap) was prepared for execution. 2025-04-09 08:58:29.610929 | orchestrator | 2025-04-09 08:58:25 | INFO  | It takes a moment until task 1d967415-1f55-4aae-b1d9-1369e713a095 (bootstrap) has been started and output is visible here. 2025-04-09 08:58:29.611074 | orchestrator | 2025-04-09 08:58:29.611835 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-04-09 08:58:29.611864 | orchestrator | 2025-04-09 08:58:29.611886 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-04-09 08:58:29.613494 | orchestrator | Wednesday 09 April 2025 08:58:29 +0000 (0:00:00.124) 0:00:00.124 ******* 2025-04-09 08:58:29.689714 | orchestrator | ok: [testbed-manager] 2025-04-09 08:58:29.708212 | orchestrator | ok: [testbed-node-0] 2025-04-09 08:58:29.728792 | orchestrator | ok: [testbed-node-1] 2025-04-09 08:58:29.782958 | orchestrator | ok: [testbed-node-2] 2025-04-09 08:58:29.786846 | orchestrator | ok: [testbed-node-3] 2025-04-09 08:58:29.787960 | orchestrator | ok: [testbed-node-4] 2025-04-09 08:58:29.788182 | orchestrator | ok: [testbed-node-5] 2025-04-09 08:58:29.788942 | orchestrator | 2025-04-09 08:58:29.790227 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-09 08:58:29.790845 | orchestrator | 2025-04-09 08:58:29.790891 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-09 08:58:29.791118 | orchestrator | Wednesday 09 April 2025 08:58:29 +0000 (0:00:00.177) 0:00:00.301 ******* 2025-04-09 08:58:33.457780 | orchestrator | ok: [testbed-node-2] 2025-04-09 08:58:33.459063 | orchestrator | ok: [testbed-node-0] 2025-04-09 08:58:33.459108 | orchestrator | ok: [testbed-node-1] 2025-04-09 08:58:33.459404 | orchestrator | ok: [testbed-manager] 2025-04-09 08:58:33.460020 | orchestrator | ok: [testbed-node-5] 2025-04-09 
08:58:33.460738 | orchestrator | ok: [testbed-node-4] 2025-04-09 08:58:33.461454 | orchestrator | ok: [testbed-node-3] 2025-04-09 08:58:33.461991 | orchestrator | 2025-04-09 08:58:33.463318 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-04-09 08:58:33.463544 | orchestrator | 2025-04-09 08:58:33.463573 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-09 08:58:33.464542 | orchestrator | Wednesday 09 April 2025 08:58:33 +0000 (0:00:03.673) 0:00:03.975 ******* 2025-04-09 08:58:33.521965 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-04-09 08:58:33.561712 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-04-09 08:58:33.562143 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-04-09 08:58:33.562431 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-04-09 08:58:33.604027 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-09 08:58:33.605683 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-04-09 08:58:33.606227 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-04-09 08:58:33.606690 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-09 08:58:33.607050 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-09 08:58:33.652654 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-09 08:58:33.653394 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-04-09 08:58:33.653923 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-09 08:58:33.654342 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-04-09 08:58:33.654791 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-09 08:58:33.655846 | orchestrator | skipping: 
[testbed-node-1] => (item=testbed-node-2)  2025-04-09 08:58:33.919647 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-04-09 08:58:33.920752 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-09 08:58:33.921769 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-09 08:58:33.922595 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-09 08:58:33.922975 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-04-09 08:58:33.923750 | orchestrator | skipping: [testbed-manager] 2025-04-09 08:58:33.925135 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-04-09 08:58:33.926346 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-09 08:58:33.926581 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-09 08:58:33.927183 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-09 08:58:33.927739 | orchestrator | skipping: [testbed-node-0] 2025-04-09 08:58:33.928428 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-04-09 08:58:33.928706 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-09 08:58:33.929150 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-09 08:58:33.929662 | orchestrator | skipping: [testbed-node-1] 2025-04-09 08:58:33.930587 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-09 08:58:33.931749 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-09 08:58:33.931957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-09 08:58:33.934110 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-09 08:58:33.934175 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-09 08:58:33.934723 | orchestrator | skipping: [testbed-node-5] => 
(item=testbed-manager)
2025-04-09 08:58:33.934748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-04-09 08:58:33.934767 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-04-09 08:58:33.935323 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-09 08:58:33.936136 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-04-09 08:58:33.936644 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-04-09 08:58:33.937457 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-04-09 08:58:33.938105 | orchestrator | skipping: [testbed-node-2]
2025-04-09 08:58:33.940727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-09 08:58:33.940936 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-04-09 08:58:33.941356 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-04-09 08:58:33.941444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-09 08:58:33.941820 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-04-09 08:58:33.942219 | orchestrator | skipping: [testbed-node-3]
2025-04-09 08:58:33.942537 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-04-09 08:58:33.942971 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-04-09 08:58:33.943395 | orchestrator | skipping: [testbed-node-4]
2025-04-09 08:58:33.943740 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-04-09 08:58:33.946640 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-04-09 08:58:35.100407 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-04-09 08:58:35.100556 | orchestrator | skipping: [testbed-node-5]
2025-04-09 08:58:35.100605 | orchestrator |
2025-04-09 08:58:35.100622 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-04-09 08:58:35.100637 | orchestrator |
2025-04-09 08:58:35.100652 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-04-09 08:58:35.100665 | orchestrator | Wednesday 09 April 2025 08:58:33 +0000 (0:00:00.462) 0:00:04.437 *******
2025-04-09 08:58:35.100696 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:58:35.100806 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:58:35.100856 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:58:35.100875 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:58:35.103847 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:58:35.104009 | orchestrator | ok: [testbed-manager]
2025-04-09 08:58:35.104264 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:58:35.104615 | orchestrator |
2025-04-09 08:58:35.104886 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-04-09 08:58:35.105395 | orchestrator | Wednesday 09 April 2025 08:58:35 +0000 (0:00:01.181) 0:00:05.618 *******
2025-04-09 08:58:36.338719 | orchestrator | ok: [testbed-manager]
2025-04-09 08:58:36.338998 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:58:36.339867 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:58:36.341036 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:58:36.342321 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:58:36.345074 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:58:36.345750 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:58:36.346424 | orchestrator |
2025-04-09 08:58:36.347687 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-04-09 08:58:36.348370 | orchestrator | Wednesday 09 April 2025 08:58:36 +0000 (0:00:01.235) 0:00:06.853 *******
2025-04-09 08:58:36.613236 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 08:58:36.613624 | orchestrator |
2025-04-09 08:58:36.613668 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-04-09 08:58:36.614210 | orchestrator | Wednesday 09 April 2025 08:58:36 +0000 (0:00:00.273) 0:00:07.127 *******
2025-04-09 08:58:38.708659 | orchestrator | changed: [testbed-manager]
2025-04-09 08:58:38.709219 | orchestrator | changed: [testbed-node-2]
2025-04-09 08:58:38.710244 | orchestrator | changed: [testbed-node-0]
2025-04-09 08:58:38.710992 | orchestrator | changed: [testbed-node-4]
2025-04-09 08:58:38.712517 | orchestrator | changed: [testbed-node-5]
2025-04-09 08:58:38.713797 | orchestrator | changed: [testbed-node-3]
2025-04-09 08:58:38.714883 | orchestrator | changed: [testbed-node-1]
2025-04-09 08:58:38.715650 | orchestrator |
2025-04-09 08:58:38.716404 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-04-09 08:58:38.717235 | orchestrator | Wednesday 09 April 2025 08:58:38 +0000 (0:00:02.096) 0:00:09.224 *******
2025-04-09 08:58:38.796939 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:58:38.979854 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 08:58:38.981078 | orchestrator |
2025-04-09 08:58:38.981883 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-04-09 08:58:38.983155 | orchestrator | Wednesday 09 April 2025 08:58:38 +0000 (0:00:00.272) 0:00:09.496 *******
2025-04-09 08:58:39.988083 | orchestrator | changed: [testbed-node-2]
2025-04-09 08:58:39.988471 | orchestrator | changed: [testbed-node-1]
2025-04-09 08:58:39.989163 | orchestrator | changed: [testbed-node-0]
2025-04-09 08:58:39.990417 | orchestrator | changed: [testbed-node-5]
2025-04-09 08:58:39.992639 | orchestrator | changed: [testbed-node-3]
2025-04-09 08:58:39.992945 | orchestrator | changed: [testbed-node-4]
2025-04-09 08:58:39.994409 | orchestrator |
2025-04-09 08:58:39.995904 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-04-09 08:58:39.997109 | orchestrator | Wednesday 09 April 2025 08:58:39 +0000 (0:00:01.006) 0:00:10.502 *******
2025-04-09 08:58:40.088585 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:58:40.638445 | orchestrator | changed: [testbed-node-2]
2025-04-09 08:58:40.639403 | orchestrator | changed: [testbed-node-5]
2025-04-09 08:58:40.640866 | orchestrator | changed: [testbed-node-0]
2025-04-09 08:58:40.642399 | orchestrator | changed: [testbed-node-1]
2025-04-09 08:58:40.643777 | orchestrator | changed: [testbed-node-3]
2025-04-09 08:58:40.645043 | orchestrator | changed: [testbed-node-4]
2025-04-09 08:58:40.646000 | orchestrator |
2025-04-09 08:58:40.646925 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-04-09 08:58:40.648024 | orchestrator | Wednesday 09 April 2025 08:58:40 +0000 (0:00:00.652) 0:00:11.155 *******
2025-04-09 08:58:40.762790 | orchestrator | skipping: [testbed-node-0]
2025-04-09 08:58:40.785180 | orchestrator | skipping: [testbed-node-1]
2025-04-09 08:58:40.808164 | orchestrator | skipping: [testbed-node-2]
2025-04-09 08:58:41.079370 | orchestrator | skipping: [testbed-node-3]
2025-04-09 08:58:41.079521 | orchestrator | skipping: [testbed-node-4]
2025-04-09 08:58:41.080167 | orchestrator | skipping: [testbed-node-5]
2025-04-09 08:58:41.080819 | orchestrator | ok: [testbed-manager]
2025-04-09 08:58:41.081325 | orchestrator |
2025-04-09 08:58:41.081652 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-04-09 08:58:41.082531 | orchestrator | Wednesday 09 April 2025 08:58:41 +0000 (0:00:00.441) 0:00:11.596 *******
2025-04-09 08:58:41.180553 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:58:41.201805 | orchestrator | skipping: [testbed-node-0]
2025-04-09 08:58:41.229431 | orchestrator | skipping: [testbed-node-1]
2025-04-09 08:58:41.252787 | orchestrator | skipping: [testbed-node-2]
2025-04-09 08:58:41.312949 | orchestrator | skipping: [testbed-node-3]
2025-04-09 08:58:41.313933 | orchestrator | skipping: [testbed-node-4]
2025-04-09 08:58:41.314475 | orchestrator | skipping: [testbed-node-5]
2025-04-09 08:58:41.314901 | orchestrator |
2025-04-09 08:58:41.315328 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-04-09 08:58:41.315946 | orchestrator | Wednesday 09 April 2025 08:58:41 +0000 (0:00:00.233) 0:00:11.830 *******
2025-04-09 08:58:41.656814 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 08:58:41.658011 | orchestrator |
2025-04-09 08:58:41.661399 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-04-09 08:58:41.966772 | orchestrator | Wednesday 09 April 2025 08:58:41 +0000 (0:00:00.343) 0:00:12.174 *******
2025-04-09 08:58:41.966876 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 08:58:41.969366 | orchestrator |
2025-04-09 08:58:41.969750 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-04-09 08:58:41.969783 | orchestrator | Wednesday 09 April 2025 08:58:41 +0000 (0:00:00.308) 0:00:12.482 *******
2025-04-09 08:58:43.400690 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:58:43.401968 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:58:43.402796 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:58:43.403738 | orchestrator | ok: [testbed-manager]
2025-04-09 08:58:43.404583 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:58:43.405258 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:58:43.406159 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:58:43.406588 | orchestrator |
2025-04-09 08:58:43.407416 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-04-09 08:58:43.407886 | orchestrator | Wednesday 09 April 2025 08:58:43 +0000 (0:00:01.434) 0:00:13.916 *******
2025-04-09 08:58:43.482910 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:58:43.508313 | orchestrator | skipping: [testbed-node-0]
2025-04-09 08:58:43.552838 | orchestrator | skipping: [testbed-node-1]
2025-04-09 08:58:43.567002 | orchestrator | skipping: [testbed-node-2]
2025-04-09 08:58:43.638398 | orchestrator | skipping: [testbed-node-3]
2025-04-09 08:58:43.639560 | orchestrator | skipping: [testbed-node-4]
2025-04-09 08:58:43.642158 | orchestrator | skipping: [testbed-node-5]
2025-04-09 08:58:43.642867 | orchestrator |
2025-04-09 08:58:43.643764 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-04-09 08:58:43.644050 | orchestrator | Wednesday 09 April 2025 08:58:43 +0000 (0:00:00.237) 0:00:14.154 *******
2025-04-09 08:58:44.186334 | orchestrator | ok: [testbed-manager]
2025-04-09 08:58:44.187182 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:58:44.187542 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:58:44.188267 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:58:44.188807 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:58:44.189208 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:58:44.190115 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:58:44.190407 | orchestrator |
2025-04-09 08:58:44.190814 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-04-09 08:58:44.191533 | orchestrator | Wednesday 09 April 2025 08:58:44 +0000 (0:00:00.541) 0:00:14.696 *******
2025-04-09 08:58:44.269058 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:58:44.321944 | orchestrator | skipping: [testbed-node-0]
2025-04-09 08:58:44.366973 | orchestrator | skipping: [testbed-node-1]
2025-04-09 08:58:44.440118 | orchestrator | skipping: [testbed-node-2]
2025-04-09 08:58:44.440865 | orchestrator | skipping: [testbed-node-3]
2025-04-09 08:58:44.442542 | orchestrator | skipping: [testbed-node-4]
2025-04-09 08:58:44.443169 | orchestrator | skipping: [testbed-node-5]
2025-04-09 08:58:44.444003 | orchestrator |
2025-04-09 08:58:44.444718 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-04-09 08:58:44.445825 | orchestrator | Wednesday 09 April 2025 08:58:44 +0000 (0:00:00.261) 0:00:14.957 *******
2025-04-09 08:58:45.007060 | orchestrator | ok: [testbed-manager]
2025-04-09 08:58:45.007742 | orchestrator | changed: [testbed-node-0]
2025-04-09 08:58:45.011333 | orchestrator | changed: [testbed-node-1]
2025-04-09 08:58:45.011489 | orchestrator | changed: [testbed-node-2]
2025-04-09 08:58:45.012939 | orchestrator | changed: [testbed-node-3]
2025-04-09 08:58:45.013885 | orchestrator | changed: [testbed-node-4]
2025-04-09 08:58:45.014855 | orchestrator | changed: [testbed-node-5]
2025-04-09 08:58:45.016349 | orchestrator |
2025-04-09 08:58:45.017435 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-04-09 08:58:45.018417 | orchestrator | Wednesday 09 April 2025 08:58:44 +0000 (0:00:00.557) 0:00:15.514 *******
2025-04-09 08:58:46.485686 | orchestrator | ok: [testbed-manager]
2025-04-09 08:58:46.487172 | orchestrator | changed: [testbed-node-2]
2025-04-09 08:58:46.489084 | orchestrator | changed: [testbed-node-1]
2025-04-09 08:58:46.490115 | orchestrator | changed: [testbed-node-0]
2025-04-09 08:58:46.491017 | orchestrator | changed: [testbed-node-4]
2025-04-09 08:58:46.491800 | orchestrator | changed: [testbed-node-3]
2025-04-09 08:58:46.492813 | orchestrator | changed: [testbed-node-5]
2025-04-09 08:58:46.493424 | orchestrator |
2025-04-09 08:58:46.494109 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-04-09 08:58:46.494611 | orchestrator | Wednesday 09 April 2025 08:58:46 +0000 (0:00:01.480) 0:00:16.995 *******
2025-04-09 08:58:47.518642 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:58:47.520358 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:58:47.521076 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:58:47.521974 | orchestrator | ok: [testbed-manager]
2025-04-09 08:58:47.522706 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:58:47.523398 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:58:47.525693 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:58:47.526075 | orchestrator |
2025-04-09 08:58:47.526531 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-04-09 08:58:47.526965 | orchestrator | Wednesday 09 April 2025 08:58:47 +0000 (0:00:01.030) 0:00:18.025 *******
2025-04-09 08:58:47.937709 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 08:58:47.938835 | orchestrator |
2025-04-09 08:58:47.938870 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-04-09 08:58:47.938902 | orchestrator | Wednesday 09 April 2025 08:58:47 +0000 (0:00:00.423) 0:00:18.449 *******
2025-04-09 08:58:48.046415 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:58:49.262754 | orchestrator | changed: [testbed-node-2]
2025-04-09 08:58:49.265894 | orchestrator | changed: [testbed-node-1]
2025-04-09 08:58:49.266214 | orchestrator | changed: [testbed-node-3]
2025-04-09 08:58:49.266310 | orchestrator | changed: [testbed-node-5]
2025-04-09 08:58:49.267420 | orchestrator | changed: [testbed-node-0]
2025-04-09 08:58:49.268563 | orchestrator | changed: [testbed-node-4]
2025-04-09 08:58:49.269356 | orchestrator |
2025-04-09 08:58:49.270119 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-04-09 08:58:49.271471 | orchestrator | Wednesday 09 April 2025 08:58:49 +0000 (0:00:01.327) 0:00:19.777 *******
2025-04-09 08:58:49.365483 | orchestrator | ok: [testbed-manager]
2025-04-09 08:58:49.392785 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:58:49.423434 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:58:49.453407 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:58:49.541699 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:58:49.542963 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:58:49.543698 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:58:49.545348 | orchestrator |
2025-04-09 08:58:49.545973 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-04-09 08:58:49.546630 | orchestrator | Wednesday 09 April 2025 08:58:49 +0000 (0:00:00.279) 0:00:20.056 *******
2025-04-09 08:58:49.643350 | orchestrator | ok: [testbed-manager]
2025-04-09 08:58:49.678465 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:58:49.712945 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:58:49.747145 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:58:49.840559 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:58:49.841341 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:58:49.842919 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:58:49.843844 | orchestrator |
2025-04-09 08:58:49.846419 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-04-09 08:58:49.847134 | orchestrator | Wednesday 09 April 2025 08:58:49 +0000 (0:00:00.300) 0:00:20.357 *******
2025-04-09 08:58:49.940220 | orchestrator | ok: [testbed-manager]
2025-04-09 08:58:49.976423 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:58:50.013067 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:58:50.047001 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:58:50.139604 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:58:50.141229 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:58:50.142254 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:58:50.143534 | orchestrator |
2025-04-09 08:58:50.144128 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-04-09 08:58:50.147181 | orchestrator | Wednesday 09 April 2025 08:58:50 +0000 (0:00:00.297) 0:00:20.655 *******
2025-04-09 08:58:50.470599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 08:58:50.472842 | orchestrator |
2025-04-09 08:58:50.472878 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-04-09 08:58:50.475395 | orchestrator | Wednesday 09 April 2025 08:58:50 +0000 (0:00:00.332) 0:00:20.987 *******
2025-04-09 08:58:51.005996 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:58:51.006223 | orchestrator | ok: [testbed-manager]
2025-04-09 08:58:51.007586 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:58:51.009152 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:58:51.009321 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:58:51.010232 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:58:51.010911 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:58:51.011974 | orchestrator |
2025-04-09 08:58:51.012815 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-04-09 08:58:51.013420 | orchestrator | Wednesday 09 April 2025 08:58:50 +0000 (0:00:00.532) 0:00:21.519 *******
2025-04-09 08:58:51.110752 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:58:51.149384 | orchestrator | skipping: [testbed-node-0]
2025-04-09 08:58:51.176218 | orchestrator | skipping: [testbed-node-1]
2025-04-09 08:58:51.213115 | orchestrator | skipping: [testbed-node-2]
2025-04-09 08:58:51.294211 | orchestrator | skipping: [testbed-node-3]
2025-04-09 08:58:51.294614 | orchestrator | skipping: [testbed-node-4]
2025-04-09 08:58:51.294637 | orchestrator | skipping: [testbed-node-5]
2025-04-09 08:58:51.295379 | orchestrator |
2025-04-09 08:58:51.295612 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-04-09 08:58:52.349146 | orchestrator | Wednesday 09 April 2025 08:58:51 +0000 (0:00:00.291) 0:00:21.811 *******
2025-04-09 08:58:52.349339 | orchestrator | ok: [testbed-manager]
2025-04-09 08:58:52.350203 | orchestrator | changed: [testbed-node-2]
2025-04-09 08:58:52.350247 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:58:52.351052 | orchestrator | changed: [testbed-node-1]
2025-04-09 08:58:52.352414 | orchestrator | changed: [testbed-node-0]
2025-04-09 08:58:52.352875 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:58:52.353782 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:58:52.354590 | orchestrator |
2025-04-09 08:58:52.355327 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-04-09 08:58:52.356558 | orchestrator | Wednesday 09 April 2025 08:58:52 +0000 (0:00:01.051) 0:00:22.863 *******
2025-04-09 08:58:52.930766 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:58:52.931532 | orchestrator | ok: [testbed-manager]
2025-04-09 08:58:52.935226 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:58:52.935850 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:58:52.936624 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:58:52.937317 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:58:52.937999 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:58:52.938767 | orchestrator |
2025-04-09 08:58:52.939538 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-04-09 08:58:52.940244 | orchestrator | Wednesday 09 April 2025 08:58:52 +0000 (0:00:00.583) 0:00:23.446 *******
2025-04-09 08:58:54.090919 | orchestrator | ok: [testbed-manager]
2025-04-09 08:58:54.091853 | orchestrator | changed: [testbed-node-0]
2025-04-09 08:58:54.092055 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:58:54.092584 | orchestrator | changed: [testbed-node-2]
2025-04-09 08:58:54.092844 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:58:54.093399 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:58:54.093698 | orchestrator | changed: [testbed-node-1]
2025-04-09 08:58:54.095265 | orchestrator |
2025-04-09 08:58:54.095685 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-04-09 08:58:54.096135 | orchestrator | Wednesday 09 April 2025 08:58:54 +0000 (0:00:01.157) 0:00:24.604 *******
2025-04-09 08:59:07.659667 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:59:07.660990 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:59:07.661623 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:59:07.661658 | orchestrator | changed: [testbed-manager]
2025-04-09 08:59:07.662720 | orchestrator | changed: [testbed-node-2]
2025-04-09 08:59:07.663512 | orchestrator | changed: [testbed-node-0]
2025-04-09 08:59:07.664192 | orchestrator | changed: [testbed-node-1]
2025-04-09 08:59:07.665367 | orchestrator |
2025-04-09 08:59:07.665982 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-04-09 08:59:07.666056 | orchestrator | Wednesday 09 April 2025 08:59:07 +0000 (0:00:13.565) 0:00:38.170 *******
2025-04-09 08:59:07.742356 | orchestrator | ok: [testbed-manager]
2025-04-09 08:59:07.769743 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:59:07.798319 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:59:07.825789 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:59:07.887478 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:59:07.888725 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:59:07.890666 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:59:07.891823 | orchestrator |
2025-04-09 08:59:07.892956 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-04-09 08:59:07.895034 | orchestrator | Wednesday 09 April 2025 08:59:07 +0000 (0:00:00.231) 0:00:38.401 *******
2025-04-09 08:59:07.963179 | orchestrator | ok: [testbed-manager]
2025-04-09 08:59:07.997758 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:59:08.036904 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:59:08.069256 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:59:08.142649 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:59:08.147160 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:59:08.147791 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:59:08.147816 | orchestrator |
2025-04-09 08:59:08.147836 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-04-09 08:59:08.148628 | orchestrator | Wednesday 09 April 2025 08:59:08 +0000 (0:00:00.257) 0:00:38.659 *******
2025-04-09 08:59:08.226709 | orchestrator | ok: [testbed-manager]
2025-04-09 08:59:08.255868 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:59:08.283374 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:59:08.321758 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:59:08.408440 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:59:08.409094 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:59:08.409918 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:59:08.410597 | orchestrator |
2025-04-09 08:59:08.410888 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-04-09 08:59:08.411400 | orchestrator | Wednesday 09 April 2025 08:59:08 +0000 (0:00:00.265) 0:00:38.924 *******
2025-04-09 08:59:08.747690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 08:59:08.750001 | orchestrator |
2025-04-09 08:59:08.750332 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-04-09 08:59:08.750429 | orchestrator | Wednesday 09 April 2025 08:59:08 +0000 (0:00:00.334) 0:00:39.258 *******
2025-04-09 08:59:10.404131 | orchestrator | ok: [testbed-manager]
2025-04-09 08:59:10.405226 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:59:10.405598 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:59:10.407619 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:59:10.408747 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:59:10.409584 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:59:10.411180 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:59:10.411815 | orchestrator |
2025-04-09 08:59:10.413025 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-04-09 08:59:10.413482 | orchestrator | Wednesday 09 April 2025 08:59:10 +0000 (0:00:01.662) 0:00:40.921 *******
2025-04-09 08:59:11.547804 | orchestrator | changed: [testbed-manager]
2025-04-09 08:59:11.548502 | orchestrator | changed: [testbed-node-0]
2025-04-09 08:59:11.548539 | orchestrator | changed: [testbed-node-1]
2025-04-09 08:59:11.548564 | orchestrator | changed: [testbed-node-2]
2025-04-09 08:59:11.549232 | orchestrator | changed: [testbed-node-3]
2025-04-09 08:59:11.550012 | orchestrator | changed: [testbed-node-4]
2025-04-09 08:59:11.550946 | orchestrator | changed: [testbed-node-5]
2025-04-09 08:59:11.551220 | orchestrator |
2025-04-09 08:59:11.552079 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-04-09 08:59:11.552722 | orchestrator | Wednesday 09 April 2025 08:59:11 +0000 (0:00:01.138) 0:00:42.059 *******
2025-04-09 08:59:12.346762 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:59:12.347831 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:59:12.348579 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:59:12.349534 | orchestrator | ok: [testbed-manager]
2025-04-09 08:59:12.351118 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:59:12.352097 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:59:12.353391 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:59:12.354377 | orchestrator |
2025-04-09 08:59:12.355603 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-04-09 08:59:12.356294 | orchestrator | Wednesday 09 April 2025 08:59:12 +0000 (0:00:00.803) 0:00:42.863 *******
2025-04-09 08:59:12.690792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 08:59:12.690958 | orchestrator |
2025-04-09 08:59:12.691699 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-04-09 08:59:12.692386 | orchestrator | Wednesday 09 April 2025 08:59:12 +0000 (0:00:00.341) 0:00:43.205 *******
2025-04-09 08:59:13.756551 | orchestrator | changed: [testbed-manager]
2025-04-09 08:59:13.760814 | orchestrator | changed: [testbed-node-0]
2025-04-09 08:59:13.760934 | orchestrator | changed: [testbed-node-2]
2025-04-09 08:59:13.760960 | orchestrator | changed: [testbed-node-1]
2025-04-09 08:59:13.762254 | orchestrator | changed: [testbed-node-3]
2025-04-09 08:59:13.762311 | orchestrator | changed: [testbed-node-5]
2025-04-09 08:59:13.765600 | orchestrator | changed: [testbed-node-4]
2025-04-09 08:59:13.766548 | orchestrator |
2025-04-09 08:59:13.766971 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-04-09 08:59:13.767452 | orchestrator | Wednesday 09 April 2025 08:59:13 +0000 (0:00:01.066) 0:00:44.271 *******
2025-04-09 08:59:13.863994 | orchestrator | skipping: [testbed-manager]
2025-04-09 08:59:13.892407 | orchestrator | skipping: [testbed-node-0]
2025-04-09 08:59:13.919390 | orchestrator | skipping: [testbed-node-1]
2025-04-09 08:59:14.085000 | orchestrator | skipping: [testbed-node-2]
2025-04-09 08:59:14.085632 | orchestrator | skipping: [testbed-node-3]
2025-04-09 08:59:14.086626 | orchestrator | skipping: [testbed-node-4]
2025-04-09 08:59:14.090119 | orchestrator | skipping: [testbed-node-5]
2025-04-09 08:59:26.555615 | orchestrator |
2025-04-09 08:59:26.555747 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-04-09 08:59:26.555768 | orchestrator | Wednesday 09 April 2025 08:59:14 +0000 (0:00:00.330) 0:00:44.602 *******
2025-04-09 08:59:26.555799 | orchestrator | changed: [testbed-node-5]
2025-04-09 08:59:26.555906 | orchestrator | changed: [testbed-node-2]
2025-04-09 08:59:26.555927 | orchestrator | changed: [testbed-node-4]
2025-04-09 08:59:26.555947 | orchestrator | changed: [testbed-node-0]
2025-04-09 08:59:26.556002 | orchestrator | changed: [testbed-node-1]
2025-04-09 08:59:26.556509 | orchestrator | changed: [testbed-node-3]
2025-04-09 08:59:26.556830 | orchestrator | changed: [testbed-manager]
2025-04-09 08:59:26.557335 | orchestrator |
2025-04-09 08:59:26.557658 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-04-09 08:59:26.558654 | orchestrator | Wednesday 09 April 2025 08:59:26 +0000 (0:00:12.463) 0:00:57.065 *******
2025-04-09 08:59:27.687757 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:59:27.688232 | orchestrator | ok: [testbed-manager]
2025-04-09 08:59:27.689567 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:59:27.690451 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:59:27.692383 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:59:27.693191 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:59:27.693219 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:59:27.693243 | orchestrator |
2025-04-09 08:59:27.693594 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-04-09 08:59:27.694461 | orchestrator | Wednesday 09 April 2025 08:59:27 +0000 (0:00:01.133) 0:00:58.199 *******
2025-04-09 08:59:28.595359 | orchestrator | ok: [testbed-manager]
2025-04-09 08:59:28.597601 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:59:28.598116 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:59:28.598164 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:59:28.598187 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:59:28.598947 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:59:28.600089 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:59:28.601044 | orchestrator |
2025-04-09 08:59:28.601976 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-04-09 08:59:28.603022 | orchestrator | Wednesday 09 April 2025 08:59:28 +0000 (0:00:00.910) 0:00:59.109 *******
2025-04-09 08:59:28.678518 | orchestrator | ok: [testbed-manager]
2025-04-09 08:59:28.712404 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:59:28.744404 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:59:28.768057 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:59:28.842849 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:59:28.843525 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:59:28.844903 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:59:28.845595 | orchestrator |
2025-04-09 08:59:28.846155 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-04-09 08:59:28.847049 | orchestrator | Wednesday 09 April 2025 08:59:28 +0000 (0:00:00.250) 0:00:59.360 *******
2025-04-09 08:59:28.929990 | orchestrator | ok: [testbed-manager]
2025-04-09 08:59:28.960734 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:59:28.991797 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:59:29.034497 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:59:29.124519 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:59:29.125811 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:59:29.125858 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:59:29.126773 | orchestrator |
2025-04-09 08:59:29.127908 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-04-09 08:59:29.128321 | orchestrator | Wednesday 09 April 2025 08:59:29 +0000 (0:00:00.280) 0:00:59.640 *******
2025-04-09 08:59:29.431698 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 08:59:29.432471 | orchestrator |
2025-04-09 08:59:29.433543 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-04-09 08:59:29.434670 | orchestrator | Wednesday 09 April 2025 08:59:29 +0000 (0:00:00.307) 0:00:59.948 *******
2025-04-09 08:59:30.966953 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:59:30.968057 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:59:30.968814 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:59:30.969769 | orchestrator | ok: [testbed-manager]
2025-04-09 08:59:30.972141 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:59:30.972547 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:59:30.972576 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:59:30.972922 | orchestrator |
2025-04-09 08:59:30.973219 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-04-09 08:59:30.973970 | orchestrator | Wednesday 09 April 2025 08:59:30 +0000 (0:00:01.532) 0:01:01.481 *******
2025-04-09 08:59:31.545093 | orchestrator | changed: [testbed-node-2]
2025-04-09 08:59:31.546778 | orchestrator | changed: [testbed-node-0]
2025-04-09 08:59:31.547614 | orchestrator | changed: [testbed-node-1]
2025-04-09 08:59:31.548805 | orchestrator | changed: [testbed-node-4]
2025-04-09 08:59:31.549513 | orchestrator | changed: [testbed-manager]
2025-04-09 08:59:31.550405 | orchestrator | changed: [testbed-node-5]
2025-04-09 08:59:31.551802 | orchestrator | changed: [testbed-node-3]
2025-04-09 08:59:31.552940 | orchestrator |
2025-04-09 08:59:31.555092 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-04-09 08:59:31.646923 | orchestrator | Wednesday 09 April 2025 08:59:31 +0000 (0:00:00.576) 0:01:02.058 *******
2025-04-09 08:59:31.646975 | orchestrator | ok: [testbed-manager]
2025-04-09 08:59:31.682199 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:59:31.707308 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:59:31.741464 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:59:31.817515 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:59:31.818326 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:59:31.821245 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:59:31.821397 | orchestrator |
2025-04-09 08:59:31.821420 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-04-09 08:59:31.821440 | orchestrator | Wednesday 09 April 2025 08:59:31 +0000 (0:00:00.275) 0:01:02.334 *******
2025-04-09 08:59:33.191722 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:59:33.191890 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:59:33.191913 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:59:33.191928 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:59:33.191942 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:59:33.191956 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:59:33.191971 | orchestrator | ok: [testbed-manager]
2025-04-09 08:59:33.191990 | orchestrator |
2025-04-09 08:59:33.193720 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-04-09 08:59:33.195132 | orchestrator | Wednesday 09 April 2025 08:59:33 +0000 (0:00:01.368) 0:01:03.702 *******
2025-04-09 08:59:34.799774 | orchestrator | changed: [testbed-node-1]
2025-04-09 08:59:34.801958 | orchestrator | changed: [testbed-node-2]
2025-04-09 08:59:34.802142 | orchestrator | changed: [testbed-node-4]
2025-04-09 08:59:34.802902 | orchestrator | changed: [testbed-node-5]
2025-04-09 08:59:34.803782 | orchestrator | changed: [testbed-node-3]
2025-04-09 08:59:34.804957 | orchestrator | changed: [testbed-manager]
2025-04-09 08:59:34.807255 | orchestrator | ok: [testbed-node-0]
2025-04-09 08:59:34.808239 | orchestrator |
2025-04-09 08:59:34.808796 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-04-09 08:59:34.809731 | orchestrator | Wednesday 09 April 2025 08:59:34 +0000 (0:00:01.607) 0:01:05.310 *******
2025-04-09 08:59:41.135080 | orchestrator | ok: [testbed-node-1]
2025-04-09 08:59:41.135340 | orchestrator | ok: [testbed-node-2]
2025-04-09 08:59:41.136450 | orchestrator | ok: [testbed-node-4]
2025-04-09 08:59:41.136962 | orchestrator | ok: [testbed-manager]
2025-04-09 08:59:41.137846 | orchestrator | ok: [testbed-node-3]
2025-04-09 08:59:41.138403 | orchestrator | ok: [testbed-node-5]
2025-04-09 08:59:41.139390 | orchestrator | changed: [testbed-node-0]
2025-04-09 08:59:41.140201 | orchestrator |
2025-04-09 08:59:41.140584 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-04-09 08:59:41.141301 | orchestrator | Wednesday 09 April 2025 08:59:41 +0000 (0:00:06.340) 0:01:11.650 *******
2025-04-09 09:00:20.767197 | orchestrator | ok: [testbed-manager]
2025-04-09 09:00:20.768559 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:00:20.768595 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:00:20.768608 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:00:20.768628 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:00:20.769396 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:00:20.770063 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:00:20.771098 | orchestrator |
2025-04-09 09:00:20.771692 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-04-09 09:00:20.772425 | orchestrator | Wednesday 09 April 2025 09:00:20 +0000 (0:00:39.625) 0:01:51.276 *******
2025-04-09 09:01:37.270199 | orchestrator | changed: [testbed-manager]
2025-04-09 09:01:37.270636 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:01:37.270746 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:01:37.270782 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:01:37.271130 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:01:37.272356 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:01:37.273166 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:01:37.274518 | orchestrator |
2025-04-09 09:01:37.275414 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-04-09 09:01:37.276184 | orchestrator | Wednesday 09 April 2025 09:01:37 +0000 (0:01:16.506) 0:03:07.782 *******
2025-04-09 09:01:38.996679 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:01:38.997905 | orchestrator | ok: [testbed-manager]
2025-04-09 09:01:38.997946 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:01:38.999480 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:01:39.003403 | orchestrator | ok: [testbed-node-1]
2025-04-09
09:01:39.004377 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:01:39.004405 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:01:39.004425 | orchestrator | 2025-04-09 09:01:39.004446 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-04-09 09:01:39.004971 | orchestrator | Wednesday 09 April 2025 09:01:38 +0000 (0:00:01.729) 0:03:09.511 ******* 2025-04-09 09:01:52.928119 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:01:52.928445 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:01:52.928480 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:01:52.928497 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:01:52.928551 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:01:52.928610 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:01:52.928630 | orchestrator | changed: [testbed-manager] 2025-04-09 09:01:52.929262 | orchestrator | 2025-04-09 09:01:52.930412 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-04-09 09:01:52.931057 | orchestrator | Wednesday 09 April 2025 09:01:52 +0000 (0:00:13.924) 0:03:23.436 ******* 2025-04-09 09:01:53.379038 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-04-09 09:01:53.379978 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 
'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-04-09 09:01:53.380422 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-04-09 09:01:53.381294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-04-09 09:01:53.382951 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-04-09 09:01:53.384674 | orchestrator | 2025-04-09 09:01:53.388107 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-04-09 09:01:53.392761 | orchestrator | Wednesday 09 April 2025 09:01:53 +0000 (0:00:00.455) 0:03:23.892 ******* 2025-04-09 09:01:53.459432 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-09 09:01:53.501487 | orchestrator | skipping: [testbed-manager] 2025-04-09 09:01:53.579850 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'vm.max_map_count', 'value': 262144})  2025-04-09 09:01:54.086124 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:01:54.086652 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-09 09:01:54.087438 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:01:54.088511 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-09 09:01:54.089096 | orchestrator | skipping: [testbed-node-5] 2025-04-09 09:01:54.089776 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-09 09:01:54.090420 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-09 09:01:54.091098 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-09 09:01:54.092037 | orchestrator | 2025-04-09 09:01:54.093103 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-04-09 09:01:54.095154 | orchestrator | Wednesday 09 April 2025 09:01:54 +0000 (0:00:00.709) 0:03:24.601 ******* 2025-04-09 09:01:54.191528 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-09 09:01:54.192358 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-09 09:01:54.192396 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-09 09:01:54.193471 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-09 09:01:54.194178 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-09 09:01:54.195322 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-09 
09:01:54.196210 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-09 09:01:54.196799 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-09 09:01:54.197655 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-09 09:01:54.198482 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-09 09:01:54.222384 | orchestrator | skipping: [testbed-manager] 2025-04-09 09:01:54.282601 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-09 09:01:54.326837 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-09 09:01:54.328062 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-09 09:01:54.328900 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-09 09:01:54.329387 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-09 09:01:54.330148 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-09 09:01:54.330665 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-09 09:01:59.573705 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-09 09:01:59.573889 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-09 09:01:59.574210 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-09 09:01:59.575285 | orchestrator | skipping: [testbed-node-4] => 
(item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-09 09:01:59.575547 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-09 09:01:59.580803 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-09 09:01:59.581411 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-09 09:01:59.581435 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-09 09:01:59.581446 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:01:59.581458 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-09 09:01:59.581468 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-09 09:01:59.581479 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-09 09:01:59.581494 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-09 09:01:59.582783 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-09 09:01:59.583251 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:01:59.584251 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-09 09:01:59.585030 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-09 09:01:59.585835 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-09 09:01:59.585964 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-09 09:01:59.586800 | orchestrator | skipping: [testbed-node-5] 
=> (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-09 09:01:59.587599 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-09 09:01:59.588362 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-09 09:01:59.589444 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-09 09:01:59.590248 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-09 09:01:59.591190 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-09 09:01:59.591612 | orchestrator | skipping: [testbed-node-5] 2025-04-09 09:01:59.592583 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-09 09:01:59.593376 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-09 09:01:59.594470 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-09 09:01:59.594925 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-09 09:01:59.595836 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-09 09:01:59.596147 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-09 09:01:59.596962 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-09 09:01:59.597537 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-09 09:01:59.597946 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-09 
09:01:59.598342 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-09 09:01:59.599250 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-09 09:01:59.599489 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-09 09:01:59.600426 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-09 09:01:59.600866 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-09 09:01:59.601500 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-09 09:01:59.602403 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-09 09:01:59.603047 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-09 09:01:59.603909 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-09 09:01:59.604415 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-09 09:01:59.605320 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-09 09:01:59.606130 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-09 09:01:59.606444 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-09 09:01:59.607668 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-09 09:01:59.608372 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-09 09:01:59.610361 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-09 09:01:59.611081 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-09 09:01:59.611256 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-09 09:01:59.611892 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-09 09:01:59.612494 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-09 09:01:59.613104 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-09 09:01:59.613657 | orchestrator | 2025-04-09 09:01:59.614434 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-04-09 09:01:59.614849 | orchestrator | Wednesday 09 April 2025 09:01:59 +0000 (0:00:05.488) 0:03:30.089 ******* 2025-04-09 09:02:00.222768 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-09 09:02:00.223064 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-09 09:02:00.224346 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-09 09:02:00.224421 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-09 09:02:00.224970 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-09 09:02:00.225834 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-09 09:02:00.226991 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-09 09:02:00.227070 | orchestrator | 2025-04-09 09:02:00.227548 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 
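[Editor's note: the sysctl values the osism.commons.sysctl role applies in the tasks above are equivalent to the following drop-in fragment. The filename is hypothetical; the role actually sets each key via Ansible's sysctl module rather than writing this file, so this is only a reference sketch of the resulting kernel settings.]

```ini
# /etc/sysctl.d/99-osism-testbed.conf (hypothetical filename; values from the log above)
net.ipv4.tcp_keepalive_time = 6
net.ipv4.tcp_keepalive_intvl = 3
net.ipv4.tcp_keepalive_probes = 3
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_tw_reuse = 1
net.core.somaxconn = 4096
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 8192
# applied on all hosts via the 'generic' key
vm.swappiness = 1
```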
2025-04-09 09:02:00.228196 | orchestrator | Wednesday 09 April 2025 09:02:00 +0000 (0:00:00.649) 0:03:30.739 *******
2025-04-09 09:02:00.284002 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-04-09 09:02:00.330465 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:02:00.331191 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-04-09 09:02:00.332195 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-04-09 09:02:00.362736 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:02:00.399345 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-04-09 09:02:00.400175 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:02:00.426451 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:02:00.846347 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-04-09 09:02:00.847254 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-04-09 09:02:00.847288 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-04-09 09:02:00.847647 | orchestrator |
2025-04-09 09:02:00.849283 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-04-09 09:02:00.849459 | orchestrator | Wednesday 09 April 2025 09:02:00 +0000 (0:00:00.618) 0:03:31.358 *******
2025-04-09 09:02:00.905705 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-04-09 09:02:00.947872 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-04-09 09:02:00.948007 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:02:00.948464 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-04-09 09:02:00.975359 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:02:01.011632 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:02:01.041001 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-04-09 09:02:01.041076 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:02:03.476958 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-04-09 09:02:03.477858 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-04-09 09:02:03.478943 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-04-09 09:02:03.480914 | orchestrator |
2025-04-09 09:02:03.481739 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-04-09 09:02:03.482102 | orchestrator | Wednesday 09 April 2025 09:02:03 +0000 (0:00:02.634) 0:03:33.992 *******
2025-04-09 09:02:03.576295 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:02:03.605664 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:02:03.634936 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:02:03.665755 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:02:03.797907 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:02:03.798677 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:02:03.799907 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:02:03.800730 | orchestrator |
2025-04-09 09:02:03.804682 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-04-09 09:02:03.805421 | orchestrator | Wednesday 09 April 2025 09:02:03 +0000 (0:00:00.323) 0:03:34.315 *******
2025-04-09 09:02:09.493600 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:02:09.494311 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:02:09.495056 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:02:09.496487 | orchestrator | ok: [testbed-manager]
2025-04-09 09:02:09.497256 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:02:09.498238 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:02:09.499412 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:02:09.499810 | orchestrator |
2025-04-09 09:02:09.500739 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-04-09 09:02:09.501793 | orchestrator | Wednesday 09 April 2025 09:02:09 +0000 (0:00:05.694) 0:03:40.010 *******
2025-04-09 09:02:09.577067 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-04-09 09:02:09.577172 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-04-09 09:02:09.613781 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:02:09.652723 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:02:09.653503 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-04-09 09:02:09.689860 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-04-09 09:02:09.691146 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:02:09.728591 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:02:09.729551 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-04-09 09:02:09.730369 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-04-09 09:02:09.804007 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:02:09.804758 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:02:09.805721 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-04-09 09:02:09.806758 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:02:09.807087 | orchestrator |
2025-04-09 09:02:09.807895 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-04-09 09:02:09.808708 | orchestrator | Wednesday 09 April 2025 09:02:09 +0000 (0:00:00.310) 0:03:40.320 *******
2025-04-09 09:02:10.870490 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-04-09 09:02:10.871781 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-04-09 09:02:10.872843 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-04-09 09:02:10.874263 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-04-09 09:02:10.875787 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-04-09 09:02:10.876677 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-04-09 09:02:10.877306 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-04-09 09:02:10.878739 | orchestrator |
2025-04-09 09:02:10.878842 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-04-09 09:02:10.879484 | orchestrator | Wednesday 09 April 2025 09:02:10 +0000 (0:00:01.064) 0:03:41.385 *******
2025-04-09 09:02:11.415819 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 09:02:11.416145 | orchestrator |
2025-04-09 09:02:11.417948 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-04-09 09:02:11.419096 | orchestrator | Wednesday 09 April 2025 09:02:11 +0000 (0:00:00.545) 0:03:41.930 *******
2025-04-09 09:02:12.776266 | orchestrator | ok: [testbed-manager]
2025-04-09 09:02:12.776703 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:02:12.778519 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:02:12.779812 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:02:12.781153 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:02:12.782265 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:02:12.783467 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:02:12.784258 | orchestrator |
2025-04-09 09:02:12.785210 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-04-09 09:02:12.786153 | orchestrator | Wednesday 09 April 2025 09:02:12 +0000 (0:00:01.362) 0:03:43.292 *******
2025-04-09 09:02:13.409208 | orchestrator | ok: [testbed-manager]
2025-04-09 09:02:13.410325 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:02:13.411787 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:02:13.413573 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:02:13.413734 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:02:13.414800 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:02:13.415747 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:02:13.416733 | orchestrator |
2025-04-09 09:02:13.417002 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-04-09 09:02:13.417766 | orchestrator | Wednesday 09 April 2025 09:02:13 +0000 (0:00:00.629) 0:03:43.922 *******
2025-04-09 09:02:14.063654 | orchestrator | changed: [testbed-manager]
2025-04-09 09:02:14.064549 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:02:14.064589 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:02:14.065462 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:02:14.065719 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:02:14.068282 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:02:14.068679 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:02:14.068790 | orchestrator |
2025-04-09 09:02:14.069411 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-04-09 09:02:14.070090 | orchestrator | Wednesday 09 April 2025 09:02:14 +0000 (0:00:00.658) 0:03:44.580 *******
2025-04-09 09:02:14.706709 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:02:14.708647 | orchestrator | ok: [testbed-manager]
2025-04-09 09:02:14.710086 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:02:14.710440 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:02:14.711803 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:02:14.712724 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:02:14.712758 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:02:14.713740 | orchestrator |
2025-04-09 09:02:14.714169 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-04-09 09:02:14.715181 | orchestrator | Wednesday 09 April 2025 09:02:14 +0000 (0:00:00.641) 0:03:45.221 *******
2025-04-09 09:02:15.703202 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744187539.4527364, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-09 09:02:15.709730 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744187553.3669186, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-09 09:02:15.710502 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744187544.1481435, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-09 09:02:15.710535 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744187553.2373056, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-09 09:02:15.710554 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744187555.4851158, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-09 09:02:15.710579 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744187556.238169, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-09 09:02:15.711187 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744187548.3648467, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-09 09:02:15.711813 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744187562.575975, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-09 09:02:15.712670 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744187484.782577, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-09 09:02:15.714087 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744187476.932517, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-09 09:02:15.714747 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744187488.3832438, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-09 09:02:15.714779 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744187486.0599496, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-09 09:02:15.715122 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev':
2049, 'nlink': 1, 'atime': 1744187484.7260792, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-09 09:02:15.715550 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744187485.2255268, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-09 09:02:15.716045 | orchestrator | 2025-04-09 09:02:15.716528 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-04-09 09:02:15.717332 | orchestrator | Wednesday 09 April 2025 09:02:15 +0000 (0:00:00.996) 0:03:46.218 ******* 2025-04-09 09:02:16.886555 | orchestrator | changed: [testbed-manager] 2025-04-09 09:02:16.887313 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:02:16.888083 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:02:16.891502 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:02:16.892851 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:02:16.893049 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:02:16.894098 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:02:16.894290 | orchestrator | 2025-04-09 09:02:16.895495 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-04-09 09:02:16.895734 | orchestrator | Wednesday 09 April 2025 09:02:16 +0000 (0:00:01.184) 0:03:47.403 ******* 2025-04-09 09:02:18.092000 | orchestrator 
| changed: [testbed-manager] 2025-04-09 09:02:18.092257 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:02:18.093260 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:02:18.093293 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:02:18.093832 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:02:18.094216 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:02:18.094588 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:02:18.094951 | orchestrator | 2025-04-09 09:02:18.095322 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-04-09 09:02:18.096267 | orchestrator | Wednesday 09 April 2025 09:02:18 +0000 (0:00:01.205) 0:03:48.609 ******* 2025-04-09 09:02:19.292692 | orchestrator | changed: [testbed-manager] 2025-04-09 09:02:19.294153 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:02:19.295619 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:02:19.296428 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:02:19.297371 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:02:19.298503 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:02:19.300269 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:02:19.301440 | orchestrator | 2025-04-09 09:02:19.302265 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-04-09 09:02:19.302976 | orchestrator | Wednesday 09 April 2025 09:02:19 +0000 (0:00:01.199) 0:03:49.808 ******* 2025-04-09 09:02:19.362464 | orchestrator | skipping: [testbed-manager] 2025-04-09 09:02:19.397508 | orchestrator | skipping: [testbed-node-0] 2025-04-09 09:02:19.431822 | orchestrator | skipping: [testbed-node-1] 2025-04-09 09:02:19.486276 | orchestrator | skipping: [testbed-node-2] 2025-04-09 09:02:19.530483 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:02:19.611662 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:02:19.612588 | orchestrator | skipping: 
[testbed-node-5] 2025-04-09 09:02:19.613945 | orchestrator | 2025-04-09 09:02:19.614333 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-04-09 09:02:19.615559 | orchestrator | Wednesday 09 April 2025 09:02:19 +0000 (0:00:00.318) 0:03:50.126 ******* 2025-04-09 09:02:20.399758 | orchestrator | ok: [testbed-manager] 2025-04-09 09:02:20.400497 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:02:20.403094 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:02:20.405481 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:02:20.405509 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:02:20.406271 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:02:20.406299 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:02:20.406318 | orchestrator | 2025-04-09 09:02:20.407170 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-04-09 09:02:20.408202 | orchestrator | Wednesday 09 April 2025 09:02:20 +0000 (0:00:00.787) 0:03:50.914 ******* 2025-04-09 09:02:20.810421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 09:02:20.811201 | orchestrator | 2025-04-09 09:02:20.811825 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-04-09 09:02:20.812537 | orchestrator | Wednesday 09 April 2025 09:02:20 +0000 (0:00:00.413) 0:03:51.328 ******* 2025-04-09 09:02:28.474644 | orchestrator | ok: [testbed-manager] 2025-04-09 09:02:28.474828 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:02:28.477331 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:02:28.478132 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:02:28.479447 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:02:28.481399 | orchestrator | 
changed: [testbed-node-5] 2025-04-09 09:02:28.482258 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:02:28.483710 | orchestrator | 2025-04-09 09:02:28.484212 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-04-09 09:02:28.485425 | orchestrator | Wednesday 09 April 2025 09:02:28 +0000 (0:00:07.660) 0:03:58.988 ******* 2025-04-09 09:02:29.646551 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:02:29.647403 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:02:29.648663 | orchestrator | ok: [testbed-manager] 2025-04-09 09:02:29.650105 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:02:29.650992 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:02:29.652076 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:02:29.653255 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:02:29.654147 | orchestrator | 2025-04-09 09:02:29.654936 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-04-09 09:02:29.655877 | orchestrator | Wednesday 09 April 2025 09:02:29 +0000 (0:00:01.173) 0:04:00.162 ******* 2025-04-09 09:02:30.728138 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:02:30.728375 | orchestrator | ok: [testbed-manager] 2025-04-09 09:02:30.729668 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:02:30.731998 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:02:30.732110 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:02:30.732961 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:02:30.733780 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:02:30.735097 | orchestrator | 2025-04-09 09:02:30.735550 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-04-09 09:02:30.736129 | orchestrator | Wednesday 09 April 2025 09:02:30 +0000 (0:00:01.081) 0:04:01.243 ******* 2025-04-09 09:02:31.315357 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 09:02:31.315999 | orchestrator | 2025-04-09 09:02:31.316042 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-04-09 09:02:31.316949 | orchestrator | Wednesday 09 April 2025 09:02:31 +0000 (0:00:00.589) 0:04:01.832 ******* 2025-04-09 09:02:40.152368 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:02:40.152549 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:02:40.152580 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:02:40.152838 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:02:40.153510 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:02:40.154363 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:02:40.155238 | orchestrator | changed: [testbed-manager] 2025-04-09 09:02:40.156751 | orchestrator | 2025-04-09 09:02:40.788843 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-04-09 09:02:40.788963 | orchestrator | Wednesday 09 April 2025 09:02:40 +0000 (0:00:08.834) 0:04:10.666 ******* 2025-04-09 09:02:40.789002 | orchestrator | changed: [testbed-manager] 2025-04-09 09:02:40.789175 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:02:40.789295 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:02:40.790235 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:02:40.791449 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:02:40.792030 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:02:40.792415 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:02:40.793137 | orchestrator | 2025-04-09 09:02:40.793581 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-04-09 09:02:40.794060 | orchestrator | 
Wednesday 09 April 2025 09:02:40 +0000 (0:00:00.637) 0:04:11.304 ******* 2025-04-09 09:02:41.910458 | orchestrator | changed: [testbed-manager] 2025-04-09 09:02:41.911165 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:02:41.912394 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:02:41.913389 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:02:41.914498 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:02:41.914804 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:02:41.915818 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:02:41.916351 | orchestrator | 2025-04-09 09:02:41.916985 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-04-09 09:02:41.917675 | orchestrator | Wednesday 09 April 2025 09:02:41 +0000 (0:00:01.121) 0:04:12.426 ******* 2025-04-09 09:02:43.016565 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:02:43.017379 | orchestrator | changed: [testbed-manager] 2025-04-09 09:02:43.018931 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:02:43.019955 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:02:43.021045 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:02:43.022257 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:02:43.022679 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:02:43.023578 | orchestrator | 2025-04-09 09:02:43.024646 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-04-09 09:02:43.024918 | orchestrator | Wednesday 09 April 2025 09:02:43 +0000 (0:00:01.105) 0:04:13.531 ******* 2025-04-09 09:02:43.147526 | orchestrator | ok: [testbed-manager] 2025-04-09 09:02:43.187555 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:02:43.228281 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:02:43.267137 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:02:43.336281 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:02:43.337386 | orchestrator | 
ok: [testbed-node-4] 2025-04-09 09:02:43.338150 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:02:43.339057 | orchestrator | 2025-04-09 09:02:43.340520 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-04-09 09:02:43.340939 | orchestrator | Wednesday 09 April 2025 09:02:43 +0000 (0:00:00.321) 0:04:13.853 ******* 2025-04-09 09:02:43.484295 | orchestrator | ok: [testbed-manager] 2025-04-09 09:02:43.524051 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:02:43.577620 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:02:43.611480 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:02:43.704643 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:02:43.705931 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:02:43.706726 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:02:43.707512 | orchestrator | 2025-04-09 09:02:43.708457 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-04-09 09:02:43.709290 | orchestrator | Wednesday 09 April 2025 09:02:43 +0000 (0:00:00.368) 0:04:14.221 ******* 2025-04-09 09:02:43.821203 | orchestrator | ok: [testbed-manager] 2025-04-09 09:02:43.861625 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:02:43.900639 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:02:43.942446 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:02:44.044895 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:02:44.045479 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:02:44.046359 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:02:44.047583 | orchestrator | 2025-04-09 09:02:44.048621 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-04-09 09:02:44.049603 | orchestrator | Wednesday 09 April 2025 09:02:44 +0000 (0:00:00.341) 0:04:14.562 ******* 2025-04-09 09:02:49.816755 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:02:49.816936 | orchestrator | ok: 
[testbed-node-0] 2025-04-09 09:02:49.817973 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:02:49.818623 | orchestrator | ok: [testbed-manager] 2025-04-09 09:02:49.819654 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:02:49.820336 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:02:49.821097 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:02:49.821714 | orchestrator | 2025-04-09 09:02:49.822383 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-04-09 09:02:49.823714 | orchestrator | Wednesday 09 April 2025 09:02:49 +0000 (0:00:05.771) 0:04:20.334 ******* 2025-04-09 09:02:50.249565 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 09:02:50.250196 | orchestrator | 2025-04-09 09:02:50.251298 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-04-09 09:02:50.252536 | orchestrator | Wednesday 09 April 2025 09:02:50 +0000 (0:00:00.431) 0:04:20.766 ******* 2025-04-09 09:02:50.350327 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-04-09 09:02:50.350627 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-04-09 09:02:50.351033 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-04-09 09:02:50.408633 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-04-09 09:02:50.409117 | orchestrator | skipping: [testbed-manager] 2025-04-09 09:02:50.409286 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-04-09 09:02:50.409962 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-04-09 09:02:50.456684 | orchestrator | skipping: [testbed-node-0] 2025-04-09 09:02:50.458817 | orchestrator | skipping: [testbed-node-2] => 
(item=apt-daily-upgrade)  2025-04-09 09:02:50.460516 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-04-09 09:02:50.544193 | orchestrator | skipping: [testbed-node-1] 2025-04-09 09:02:50.544565 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-04-09 09:02:50.545279 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-04-09 09:02:50.582461 | orchestrator | skipping: [testbed-node-2] 2025-04-09 09:02:50.713958 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-04-09 09:02:50.714494 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:02:50.714530 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-04-09 09:02:50.715303 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:02:50.715752 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-04-09 09:02:50.716526 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-04-09 09:02:50.717005 | orchestrator | skipping: [testbed-node-5] 2025-04-09 09:02:50.717473 | orchestrator | 2025-04-09 09:02:50.717925 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-04-09 09:02:50.719385 | orchestrator | Wednesday 09 April 2025 09:02:50 +0000 (0:00:00.465) 0:04:21.231 ******* 2025-04-09 09:02:51.181379 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 09:02:51.181990 | orchestrator | 2025-04-09 09:02:51.182942 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-04-09 09:02:51.183652 | orchestrator | Wednesday 09 April 2025 09:02:51 +0000 (0:00:00.465) 0:04:21.696 ******* 2025-04-09 09:02:51.268841 | orchestrator | skipping: [testbed-manager] => 
(item=ModemManager.service)  2025-04-09 09:02:51.270690 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-04-09 09:02:51.313492 | orchestrator | skipping: [testbed-manager] 2025-04-09 09:02:51.356935 | orchestrator | skipping: [testbed-node-0] 2025-04-09 09:02:51.357585 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-04-09 09:02:51.357625 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-04-09 09:02:51.403654 | orchestrator | skipping: [testbed-node-1] 2025-04-09 09:02:51.404512 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-04-09 09:02:51.443312 | orchestrator | skipping: [testbed-node-2] 2025-04-09 09:02:51.546776 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:02:51.548183 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-04-09 09:02:51.548880 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:02:51.550103 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-04-09 09:02:51.550814 | orchestrator | skipping: [testbed-node-5] 2025-04-09 09:02:51.550996 | orchestrator | 2025-04-09 09:02:51.552260 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-04-09 09:02:51.552619 | orchestrator | Wednesday 09 April 2025 09:02:51 +0000 (0:00:00.366) 0:04:22.063 ******* 2025-04-09 09:02:52.186523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 09:02:52.186763 | orchestrator | 2025-04-09 09:02:52.187346 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-04-09 09:02:52.187838 | orchestrator | Wednesday 09 April 2025 09:02:52 +0000 (0:00:00.639) 0:04:22.702 ******* 
2025-04-09 09:03:26.155691 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:03:26.155970 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:03:26.155999 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:03:26.156013 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:03:26.156025 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:03:26.156038 | orchestrator | changed: [testbed-manager] 2025-04-09 09:03:26.156056 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:03:26.156795 | orchestrator | 2025-04-09 09:03:26.158625 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-04-09 09:03:26.159180 | orchestrator | Wednesday 09 April 2025 09:03:26 +0000 (0:00:33.961) 0:04:56.664 ******* 2025-04-09 09:03:34.159972 | orchestrator | changed: [testbed-manager] 2025-04-09 09:03:34.160672 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:03:34.161305 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:03:34.162357 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:03:34.164047 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:03:34.164832 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:03:34.165555 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:03:34.166105 | orchestrator | 2025-04-09 09:03:34.166732 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-04-09 09:03:34.167181 | orchestrator | Wednesday 09 April 2025 09:03:34 +0000 (0:00:08.009) 0:05:04.674 ******* 2025-04-09 09:03:41.546437 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:03:41.546907 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:03:41.546949 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:03:41.547654 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:03:41.549056 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:03:41.551479 | orchestrator | changed: [testbed-node-4] 2025-04-09 
09:03:41.552306 | orchestrator | changed: [testbed-manager] 2025-04-09 09:03:41.553247 | orchestrator | 2025-04-09 09:03:41.554161 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-04-09 09:03:41.554974 | orchestrator | Wednesday 09 April 2025 09:03:41 +0000 (0:00:07.386) 0:05:12.061 ******* 2025-04-09 09:03:43.150783 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:03:43.152401 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:03:43.154656 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:03:43.155413 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:03:43.158301 | orchestrator | ok: [testbed-manager] 2025-04-09 09:03:43.159226 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:03:43.160175 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:03:43.161359 | orchestrator | 2025-04-09 09:03:43.162397 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-04-09 09:03:43.163406 | orchestrator | Wednesday 09 April 2025 09:03:43 +0000 (0:00:01.578) 0:05:13.640 ******* 2025-04-09 09:03:48.651922 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:03:48.652402 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:03:48.654875 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:03:48.657891 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:03:48.658256 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:03:48.658676 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:03:48.659803 | orchestrator | changed: [testbed-manager] 2025-04-09 09:03:48.659903 | orchestrator | 2025-04-09 09:03:48.660220 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-04-09 09:03:48.660607 | orchestrator | Wednesday 09 April 2025 09:03:48 +0000 (0:00:05.525) 0:05:19.165 ******* 2025-04-09 09:03:49.108567 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 09:03:49.108977 | orchestrator | 2025-04-09 09:03:49.110133 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-04-09 09:03:49.873607 | orchestrator | Wednesday 09 April 2025 09:03:49 +0000 (0:00:00.460) 0:05:19.625 ******* 2025-04-09 09:03:49.873720 | orchestrator | changed: [testbed-manager] 2025-04-09 09:03:49.874136 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:03:49.875083 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:03:49.876096 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:03:49.876852 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:03:49.877535 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:03:49.878156 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:03:49.878678 | orchestrator | 2025-04-09 09:03:49.880103 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-04-09 09:03:49.880547 | orchestrator | Wednesday 09 April 2025 09:03:49 +0000 (0:00:00.762) 0:05:20.387 ******* 2025-04-09 09:03:51.441749 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:03:51.442154 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:03:51.443741 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:03:51.444621 | orchestrator | ok: [testbed-manager] 2025-04-09 09:03:51.447266 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:03:51.448273 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:03:51.448318 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:03:51.449388 | orchestrator | 2025-04-09 09:03:51.450337 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-04-09 09:03:51.451590 | orchestrator | Wednesday 09 April 2025 09:03:51 +0000 (0:00:01.569) 
0:05:21.957 *******
2025-04-09 09:03:53.141763 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:03:53.142356 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:03:53.142406 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:03:53.143043 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:03:53.144769 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:03:53.144833 | orchestrator | changed: [testbed-manager]
2025-04-09 09:03:53.148472 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:03:53.222859 | orchestrator |
2025-04-09 09:03:53.222934 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-04-09 09:03:53.222951 | orchestrator | Wednesday 09 April 2025 09:03:53 +0000 (0:00:01.699) 0:05:23.657 *******
2025-04-09 09:03:53.222976 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:03:53.264361 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:03:53.323285 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:03:53.363331 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:03:53.400036 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:03:53.475340 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:03:53.476529 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:03:53.478635 | orchestrator |
2025-04-09 09:03:53.479434 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-04-09 09:03:53.480647 | orchestrator | Wednesday 09 April 2025 09:03:53 +0000 (0:00:00.333) 0:05:23.991 *******
2025-04-09 09:03:53.556832 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:03:53.594842 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:03:53.649600 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:03:53.699711 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:03:53.741732 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:03:53.985743 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:03:53.986099 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:03:53.987399 | orchestrator |
2025-04-09 09:03:53.987630 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-04-09 09:03:53.988397 | orchestrator | Wednesday 09 April 2025 09:03:53 +0000 (0:00:00.510) 0:05:24.502 *******
2025-04-09 09:03:54.143167 | orchestrator | ok: [testbed-manager]
2025-04-09 09:03:54.198116 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:03:54.238958 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:03:54.278873 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:03:54.369285 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:03:54.369827 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:03:54.370095 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:03:54.371623 | orchestrator |
2025-04-09 09:03:54.372179 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-04-09 09:03:54.372980 | orchestrator | Wednesday 09 April 2025 09:03:54 +0000 (0:00:00.384) 0:05:24.886 *******
2025-04-09 09:03:54.448732 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:03:54.489719 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:03:54.529563 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:03:54.571280 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:03:54.609366 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:03:54.674798 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:03:54.674918 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:03:54.675788 | orchestrator |
2025-04-09 09:03:54.676588 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-04-09 09:03:54.677545 | orchestrator | Wednesday 09 April 2025 09:03:54 +0000 (0:00:00.305) 0:05:25.192 *******
2025-04-09 09:03:54.805527 | orchestrator | ok: [testbed-manager]
2025-04-09 09:03:54.860516 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:03:54.952466 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:03:55.201524 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:03:55.299577 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:03:55.300091 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:03:55.300842 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:03:55.301792 | orchestrator |
2025-04-09 09:03:55.302703 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-04-09 09:03:55.302960 | orchestrator | Wednesday 09 April 2025 09:03:55 +0000 (0:00:00.623) 0:05:25.815 *******
2025-04-09 09:03:55.427961 | orchestrator | ok: [testbed-manager] =>
2025-04-09 09:03:55.428827 | orchestrator |  docker_version: 5:27.5.1
2025-04-09 09:03:55.468105 | orchestrator | ok: [testbed-node-0] =>
2025-04-09 09:03:55.468516 | orchestrator |  docker_version: 5:27.5.1
2025-04-09 09:03:55.511590 | orchestrator | ok: [testbed-node-1] =>
2025-04-09 09:03:55.512687 | orchestrator |  docker_version: 5:27.5.1
2025-04-09 09:03:55.562473 | orchestrator | ok: [testbed-node-2] =>
2025-04-09 09:03:55.562592 | orchestrator |  docker_version: 5:27.5.1
2025-04-09 09:03:55.630111 | orchestrator | ok: [testbed-node-3] =>
2025-04-09 09:03:55.631069 | orchestrator |  docker_version: 5:27.5.1
2025-04-09 09:03:55.631908 | orchestrator | ok: [testbed-node-4] =>
2025-04-09 09:03:55.632883 | orchestrator |  docker_version: 5:27.5.1
2025-04-09 09:03:55.636367 | orchestrator | ok: [testbed-node-5] =>
2025-04-09 09:03:55.636858 | orchestrator |  docker_version: 5:27.5.1
2025-04-09 09:03:55.636883 | orchestrator |
2025-04-09 09:03:55.636901 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-04-09 09:03:55.636921 | orchestrator | Wednesday 09 April 2025 09:03:55 +0000 (0:00:00.332) 0:05:26.148 *******
2025-04-09 09:03:55.731882 | orchestrator | ok: [testbed-manager] =>
2025-04-09 09:03:55.732353 | orchestrator |  docker_cli_version: 5:27.5.1
2025-04-09 09:03:55.770827 | orchestrator | ok: [testbed-node-0] =>
2025-04-09 09:03:55.771061 | orchestrator |  docker_cli_version: 5:27.5.1
2025-04-09 09:03:55.833142 | orchestrator | ok: [testbed-node-1] =>
2025-04-09 09:03:55.833587 | orchestrator |  docker_cli_version: 5:27.5.1
2025-04-09 09:03:55.872805 | orchestrator | ok: [testbed-node-2] =>
2025-04-09 09:03:55.873143 | orchestrator |  docker_cli_version: 5:27.5.1
2025-04-09 09:03:55.914073 | orchestrator | ok: [testbed-node-3] =>
2025-04-09 09:03:55.914471 | orchestrator |  docker_cli_version: 5:27.5.1
2025-04-09 09:03:55.999856 | orchestrator | ok: [testbed-node-4] =>
2025-04-09 09:03:56.001244 | orchestrator |  docker_cli_version: 5:27.5.1
2025-04-09 09:03:56.001989 | orchestrator | ok: [testbed-node-5] =>
2025-04-09 09:03:56.003289 | orchestrator |  docker_cli_version: 5:27.5.1
2025-04-09 09:03:56.004151 | orchestrator |
2025-04-09 09:03:56.004629 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-04-09 09:03:56.005106 | orchestrator | Wednesday 09 April 2025 09:03:55 +0000 (0:00:00.367) 0:05:26.515 *******
2025-04-09 09:03:56.101500 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:03:56.144633 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:03:56.184601 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:03:56.221985 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:03:56.256985 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:03:56.327365 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:03:56.328504 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:03:56.328999 | orchestrator |
2025-04-09 09:03:56.330318 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-04-09 09:03:56.337874 | orchestrator | Wednesday 09 April 2025 09:03:56 +0000 (0:00:00.329) 0:05:26.845 *******
2025-04-09 09:03:56.435761 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:03:56.474739 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:03:56.514598 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:03:56.551614 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:03:56.597812 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:03:56.671188 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:03:56.672796 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:03:56.673423 | orchestrator |
2025-04-09 09:03:56.674644 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-04-09 09:03:56.677295 | orchestrator | Wednesday 09 April 2025 09:03:56 +0000 (0:00:00.342) 0:05:27.188 *******
2025-04-09 09:03:57.164787 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 09:03:57.165248 | orchestrator |
2025-04-09 09:03:57.165296 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-04-09 09:03:57.167075 | orchestrator | Wednesday 09 April 2025 09:03:57 +0000 (0:00:00.493) 0:05:27.681 *******
2025-04-09 09:03:58.005240 | orchestrator | ok: [testbed-manager]
2025-04-09 09:03:58.005427 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:03:58.006106 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:03:58.007795 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:03:58.008430 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:03:58.009794 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:03:58.010767 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:03:58.012470 | orchestrator |
2025-04-09 09:03:58.013511 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-04-09 09:03:58.014479 | orchestrator | Wednesday 09 April 2025 09:03:57 +0000 (0:00:00.839) 0:05:28.521 *******
2025-04-09 09:04:00.874832 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:04:00.875093 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:04:00.875837 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:04:00.876597 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:04:00.877600 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:04:00.877782 | orchestrator | ok: [testbed-manager]
2025-04-09 09:04:00.878829 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:04:00.879312 | orchestrator |
2025-04-09 09:04:00.880009 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-04-09 09:04:00.880677 | orchestrator | Wednesday 09 April 2025 09:04:00 +0000 (0:00:02.868) 0:05:31.389 *******
2025-04-09 09:04:00.961893 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-04-09 09:04:00.963167 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-04-09 09:04:00.964465 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-04-09 09:04:01.263005 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:04:01.263552 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-04-09 09:04:01.265007 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-04-09 09:04:01.266403 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-04-09 09:04:01.343900 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:04:01.344014 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-04-09 09:04:01.430352 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-04-09 09:04:01.430467 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-04-09 09:04:01.431056 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-04-09 09:04:01.431879 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-04-09 09:04:01.432462 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-04-09 09:04:01.535740 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:04:01.536271 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-04-09 09:04:01.538452 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-04-09 09:04:01.617458 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-04-09 09:04:01.618791 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:04:01.620233 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-04-09 09:04:01.621605 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-04-09 09:04:01.622556 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-04-09 09:04:01.769683 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:04:01.770310 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:04:01.772337 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-04-09 09:04:01.773787 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-04-09 09:04:01.774849 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-04-09 09:04:01.776141 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:04:01.777593 | orchestrator |
2025-04-09 09:04:01.778314 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-04-09 09:04:01.779314 | orchestrator | Wednesday 09 April 2025 09:04:01 +0000 (0:00:00.894) 0:05:32.284 *******
2025-04-09 09:04:07.641358 | orchestrator | ok: [testbed-manager]
2025-04-09 09:04:07.642874 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:04:07.642989 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:04:07.643601 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:04:07.644354 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:04:07.645358 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:04:07.645726 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:04:07.646729 | orchestrator |
2025-04-09 09:04:07.647479 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-04-09 09:04:07.648352 | orchestrator | Wednesday 09 April 2025 09:04:07 +0000 (0:00:05.872) 0:05:38.156 *******
2025-04-09 09:04:08.676134 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:04:08.676810 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:04:08.679060 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:04:08.679391 | orchestrator | ok: [testbed-manager]
2025-04-09 09:04:08.680468 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:04:08.681219 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:04:08.681728 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:04:08.682648 | orchestrator |
2025-04-09 09:04:08.683217 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-04-09 09:04:08.683994 | orchestrator | Wednesday 09 April 2025 09:04:08 +0000 (0:00:01.033) 0:05:39.190 *******
2025-04-09 09:04:15.706586 | orchestrator | ok: [testbed-manager]
2025-04-09 09:04:15.707277 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:04:15.707415 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:04:15.709874 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:04:15.711135 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:04:15.711782 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:04:15.712731 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:04:15.713375 | orchestrator |
2025-04-09 09:04:15.714219 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-04-09 09:04:15.714958 | orchestrator | Wednesday 09 April 2025 09:04:15 +0000 (0:00:07.029) 0:05:46.220 *******
2025-04-09 09:04:19.133828 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:04:19.134166 | orchestrator | changed: [testbed-manager]
2025-04-09 09:04:19.134468 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:04:19.135505 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:04:19.135781 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:04:19.137123 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:04:19.140977 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:04:19.142107 | orchestrator |
2025-04-09 09:04:19.142827 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-04-09 09:04:19.143356 | orchestrator | Wednesday 09 April 2025 09:04:19 +0000 (0:00:03.427) 0:05:49.648 *******
2025-04-09 09:04:20.481887 | orchestrator | ok: [testbed-manager]
2025-04-09 09:04:20.482321 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:04:20.482981 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:04:20.486839 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:04:20.487853 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:04:20.488985 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:04:20.489854 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:04:20.490949 | orchestrator |
2025-04-09 09:04:20.491307 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-04-09 09:04:20.492072 | orchestrator | Wednesday 09 April 2025 09:04:20 +0000 (0:00:01.347) 0:05:50.995 *******
2025-04-09 09:04:21.827593 | orchestrator | ok: [testbed-manager]
2025-04-09 09:04:21.828011 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:04:21.829154 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:04:21.830233 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:04:21.831035 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:04:21.832361 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:04:21.832958 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:04:21.833645 | orchestrator |
2025-04-09 09:04:21.834571 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-04-09 09:04:21.834883 | orchestrator | Wednesday 09 April 2025 09:04:21 +0000 (0:00:01.344) 0:05:52.339 *******
2025-04-09 09:04:22.053609 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:04:22.135303 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:04:22.210140 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:04:22.292393 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:04:23.529270 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:04:23.529756 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:04:23.531291 | orchestrator | changed: [testbed-manager]
2025-04-09 09:04:23.532041 | orchestrator |
2025-04-09 09:04:23.535232 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-04-09 09:04:23.535662 | orchestrator | Wednesday 09 April 2025 09:04:23 +0000 (0:00:01.704) 0:05:54.044 *******
2025-04-09 09:04:33.692997 | orchestrator | ok: [testbed-manager]
2025-04-09 09:04:33.693179 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:04:33.693265 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:04:33.695860 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:04:33.696235 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:04:33.697677 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:04:33.699426 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:04:33.699743 | orchestrator |
2025-04-09 09:04:33.700850 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-04-09 09:04:33.701032 | orchestrator | Wednesday 09 April 2025 09:04:33 +0000 (0:00:10.161) 0:06:04.205 *******
2025-04-09 09:04:34.388351 | orchestrator | changed: [testbed-manager]
2025-04-09 09:04:34.965639 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:04:34.967134 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:04:34.967218 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:04:34.967258 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:04:34.968214 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:04:34.968602 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:04:34.969453 | orchestrator |
2025-04-09 09:04:34.969767 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-04-09 09:04:34.970268 | orchestrator | Wednesday 09 April 2025 09:04:34 +0000 (0:00:01.274) 0:06:05.479 *******
2025-04-09 09:04:43.985956 | orchestrator | ok: [testbed-manager]
2025-04-09 09:04:43.987073 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:04:43.988236 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:04:43.991147 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:04:43.991241 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:04:43.992388 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:04:43.993639 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:04:43.994354 | orchestrator |
2025-04-09 09:04:43.994922 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-04-09 09:04:43.995772 | orchestrator | Wednesday 09 April 2025 09:04:43 +0000 (0:00:09.022) 0:06:14.501 *******
2025-04-09 09:04:54.426139 | orchestrator | ok: [testbed-manager]
2025-04-09 09:04:54.426524 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:04:54.426560 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:04:54.426583 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:04:54.430357 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:04:54.431300 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:04:54.432144 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:04:54.433234 | orchestrator |
2025-04-09 09:04:54.433778 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-04-09 09:04:54.434824 | orchestrator | Wednesday 09 April 2025 09:04:54 +0000 (0:00:10.435) 0:06:24.937 *******
2025-04-09 09:04:54.908370 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-04-09 09:04:55.703288 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-04-09 09:04:55.703444 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-04-09 09:04:55.709089 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-04-09 09:04:55.709324 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-04-09 09:04:55.709350 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-04-09 09:04:55.709364 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-04-09 09:04:55.709379 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-04-09 09:04:55.709397 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-04-09 09:04:55.710402 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-04-09 09:04:55.711341 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-04-09 09:04:55.712544 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-04-09 09:04:55.713518 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-04-09 09:04:55.714340 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-04-09 09:04:55.715023 | orchestrator |
2025-04-09 09:04:55.715671 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-04-09 09:04:55.716371 | orchestrator | Wednesday 09 April 2025 09:04:55 +0000 (0:00:01.280) 0:06:26.217 *******
2025-04-09 09:04:55.837891 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:04:55.902929 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:04:55.973078 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:04:56.035700 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:04:56.100915 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:04:56.212396 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:04:56.212885 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:04:56.213355 | orchestrator |
2025-04-09 09:04:56.214450 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-04-09 09:04:56.220170 | orchestrator | Wednesday 09 April 2025 09:04:56 +0000 (0:00:00.512) 0:06:26.729 *******
2025-04-09 09:05:00.017103 | orchestrator | ok: [testbed-manager]
2025-04-09 09:05:00.017950 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:05:00.018930 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:05:00.019827 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:05:00.022535 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:05:00.023820 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:05:00.024044 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:05:00.024418 | orchestrator |
2025-04-09 09:05:00.025211 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-04-09 09:05:00.025380 | orchestrator | Wednesday 09 April 2025 09:05:00 +0000 (0:00:03.799) 0:06:30.529 *******
2025-04-09 09:05:00.172384 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:05:00.250427 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:05:00.322344 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:05:00.398331 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:05:00.471955 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:05:00.588376 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:05:00.588514 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:05:00.589826 | orchestrator |
2025-04-09 09:05:00.590777 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-04-09 09:05:00.591448 | orchestrator | Wednesday 09 April 2025 09:05:00 +0000 (0:00:00.573) 0:06:31.103 *******
2025-04-09 09:05:00.669284 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-04-09 09:05:00.669721 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-04-09 09:05:00.743807 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:05:00.745588 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-04-09 09:05:00.745675 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-04-09 09:05:00.819238 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:05:00.819620 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-04-09 09:05:00.820882 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-04-09 09:05:00.929361 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:05:00.931280 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-04-09 09:05:00.933334 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-04-09 09:05:01.000515 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:05:01.002380 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-04-09 09:05:01.003010 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-04-09 09:05:01.096868 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:05:01.097324 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-04-09 09:05:01.098874 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-04-09 09:05:01.257583 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:05:01.260829 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-04-09 09:05:01.262091 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-04-09 09:05:01.265819 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:05:01.266903 | orchestrator |
2025-04-09 09:05:01.268770 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-04-09 09:05:01.269601 | orchestrator | Wednesday 09 April 2025 09:05:01 +0000 (0:00:00.668) 0:06:31.771 *******
2025-04-09 09:05:01.413690 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:05:01.487654 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:05:01.554592 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:05:01.625284 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:05:01.700678 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:05:01.805049 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:05:01.806513 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:05:01.807406 | orchestrator |
2025-04-09 09:05:01.808949 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-04-09 09:05:01.810111 | orchestrator | Wednesday 09 April 2025 09:05:01 +0000 (0:00:00.547) 0:06:32.319 *******
2025-04-09 09:05:01.949906 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:05:02.047168 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:05:02.140969 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:05:02.218244 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:05:02.302521 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:05:02.409246 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:05:02.410356 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:05:02.412392 | orchestrator |
2025-04-09 09:05:02.413807 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-04-09 09:05:02.415818 | orchestrator | Wednesday 09 April 2025 09:05:02 +0000 (0:00:00.604) 0:06:32.923 *******
2025-04-09 09:05:02.767067 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:05:02.835635 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:05:02.900993 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:05:02.976244 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:05:03.044916 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:05:03.180353 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:05:03.182074 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:05:03.183706 | orchestrator |
2025-04-09 09:05:03.184916 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-04-09 09:05:03.186762 | orchestrator | Wednesday 09 April 2025 09:05:03 +0000 (0:00:00.771) 0:06:33.695 *******
2025-04-09 09:05:04.893920 | orchestrator | ok: [testbed-manager]
2025-04-09 09:05:04.894960 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:05:04.896402 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:05:04.897788 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:05:04.899304 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:05:04.899967 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:05:04.901345 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:05:04.901780 | orchestrator |
2025-04-09 09:05:04.902899 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-04-09 09:05:04.903781 | orchestrator | Wednesday 09 April 2025 09:05:04 +0000 (0:00:01.713) 0:06:35.408 *******
2025-04-09 09:05:05.806316 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 09:05:05.807418 | orchestrator |
2025-04-09 09:05:05.808855 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-04-09 09:05:05.810448 | orchestrator | Wednesday 09 April 2025 09:05:05 +0000 (0:00:00.911) 0:06:36.319 *******
2025-04-09 09:05:06.460325 | orchestrator | ok: [testbed-manager]
2025-04-09 09:05:06.885500 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:05:06.886243 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:05:06.887338 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:05:06.888696 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:05:06.889590 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:05:06.890940 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:05:06.891856 | orchestrator |
2025-04-09 09:05:06.893003 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-04-09 09:05:06.893988 | orchestrator | Wednesday 09 April 2025 09:05:06 +0000 (0:00:01.081) 0:06:37.401 *******
2025-04-09 09:05:07.309791 | orchestrator | ok: [testbed-manager]
2025-04-09 09:05:07.734084 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:05:07.734377 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:05:07.735073 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:05:07.736211 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:05:07.736934 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:05:07.737788 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:05:07.738247 | orchestrator |
2025-04-09 09:05:07.739044 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-04-09 09:05:07.739975 | orchestrator | Wednesday 09 April 2025 09:05:07 +0000 (0:00:00.847) 0:06:38.248 *******
2025-04-09 09:05:09.076671 | orchestrator | ok: [testbed-manager]
2025-04-09 09:05:09.077225 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:05:09.077273 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:05:09.077734 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:05:09.078449 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:05:09.079147 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:05:09.079744 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:05:09.080793 | orchestrator |
2025-04-09 09:05:09.081478 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-04-09 09:05:09.082228 | orchestrator | Wednesday 09 April 2025 09:05:09 +0000 (0:00:01.340) 0:06:39.589 *******
2025-04-09 09:05:09.217345 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:05:10.485252 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:05:10.485944 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:05:10.486837 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:05:10.487660 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:05:10.488826 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:05:10.490235 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:05:10.490565 | orchestrator |
2025-04-09 09:05:10.491968 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-04-09 09:05:10.492645 | orchestrator | Wednesday 09 April 2025 09:05:10 +0000 (0:00:01.409) 0:06:40.999 *******
2025-04-09 09:05:11.847520 | orchestrator | ok: [testbed-manager]
2025-04-09 09:05:11.851324 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:05:11.851631 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:05:11.851663 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:05:11.851684 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:05:11.853149 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:05:11.853894 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:05:11.855020 | orchestrator |
2025-04-09 09:05:11.855675 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-04-09 09:05:11.858332 | orchestrator | Wednesday 09 April 2025 09:05:11 +0000 (0:00:01.360) 0:06:42.360 *******
2025-04-09 09:05:13.483613 | orchestrator | changed: [testbed-manager]
2025-04-09 09:05:13.487397 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:05:13.488879 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:05:13.489767 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:05:13.491217 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:05:13.491465 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:05:13.492676 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:05:13.493816 | orchestrator |
2025-04-09 09:05:13.495010 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-04-09 09:05:13.495859 | orchestrator | Wednesday 09 April 2025 09:05:13 +0000 (0:00:01.636) 0:06:43.997 *******
2025-04-09 09:05:14.380920 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 09:05:14.381159 | orchestrator |
2025-04-09 09:05:14.382267 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-04-09 09:05:14.383234 | orchestrator | Wednesday 09 April 2025 09:05:14 +0000 (0:00:00.900) 0:06:44.898 *******
2025-04-09 09:05:15.864576 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:05:15.864782 | orchestrator | ok: [testbed-manager]
2025-04-09 09:05:15.865901 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:05:15.867725 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:05:15.868300 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:05:15.874083 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:05:17.053766 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:05:17.053866 | orchestrator |
2025-04-09 09:05:17.053885 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-04-09 09:05:17.053927 | orchestrator | Wednesday 09 April 2025 09:05:15 +0000 (0:00:01.480) 0:06:46.378 *******
2025-04-09 09:05:17.053957 | orchestrator | ok: [testbed-manager]
2025-04-09 09:05:17.054985 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:05:17.056898 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:05:17.060335 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:05:17.060530 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:05:17.060558 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:05:17.060603 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:05:17.061861 | orchestrator |
2025-04-09 09:05:17.062673 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-04-09 09:05:17.064005 | orchestrator | Wednesday 09 April 2025 09:05:17 +0000 (0:00:01.187) 0:06:47.566 *******
2025-04-09 09:05:18.480481 | orchestrator | ok: [testbed-manager]
2025-04-09 09:05:18.481790 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:05:18.483330 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:05:18.483854 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:05:18.484771 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:05:18.485909 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:05:18.486632 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:05:18.487382 | orchestrator |
2025-04-09 09:05:18.488944 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-04-09 09:05:18.489810 | orchestrator | Wednesday 09 April 2025 09:05:18 +0000 (0:00:01.426) 0:06:48.992 *******
2025-04-09 09:05:19.645510 | orchestrator | ok: [testbed-manager]
2025-04-09 09:05:19.647411 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:05:19.649017 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:05:19.649050 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:05:19.649380 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:05:19.649963 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:05:19.650692 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:05:19.651526 | orchestrator |
2025-04-09 09:05:19.651717 | orchestrator | TASK [osism.services.docker : Include bootstrap
tasks] ************************* 2025-04-09 09:05:19.652361 | orchestrator | Wednesday 09 April 2025 09:05:19 +0000 (0:00:01.167) 0:06:50.160 ******* 2025-04-09 09:05:21.058936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 09:05:21.059720 | orchestrator | 2025-04-09 09:05:21.059840 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-09 09:05:21.059916 | orchestrator | Wednesday 09 April 2025 09:05:20 +0000 (0:00:00.925) 0:06:51.085 ******* 2025-04-09 09:05:21.061750 | orchestrator | 2025-04-09 09:05:21.061818 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-09 09:05:21.062227 | orchestrator | Wednesday 09 April 2025 09:05:20 +0000 (0:00:00.038) 0:06:51.123 ******* 2025-04-09 09:05:21.062669 | orchestrator | 2025-04-09 09:05:21.063069 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-09 09:05:21.063690 | orchestrator | Wednesday 09 April 2025 09:05:20 +0000 (0:00:00.043) 0:06:51.167 ******* 2025-04-09 09:05:21.064346 | orchestrator | 2025-04-09 09:05:21.064731 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-09 09:05:21.065377 | orchestrator | Wednesday 09 April 2025 09:05:20 +0000 (0:00:00.038) 0:06:51.206 ******* 2025-04-09 09:05:21.065591 | orchestrator | 2025-04-09 09:05:21.065841 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-09 09:05:21.066241 | orchestrator | Wednesday 09 April 2025 09:05:20 +0000 (0:00:00.037) 0:06:51.244 ******* 2025-04-09 09:05:21.066956 | orchestrator | 2025-04-09 09:05:21.067038 | orchestrator | TASK [osism.services.docker : Flush handlers] 
********************************** 2025-04-09 09:05:21.067444 | orchestrator | Wednesday 09 April 2025 09:05:20 +0000 (0:00:00.247) 0:06:51.491 ******* 2025-04-09 09:05:21.067805 | orchestrator | 2025-04-09 09:05:21.068146 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-09 09:05:21.068336 | orchestrator | Wednesday 09 April 2025 09:05:21 +0000 (0:00:00.040) 0:06:51.531 ******* 2025-04-09 09:05:21.069294 | orchestrator | 2025-04-09 09:05:21.069557 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-09 09:05:21.069693 | orchestrator | Wednesday 09 April 2025 09:05:21 +0000 (0:00:00.042) 0:06:51.573 ******* 2025-04-09 09:05:22.269402 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:05:22.270210 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:05:22.271001 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:05:22.277238 | orchestrator | 2025-04-09 09:05:23.772476 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-04-09 09:05:23.772581 | orchestrator | Wednesday 09 April 2025 09:05:22 +0000 (0:00:01.209) 0:06:52.783 ******* 2025-04-09 09:05:23.772614 | orchestrator | changed: [testbed-manager] 2025-04-09 09:05:23.773081 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:05:23.773997 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:05:23.775737 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:05:23.776753 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:05:23.777742 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:05:23.778414 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:05:23.781313 | orchestrator | 2025-04-09 09:05:23.782307 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-04-09 09:05:23.783131 | orchestrator | Wednesday 09 April 2025 09:05:23 +0000 (0:00:01.505) 0:06:54.288 ******* 2025-04-09 
09:05:25.058071 | orchestrator | changed: [testbed-manager] 2025-04-09 09:05:25.058852 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:05:25.060624 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:05:25.061437 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:05:25.061470 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:05:25.062229 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:05:25.062960 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:05:25.063784 | orchestrator | 2025-04-09 09:05:25.064575 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-04-09 09:05:25.066058 | orchestrator | Wednesday 09 April 2025 09:05:25 +0000 (0:00:01.282) 0:06:55.571 ******* 2025-04-09 09:05:25.208106 | orchestrator | skipping: [testbed-manager] 2025-04-09 09:05:27.330415 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:05:27.331224 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:05:27.333492 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:05:27.334558 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:05:27.335997 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:05:27.336767 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:05:27.337421 | orchestrator | 2025-04-09 09:05:27.337880 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-04-09 09:05:27.338667 | orchestrator | Wednesday 09 April 2025 09:05:27 +0000 (0:00:02.273) 0:06:57.845 ******* 2025-04-09 09:05:27.440915 | orchestrator | skipping: [testbed-node-0] 2025-04-09 09:05:27.441129 | orchestrator | 2025-04-09 09:05:27.442007 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-04-09 09:05:27.442577 | orchestrator | Wednesday 09 April 2025 09:05:27 +0000 (0:00:00.110) 0:06:57.956 ******* 2025-04-09 09:05:28.692644 | orchestrator | ok: [testbed-manager] 2025-04-09 09:05:28.693327 | 
orchestrator | changed: [testbed-node-1] 2025-04-09 09:05:28.693360 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:05:28.693384 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:05:28.693584 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:05:28.694242 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:05:28.694432 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:05:28.694913 | orchestrator | 2025-04-09 09:05:28.695389 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-04-09 09:05:28.695526 | orchestrator | Wednesday 09 April 2025 09:05:28 +0000 (0:00:01.251) 0:06:59.208 ******* 2025-04-09 09:05:28.855329 | orchestrator | skipping: [testbed-manager] 2025-04-09 09:05:28.932289 | orchestrator | skipping: [testbed-node-0] 2025-04-09 09:05:29.027846 | orchestrator | skipping: [testbed-node-1] 2025-04-09 09:05:29.098578 | orchestrator | skipping: [testbed-node-2] 2025-04-09 09:05:29.169431 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:05:29.309410 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:05:29.310097 | orchestrator | skipping: [testbed-node-5] 2025-04-09 09:05:29.310579 | orchestrator | 2025-04-09 09:05:29.311462 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-04-09 09:05:29.311941 | orchestrator | Wednesday 09 April 2025 09:05:29 +0000 (0:00:00.619) 0:06:59.827 ******* 2025-04-09 09:05:30.362624 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 09:05:30.363409 | orchestrator | 2025-04-09 09:05:30.364369 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-04-09 09:05:30.367150 | orchestrator | Wednesday 09 April 2025 09:05:30 +0000 (0:00:01.049) 
0:07:00.877 ******* 2025-04-09 09:05:30.824886 | orchestrator | ok: [testbed-manager] 2025-04-09 09:05:31.259431 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:05:31.260628 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:05:31.260652 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:05:31.260733 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:05:31.261626 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:05:31.262359 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:05:31.263012 | orchestrator | 2025-04-09 09:05:31.263685 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-04-09 09:05:31.264440 | orchestrator | Wednesday 09 April 2025 09:05:31 +0000 (0:00:00.897) 0:07:01.774 ******* 2025-04-09 09:05:34.024706 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-04-09 09:05:34.031404 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-04-09 09:05:34.031449 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-04-09 09:05:34.031769 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-04-09 09:05:34.031799 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-04-09 09:05:34.032069 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-04-09 09:05:34.032879 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-04-09 09:05:34.034404 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-04-09 09:05:34.034721 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-04-09 09:05:34.035470 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-04-09 09:05:34.035946 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-04-09 09:05:34.036390 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-04-09 09:05:34.039361 | orchestrator | changed: [testbed-node-5] => 
(item=docker_images) 2025-04-09 09:05:34.039988 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-04-09 09:05:34.040083 | orchestrator | 2025-04-09 09:05:34.040637 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-04-09 09:05:34.043774 | orchestrator | Wednesday 09 April 2025 09:05:34 +0000 (0:00:02.763) 0:07:04.538 ******* 2025-04-09 09:05:34.170403 | orchestrator | skipping: [testbed-manager] 2025-04-09 09:05:34.239564 | orchestrator | skipping: [testbed-node-0] 2025-04-09 09:05:34.315312 | orchestrator | skipping: [testbed-node-1] 2025-04-09 09:05:34.381380 | orchestrator | skipping: [testbed-node-2] 2025-04-09 09:05:34.450429 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:05:34.548668 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:05:34.549356 | orchestrator | skipping: [testbed-node-5] 2025-04-09 09:05:34.551105 | orchestrator | 2025-04-09 09:05:34.555240 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-04-09 09:05:35.367096 | orchestrator | Wednesday 09 April 2025 09:05:34 +0000 (0:00:00.528) 0:07:05.067 ******* 2025-04-09 09:05:35.367254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 09:05:35.370741 | orchestrator | 2025-04-09 09:05:35.370781 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-04-09 09:05:35.371602 | orchestrator | Wednesday 09 April 2025 09:05:35 +0000 (0:00:00.813) 0:07:05.880 ******* 2025-04-09 09:05:35.811470 | orchestrator | ok: [testbed-manager] 2025-04-09 09:05:36.453536 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:05:36.453682 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:05:36.454996 | 
orchestrator | ok: [testbed-node-2] 2025-04-09 09:05:36.455876 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:05:36.456702 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:05:36.457799 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:05:36.458997 | orchestrator | 2025-04-09 09:05:36.460387 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-04-09 09:05:36.460785 | orchestrator | Wednesday 09 April 2025 09:05:36 +0000 (0:00:01.089) 0:07:06.969 ******* 2025-04-09 09:05:36.893093 | orchestrator | ok: [testbed-manager] 2025-04-09 09:05:37.300552 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:05:37.301776 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:05:37.303922 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:05:37.305354 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:05:37.306731 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:05:37.306773 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:05:37.306994 | orchestrator | 2025-04-09 09:05:37.307452 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-04-09 09:05:37.308145 | orchestrator | Wednesday 09 April 2025 09:05:37 +0000 (0:00:00.844) 0:07:07.814 ******* 2025-04-09 09:05:37.445962 | orchestrator | skipping: [testbed-manager] 2025-04-09 09:05:37.515366 | orchestrator | skipping: [testbed-node-0] 2025-04-09 09:05:37.581311 | orchestrator | skipping: [testbed-node-1] 2025-04-09 09:05:37.655134 | orchestrator | skipping: [testbed-node-2] 2025-04-09 09:05:37.724326 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:05:37.839646 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:05:37.841789 | orchestrator | skipping: [testbed-node-5] 2025-04-09 09:05:37.842909 | orchestrator | 2025-04-09 09:05:37.843528 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-04-09 09:05:37.844316 | orchestrator | Wednesday 09 April 2025 
09:05:37 +0000 (0:00:00.540) 0:07:08.354 ******* 2025-04-09 09:05:39.265880 | orchestrator | ok: [testbed-manager] 2025-04-09 09:05:39.266298 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:05:39.268293 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:05:39.268494 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:05:39.272744 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:05:39.274343 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:05:39.274664 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:05:39.275736 | orchestrator | 2025-04-09 09:05:39.276361 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-04-09 09:05:39.277442 | orchestrator | Wednesday 09 April 2025 09:05:39 +0000 (0:00:01.425) 0:07:09.780 ******* 2025-04-09 09:05:39.427568 | orchestrator | skipping: [testbed-manager] 2025-04-09 09:05:39.507672 | orchestrator | skipping: [testbed-node-0] 2025-04-09 09:05:39.580564 | orchestrator | skipping: [testbed-node-1] 2025-04-09 09:05:39.649615 | orchestrator | skipping: [testbed-node-2] 2025-04-09 09:05:39.968870 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:05:40.085521 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:05:40.086701 | orchestrator | skipping: [testbed-node-5] 2025-04-09 09:05:40.087883 | orchestrator | 2025-04-09 09:05:40.088797 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-04-09 09:05:40.090225 | orchestrator | Wednesday 09 April 2025 09:05:40 +0000 (0:00:00.817) 0:07:10.598 ******* 2025-04-09 09:05:47.331123 | orchestrator | ok: [testbed-manager] 2025-04-09 09:05:47.333112 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:05:47.334620 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:05:47.334686 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:05:47.335734 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:05:47.336723 | orchestrator | changed: [testbed-node-5] 
2025-04-09 09:05:47.338437 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:05:47.338670 | orchestrator | 2025-04-09 09:05:47.339693 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-04-09 09:05:47.340747 | orchestrator | Wednesday 09 April 2025 09:05:47 +0000 (0:00:07.244) 0:07:17.842 ******* 2025-04-09 09:05:48.668297 | orchestrator | ok: [testbed-manager] 2025-04-09 09:05:48.668600 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:05:48.668640 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:05:48.669058 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:05:48.669770 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:05:48.670523 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:05:48.671021 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:05:48.671146 | orchestrator | 2025-04-09 09:05:48.671863 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-04-09 09:05:48.672094 | orchestrator | Wednesday 09 April 2025 09:05:48 +0000 (0:00:01.342) 0:07:19.184 ******* 2025-04-09 09:05:50.375848 | orchestrator | ok: [testbed-manager] 2025-04-09 09:05:50.377340 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:05:50.377798 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:05:50.380329 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:05:50.383990 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:05:50.384859 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:05:50.386075 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:05:50.386939 | orchestrator | 2025-04-09 09:05:50.388022 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-04-09 09:05:50.389034 | orchestrator | Wednesday 09 April 2025 09:05:50 +0000 (0:00:01.705) 0:07:20.890 ******* 2025-04-09 09:05:52.277002 | orchestrator | ok: [testbed-manager] 2025-04-09 09:05:52.278336 | 
orchestrator | changed: [testbed-node-0] 2025-04-09 09:05:52.278465 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:05:52.279021 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:05:52.280266 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:05:52.280965 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:05:52.281993 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:05:52.284223 | orchestrator | 2025-04-09 09:05:52.284404 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-04-09 09:05:52.285616 | orchestrator | Wednesday 09 April 2025 09:05:52 +0000 (0:00:01.899) 0:07:22.790 ******* 2025-04-09 09:05:53.164247 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:05:53.164438 | orchestrator | ok: [testbed-manager] 2025-04-09 09:05:53.165259 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:05:53.166316 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:05:53.166569 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:05:53.166931 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:05:53.167451 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:05:53.167878 | orchestrator | 2025-04-09 09:05:53.167909 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-04-09 09:05:53.168128 | orchestrator | Wednesday 09 April 2025 09:05:53 +0000 (0:00:00.887) 0:07:23.677 ******* 2025-04-09 09:05:53.340312 | orchestrator | skipping: [testbed-manager] 2025-04-09 09:05:53.411736 | orchestrator | skipping: [testbed-node-0] 2025-04-09 09:05:53.480801 | orchestrator | skipping: [testbed-node-1] 2025-04-09 09:05:53.556593 | orchestrator | skipping: [testbed-node-2] 2025-04-09 09:05:53.635530 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:05:54.104337 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:05:54.104562 | orchestrator | skipping: [testbed-node-5] 2025-04-09 09:05:54.104589 | orchestrator | 2025-04-09 09:05:54.104613 | 
orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-04-09 09:05:54.105039 | orchestrator | Wednesday 09 April 2025 09:05:54 +0000 (0:00:00.942) 0:07:24.620 ******* 2025-04-09 09:05:54.249418 | orchestrator | skipping: [testbed-manager] 2025-04-09 09:05:54.339859 | orchestrator | skipping: [testbed-node-0] 2025-04-09 09:05:54.428785 | orchestrator | skipping: [testbed-node-1] 2025-04-09 09:05:54.498682 | orchestrator | skipping: [testbed-node-2] 2025-04-09 09:05:54.579083 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:05:54.685135 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:05:54.686806 | orchestrator | skipping: [testbed-node-5] 2025-04-09 09:05:54.691237 | orchestrator | 2025-04-09 09:05:54.691268 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-04-09 09:05:54.692498 | orchestrator | Wednesday 09 April 2025 09:05:54 +0000 (0:00:00.580) 0:07:25.200 ******* 2025-04-09 09:05:55.046233 | orchestrator | ok: [testbed-manager] 2025-04-09 09:05:55.117987 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:05:55.190118 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:05:55.267451 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:05:55.335373 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:05:55.449446 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:05:55.450882 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:05:55.454272 | orchestrator | 2025-04-09 09:05:55.454697 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-04-09 09:05:55.454749 | orchestrator | Wednesday 09 April 2025 09:05:55 +0000 (0:00:00.762) 0:07:25.963 ******* 2025-04-09 09:05:55.605644 | orchestrator | ok: [testbed-manager] 2025-04-09 09:05:55.682146 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:05:55.755956 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:05:55.824000 | orchestrator | ok: 
[testbed-node-2] 2025-04-09 09:05:55.892641 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:05:56.021506 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:05:56.022384 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:05:56.023721 | orchestrator | 2025-04-09 09:05:56.028282 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-04-09 09:05:56.168572 | orchestrator | Wednesday 09 April 2025 09:05:56 +0000 (0:00:00.574) 0:07:26.538 ******* 2025-04-09 09:05:56.168643 | orchestrator | ok: [testbed-manager] 2025-04-09 09:05:56.270073 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:05:56.337768 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:05:56.407230 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:05:56.482424 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:05:56.600769 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:05:56.602711 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:05:56.604048 | orchestrator | 2025-04-09 09:05:56.605105 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-04-09 09:05:56.606212 | orchestrator | Wednesday 09 April 2025 09:05:56 +0000 (0:00:00.575) 0:07:27.114 ******* 2025-04-09 09:06:02.320802 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:06:02.321302 | orchestrator | ok: [testbed-manager] 2025-04-09 09:06:02.321607 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:06:02.323372 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:06:02.324244 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:06:02.324975 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:06:02.325743 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:06:02.326499 | orchestrator | 2025-04-09 09:06:02.327953 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-04-09 09:06:02.329065 | orchestrator | Wednesday 09 April 2025 09:06:02 +0000 (0:00:05.721) 0:07:32.835 ******* 2025-04-09 
09:06:02.485798 | orchestrator | skipping: [testbed-manager] 2025-04-09 09:06:02.555687 | orchestrator | skipping: [testbed-node-0] 2025-04-09 09:06:02.635230 | orchestrator | skipping: [testbed-node-1] 2025-04-09 09:06:02.972475 | orchestrator | skipping: [testbed-node-2] 2025-04-09 09:06:03.202726 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:06:03.203522 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:06:03.204791 | orchestrator | skipping: [testbed-node-5] 2025-04-09 09:06:03.206465 | orchestrator | 2025-04-09 09:06:03.209070 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-04-09 09:06:03.210219 | orchestrator | Wednesday 09 April 2025 09:06:03 +0000 (0:00:00.880) 0:07:33.715 ******* 2025-04-09 09:06:04.086541 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 09:06:04.087503 | orchestrator | 2025-04-09 09:06:04.088940 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-04-09 09:06:04.090239 | orchestrator | Wednesday 09 April 2025 09:06:04 +0000 (0:00:00.886) 0:07:34.602 ******* 2025-04-09 09:06:05.886405 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:06:05.888398 | orchestrator | ok: [testbed-manager] 2025-04-09 09:06:05.888436 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:06:05.889465 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:06:05.891395 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:06:05.892477 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:06:05.893794 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:06:05.894052 | orchestrator | 2025-04-09 09:06:05.895208 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-04-09 09:06:05.896100 | 
orchestrator | Wednesday 09 April 2025 09:06:05 +0000 (0:00:01.797) 0:07:36.400 ******* 2025-04-09 09:06:07.057353 | orchestrator | ok: [testbed-manager] 2025-04-09 09:06:07.059326 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:06:07.063834 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:06:07.065594 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:06:07.068060 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:06:07.068089 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:06:07.070156 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:06:07.070829 | orchestrator | 2025-04-09 09:06:07.071491 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-04-09 09:06:07.074379 | orchestrator | Wednesday 09 April 2025 09:06:07 +0000 (0:00:01.163) 0:07:37.564 ******* 2025-04-09 09:06:07.539309 | orchestrator | ok: [testbed-manager] 2025-04-09 09:06:07.612728 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:06:08.197745 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:06:08.197909 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:06:08.197931 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:06:08.197950 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:06:08.198312 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:06:08.199657 | orchestrator | 2025-04-09 09:06:08.200304 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-04-09 09:06:08.200332 | orchestrator | Wednesday 09 April 2025 09:06:08 +0000 (0:00:01.148) 0:07:38.713 ******* 2025-04-09 09:06:09.914685 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-09 09:06:09.914847 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-09 09:06:09.915719 | orchestrator | changed: 
[testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-09 09:06:09.916466 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-09 09:06:09.917470 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-09 09:06:09.919498 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-09 09:06:09.919654 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-09 09:06:09.919683 | orchestrator | 2025-04-09 09:06:09.920431 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-04-09 09:06:09.920920 | orchestrator | Wednesday 09 April 2025 09:06:09 +0000 (0:00:01.717) 0:07:40.430 ******* 2025-04-09 09:06:10.941295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 09:06:10.941824 | orchestrator | 2025-04-09 09:06:10.942691 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-04-09 09:06:10.943580 | orchestrator | Wednesday 09 April 2025 09:06:10 +0000 (0:00:01.025) 0:07:41.455 ******* 2025-04-09 09:06:19.916024 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:06:19.917855 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:06:19.917896 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:06:19.918157 | orchestrator | changed: [testbed-node-1] 
2025-04-09 09:06:19.918730 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:06:19.919587 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:06:19.919938 | orchestrator | changed: [testbed-manager]
2025-04-09 09:06:19.920697 | orchestrator |
2025-04-09 09:06:19.921114 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-04-09 09:06:19.924487 | orchestrator | Wednesday 09 April 2025 09:06:19 +0000 (0:00:08.971) 0:07:50.427 *******
2025-04-09 09:06:21.832296 | orchestrator | ok: [testbed-manager]
2025-04-09 09:06:21.833722 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:06:21.834755 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:06:21.836583 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:06:21.838331 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:06:21.839068 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:06:21.839829 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:06:21.840987 | orchestrator |
2025-04-09 09:06:21.844413 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-04-09 09:06:23.441059 | orchestrator | Wednesday 09 April 2025 09:06:21 +0000 (0:00:01.917) 0:07:52.344 *******
2025-04-09 09:06:23.441230 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:06:23.450497 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:06:23.451241 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:06:23.458185 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:06:23.459149 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:06:23.459199 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:06:23.462461 | orchestrator |
2025-04-09 09:06:23.462490 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-04-09 09:06:23.462888 | orchestrator | Wednesday 09 April 2025 09:06:23 +0000 (0:00:01.606) 0:07:53.951 *******
2025-04-09 09:06:24.761392 | orchestrator | changed: [testbed-manager]
2025-04-09 09:06:24.761560 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:06:24.765050 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:06:24.766727 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:06:24.767085 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:06:24.771698 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:06:24.772498 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:06:24.773195 | orchestrator |
2025-04-09 09:06:24.774220 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-04-09 09:06:24.774997 | orchestrator |
2025-04-09 09:06:24.776144 | orchestrator | TASK [Include hardening role] **************************************************
2025-04-09 09:06:24.778412 | orchestrator | Wednesday 09 April 2025 09:06:24 +0000 (0:00:01.324) 0:07:55.276 *******
2025-04-09 09:06:24.906155 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:06:24.971446 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:06:25.045029 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:06:25.111809 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:06:25.178280 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:06:25.299803 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:06:25.300727 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:06:25.302128 | orchestrator |
2025-04-09 09:06:25.305189 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-04-09 09:06:25.306738 | orchestrator |
2025-04-09 09:06:25.306775 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-04-09 09:06:25.308268 | orchestrator | Wednesday 09 April 2025 09:06:25 +0000 (0:00:00.539) 0:07:55.815 *******
2025-04-09 09:06:26.669624 | orchestrator | changed: [testbed-manager]
2025-04-09 09:06:26.670572 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:06:26.672048 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:06:26.672625 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:06:26.673535 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:06:26.674301 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:06:26.675970 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:06:26.676799 | orchestrator |
2025-04-09 09:06:26.679024 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-04-09 09:06:26.679675 | orchestrator | Wednesday 09 April 2025 09:06:26 +0000 (0:00:01.367) 0:07:57.183 *******
2025-04-09 09:06:28.480016 | orchestrator | ok: [testbed-manager]
2025-04-09 09:06:28.480294 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:06:28.480821 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:06:28.481593 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:06:28.482605 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:06:28.483336 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:06:28.484365 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:06:28.485451 | orchestrator |
2025-04-09 09:06:28.485605 | orchestrator | TASK [Include auditd role] *****************************************************
2025-04-09 09:06:28.486806 | orchestrator | Wednesday 09 April 2025 09:06:28 +0000 (0:00:01.810) 0:07:58.994 *******
2025-04-09 09:06:28.627592 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:06:28.703040 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:06:28.769691 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:06:28.892781 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:06:29.004752 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:06:29.438119 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:06:29.438333 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:06:29.439228 | orchestrator |
2025-04-09 09:06:29.439857 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-04-09 09:06:29.440854 | orchestrator | Wednesday 09 April 2025 09:06:29 +0000 (0:00:00.959) 0:07:59.953 *******
2025-04-09 09:06:30.754613 | orchestrator | changed: [testbed-manager]
2025-04-09 09:06:30.755931 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:06:30.755965 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:06:30.756340 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:06:30.757397 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:06:30.761385 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:06:30.762745 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:06:30.762768 | orchestrator |
2025-04-09 09:06:30.762788 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-04-09 09:06:30.764323 | orchestrator |
2025-04-09 09:06:30.765312 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-04-09 09:06:30.766119 | orchestrator | Wednesday 09 April 2025 09:06:30 +0000 (0:00:01.317) 0:08:01.270 *******
2025-04-09 09:06:31.837046 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 09:06:31.838713 | orchestrator |
2025-04-09 09:06:32.730413 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-04-09 09:06:32.730522 | orchestrator | Wednesday 09 April 2025 09:06:31 +0000 (0:00:01.081) 0:08:02.351 *******
2025-04-09 09:06:32.730554 | orchestrator | ok: [testbed-manager]
2025-04-09 09:06:32.730791 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:06:32.732367 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:06:32.733348 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:06:32.734589 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:06:32.734975 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:06:32.735470 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:06:32.735967 | orchestrator |
2025-04-09 09:06:32.737054 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-04-09 09:06:32.737247 | orchestrator | Wednesday 09 April 2025 09:06:32 +0000 (0:00:00.893) 0:08:03.244 *******
2025-04-09 09:06:33.969470 | orchestrator | changed: [testbed-manager]
2025-04-09 09:06:33.973478 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:06:33.973549 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:06:33.974670 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:06:33.975686 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:06:33.977340 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:06:33.978101 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:06:33.979443 | orchestrator |
2025-04-09 09:06:33.980524 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-04-09 09:06:33.981344 | orchestrator | Wednesday 09 April 2025 09:06:33 +0000 (0:00:01.235) 0:08:04.479 *******
2025-04-09 09:06:35.009946 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 09:06:35.011118 | orchestrator |
2025-04-09 09:06:35.012299 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-04-09 09:06:35.012925 | orchestrator | Wednesday 09 April 2025 09:06:35 +0000 (0:00:01.045) 0:08:05.525 *******
2025-04-09 09:06:35.449517 | orchestrator | ok: [testbed-manager]
2025-04-09 09:06:35.908263 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:06:35.910211 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:06:35.911287 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:06:35.911933 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:06:35.912967 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:06:35.913801 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:06:35.914863 | orchestrator |
2025-04-09 09:06:35.915546 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-04-09 09:06:35.916213 | orchestrator | Wednesday 09 April 2025 09:06:35 +0000 (0:00:00.899) 0:08:06.424 *******
2025-04-09 09:06:37.049675 | orchestrator | changed: [testbed-manager]
2025-04-09 09:06:37.049834 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:06:37.050309 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:06:37.050928 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:06:37.051506 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:06:37.053436 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:06:37.054437 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:06:37.055105 | orchestrator |
2025-04-09 09:06:37.056306 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 09:06:37.056676 | orchestrator | 2025-04-09 09:06:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 09:06:37.057391 | orchestrator | 2025-04-09 09:06:37 | INFO  | Please wait and do not abort execution.
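The "Write state into file" tasks above persist the osism.bootstrap.status and osism.bootstrap.timestamp values as Ansible local facts. As a sketch only, assuming the conventional /etc/ansible/facts.d layout (the file name and key layout below are illustrative, not read from this log), the result would resemble:

```ini
# /etc/ansible/facts.d/osism.fact -- hypothetical layout of the state file
# Ansible exposes this on the next run as ansible_local['osism']['bootstrap']
[bootstrap]
status = ok
timestamp = 2025-04-09T09:06:35Z
```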
2025-04-09 09:06:37.058131 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-04-09 09:06:37.058573 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-04-09 09:06:37.059702 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-09 09:06:37.060041 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-09 09:06:37.060850 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-09 09:06:37.061332 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-09 09:06:37.062206 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-09 09:06:37.062569 | orchestrator |
2025-04-09 09:06:37.063120 | orchestrator |
2025-04-09 09:06:37.064013 | orchestrator | TASKS RECAP ********************************************************************
2025-04-09 09:06:37.064538 | orchestrator | Wednesday 09 April 2025 09:06:37 +0000 (0:00:01.141) 0:08:07.566 *******
2025-04-09 09:06:37.064935 | orchestrator | ===============================================================================
2025-04-09 09:06:37.065653 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.51s
2025-04-09 09:06:37.066119 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.63s
2025-04-09 09:06:37.066938 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.96s
2025-04-09 09:06:37.067321 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.92s
2025-04-09 09:06:37.068149 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.57s
2025-04-09 09:06:37.068649 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.46s
2025-04-09 09:06:37.069651 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.44s
2025-04-09 09:06:37.070386 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.16s
2025-04-09 09:06:37.070954 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.02s
2025-04-09 09:06:37.072812 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.97s
2025-04-09 09:06:37.073493 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.83s
2025-04-09 09:06:37.074320 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.01s
2025-04-09 09:06:37.075579 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.66s
2025-04-09 09:06:37.076256 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.39s
2025-04-09 09:06:37.077731 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.24s
2025-04-09 09:06:37.078146 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.03s
2025-04-09 09:06:37.078911 | orchestrator | osism.commons.packages : Upgrade packages ------------------------------- 6.34s
2025-04-09 09:06:37.079340 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.87s
2025-04-09 09:06:37.079789 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.77s
2025-04-09 09:06:37.080505 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.72s
2025-04-09 09:06:38.053719 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-04-09 09:06:40.418534 | orchestrator | + osism apply network
2025-04-09 09:06:40.418636 | orchestrator | 2025-04-09 09:06:40 | INFO  | Task 57daf1b4-306c-4d41-b4b8-02291d88ff53 (network) was prepared for execution.
2025-04-09 09:06:44.894867 | orchestrator | 2025-04-09 09:06:40 | INFO  | It takes a moment until task 57daf1b4-306c-4d41-b4b8-02291d88ff53 (network) has been started and output is visible here.
2025-04-09 09:06:44.895012 | orchestrator |
2025-04-09 09:06:44.896472 | orchestrator | PLAY [Apply role network] ******************************************************
2025-04-09 09:06:44.896625 | orchestrator |
2025-04-09 09:06:44.897733 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-04-09 09:06:44.898276 | orchestrator | Wednesday 09 April 2025 09:06:44 +0000 (0:00:00.284) 0:00:00.284 *******
2025-04-09 09:06:45.052646 | orchestrator | ok: [testbed-manager]
2025-04-09 09:06:45.137406 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:06:45.216637 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:06:45.294965 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:06:45.519772 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:06:45.669225 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:06:45.669353 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:06:45.669397 | orchestrator |
2025-04-09 09:06:45.670183 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-04-09 09:06:45.671235 | orchestrator | Wednesday 09 April 2025 09:06:45 +0000 (0:00:00.770) 0:00:01.054 *******
2025-04-09 09:06:46.926348 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 09:06:46.928361 | orchestrator |
2025-04-09 09:06:46.928988 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-04-09 09:06:46.929891 | orchestrator | Wednesday 09 April 2025 09:06:46 +0000 (0:00:01.259) 0:00:02.314 *******
2025-04-09 09:06:48.889507 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:06:48.891000 | orchestrator | ok: [testbed-manager]
2025-04-09 09:06:48.891049 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:06:48.892364 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:06:48.893979 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:06:48.894638 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:06:48.896070 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:06:48.897260 | orchestrator |
2025-04-09 09:06:48.898424 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-04-09 09:06:48.899197 | orchestrator | Wednesday 09 April 2025 09:06:48 +0000 (0:00:01.964) 0:00:04.278 *******
2025-04-09 09:06:50.654443 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:06:50.655017 | orchestrator | ok: [testbed-manager]
2025-04-09 09:06:50.656192 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:06:50.657033 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:06:50.677588 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:06:50.679047 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:06:50.679071 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:06:50.679501 | orchestrator |
2025-04-09 09:06:50.681907 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-04-09 09:06:51.238523 | orchestrator | Wednesday 09 April 2025 09:06:50 +0000 (0:00:01.760) 0:00:06.038 *******
2025-04-09 09:06:51.238640 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-04-09 09:06:51.238709 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-04-09 09:06:51.238729 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-04-09 09:06:51.716198 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-04-09 09:06:51.717541 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-04-09 09:06:51.718572 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-04-09 09:06:51.721494 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-04-09 09:06:55.887665 | orchestrator |
2025-04-09 09:06:55.887785 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-04-09 09:06:55.887802 | orchestrator | Wednesday 09 April 2025 09:06:51 +0000 (0:00:01.068) 0:00:07.107 *******
2025-04-09 09:06:55.887830 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-04-09 09:06:55.889262 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-04-09 09:06:55.890246 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-04-09 09:06:55.891518 | orchestrator | ok: [testbed-manager -> localhost]
2025-04-09 09:06:55.893395 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-04-09 09:06:55.894517 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-04-09 09:06:55.895583 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-04-09 09:06:55.896439 | orchestrator |
2025-04-09 09:06:55.897320 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-04-09 09:06:55.898037 | orchestrator | Wednesday 09 April 2025 09:06:55 +0000 (0:00:04.170) 0:00:11.278 *******
2025-04-09 09:06:57.735678 | orchestrator | changed: [testbed-manager]
2025-04-09 09:06:57.736856 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:06:57.738799 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:06:57.738923 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:06:57.740281 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:06:57.740748 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:06:57.741662 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:06:57.742512 | orchestrator |
2025-04-09 09:06:57.746263 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-04-09 09:06:59.635730 | orchestrator | Wednesday 09 April 2025 09:06:57 +0000 (0:00:01.845) 0:00:13.123 *******
2025-04-09 09:06:59.635859 | orchestrator | ok: [testbed-manager -> localhost]
2025-04-09 09:06:59.636453 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-04-09 09:06:59.637824 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-04-09 09:06:59.640348 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-04-09 09:06:59.641112 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-04-09 09:06:59.641144 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-04-09 09:06:59.642102 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-04-09 09:06:59.642830 | orchestrator |
2025-04-09 09:06:59.643638 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-04-09 09:06:59.644255 | orchestrator | Wednesday 09 April 2025 09:06:59 +0000 (0:00:01.903) 0:00:15.026 *******
2025-04-09 09:07:00.093833 | orchestrator | ok: [testbed-manager]
2025-04-09 09:07:00.445511 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:07:00.893339 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:07:00.894422 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:07:00.896384 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:07:00.899881 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:07:00.901155 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:07:00.902264 | orchestrator |
2025-04-09 09:07:00.903256 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-04-09 09:07:00.904343 | orchestrator | Wednesday 09 April 2025 09:07:00 +0000 (0:00:01.251) 0:00:16.278 *******
2025-04-09 09:07:01.116717 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:07:01.208233 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:07:01.296065 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:07:01.385976 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:07:01.469659 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:07:01.627654 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:07:01.628670 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:07:01.630109 | orchestrator |
2025-04-09 09:07:01.631641 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-04-09 09:07:01.634390 | orchestrator | Wednesday 09 April 2025 09:07:01 +0000 (0:00:00.736) 0:00:17.014 *******
2025-04-09 09:07:03.884897 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:07:03.885449 | orchestrator | ok: [testbed-manager]
2025-04-09 09:07:03.886142 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:07:03.886564 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:07:03.888950 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:07:03.890154 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:07:03.890804 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:07:03.891968 | orchestrator |
2025-04-09 09:07:03.892878 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-04-09 09:07:03.893990 | orchestrator | Wednesday 09 April 2025 09:07:03 +0000 (0:00:02.255) 0:00:19.269 *******
2025-04-09 09:07:04.149026 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:07:04.234936 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:07:04.320457 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:07:04.406739 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:07:04.825090 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:07:04.826116 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:07:04.826938 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-04-09 09:07:04.827970 | orchestrator |
2025-04-09 09:07:04.832117 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-04-09 09:07:06.616694 | orchestrator | Wednesday 09 April 2025 09:07:04 +0000 (0:00:00.948) 0:00:20.218 *******
2025-04-09 09:07:06.616811 | orchestrator | ok: [testbed-manager]
2025-04-09 09:07:06.620746 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:07:06.622288 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:07:06.624197 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:07:06.624958 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:07:06.626509 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:07:06.627633 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:07:06.629201 | orchestrator |
2025-04-09 09:07:06.630702 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-04-09 09:07:06.631825 | orchestrator | Wednesday 09 April 2025 09:07:06 +0000 (0:00:01.782) 0:00:22.001 *******
2025-04-09 09:07:07.975366 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 09:07:07.976612 | orchestrator |
2025-04-09 09:07:07.980076 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-04-09 09:07:07.981451 | orchestrator | Wednesday 09 April 2025 09:07:07 +0000 (0:00:01.360) 0:00:23.361 *******
2025-04-09 09:07:08.875263 | orchestrator | ok: [testbed-manager]
2025-04-09 09:07:09.341413 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:07:09.344345 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:07:09.346188 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:07:09.347662 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:07:09.349243 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:07:09.350437 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:07:09.352226 | orchestrator |
2025-04-09 09:07:09.352650 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-04-09 09:07:09.353762 | orchestrator | Wednesday 09 April 2025 09:07:09 +0000 (0:00:01.366) 0:00:24.728 *******
2025-04-09 09:07:09.521631 | orchestrator | ok: [testbed-manager]
2025-04-09 09:07:09.608636 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:07:09.699434 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:07:09.784680 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:07:09.889984 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:07:10.038496 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:07:10.039051 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:07:10.039977 | orchestrator |
2025-04-09 09:07:10.040861 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-04-09 09:07:10.044299 | orchestrator | Wednesday 09 April 2025 09:07:10 +0000 (0:00:00.702) 0:00:25.430 *******
2025-04-09 09:07:10.791395 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-04-09 09:07:10.801360 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-04-09 09:07:10.803298 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-04-09 09:07:10.803330 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-04-09 09:07:10.804734 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-04-09 09:07:10.805531 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-04-09 09:07:10.808817 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-04-09 09:07:10.809452 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-04-09 09:07:10.809978 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-04-09 09:07:10.901359 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-04-09 09:07:11.385915 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-04-09 09:07:11.387494 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-04-09 09:07:11.388011 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-04-09 09:07:11.389589 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-04-09 09:07:11.390811 | orchestrator |
2025-04-09 09:07:11.392021 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-04-09 09:07:11.392804 | orchestrator | Wednesday 09 April 2025 09:07:11 +0000 (0:00:01.341) 0:00:26.771 *******
2025-04-09 09:07:11.571820 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:07:11.667926 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:07:11.768762 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:07:11.858847 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:07:11.943630 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:07:12.083575 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:07:12.084601 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:07:12.086324 | orchestrator |
2025-04-09 09:07:12.087733 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-04-09 09:07:12.088712 | orchestrator | Wednesday 09 April 2025 09:07:12 +0000 (0:00:00.701) 0:00:27.473 *******
2025-04-09 09:07:15.974353 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-0, testbed-node-3, testbed-node-4, testbed-node-2, testbed-node-5
2025-04-09 09:07:15.974996 | orchestrator |
2025-04-09 09:07:15.976862 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-04-09 09:07:15.979895 | orchestrator | Wednesday 09 April 2025 09:07:15 +0000 (0:00:03.886) 0:00:31.360 *******
2025-04-09 09:07:21.070078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-04-09 09:07:21.072062 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-04-09 09:07:21.073847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-04-09 09:07:21.073888 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-04-09 09:07:21.076002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-04-09 09:07:21.076995 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-04-09 09:07:21.078141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-04-09 09:07:21.079068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-04-09 09:07:21.079684 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-04-09 09:07:21.080530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-04-09 09:07:21.081317 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-04-09 09:07:21.082207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-04-09 09:07:21.082939 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-04-09 09:07:21.083619 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-04-09 09:07:21.084265 | orchestrator |
2025-04-09 09:07:21.084766 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-04-09 09:07:21.085506 | orchestrator | Wednesday 09 April 2025 09:07:21 +0000 (0:00:05.100) 0:00:36.460 *******
2025-04-09 09:07:27.002619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-04-09 09:07:27.003848 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-04-09 09:07:27.005985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-04-09 09:07:27.009229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-04-09 09:07:27.009731 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-04-09 09:07:27.010805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-04-09 09:07:27.010835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-04-09 09:07:27.011799 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-04-09 09:07:27.011828 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-04-09 09:07:27.012575 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-04-09 09:07:27.013006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-04-09
09:07:27.013035 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-04-09 09:07:27.013856 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-04-09 09:07:27.014180 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-04-09 09:07:27.014870 | orchestrator | 2025-04-09 09:07:27.015684 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-04-09 09:07:27.016016 | orchestrator | Wednesday 09 April 2025 09:07:26 +0000 (0:00:05.929) 0:00:42.390 ******* 2025-04-09 09:07:28.375949 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 09:07:28.379217 | orchestrator | 2025-04-09 09:07:28.850449 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-04-09 09:07:28.850526 | orchestrator | Wednesday 09 April 2025 09:07:28 +0000 (0:00:01.372) 0:00:43.762 ******* 2025-04-09 09:07:28.850554 | orchestrator | ok: [testbed-manager] 2025-04-09 09:07:28.939684 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:07:29.390880 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:07:29.391420 | 
orchestrator | ok: [testbed-node-2] 2025-04-09 09:07:29.392546 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:07:29.397032 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:07:29.397695 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:07:29.397731 | orchestrator | 2025-04-09 09:07:29.398100 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-04-09 09:07:29.398908 | orchestrator | Wednesday 09 April 2025 09:07:29 +0000 (0:00:01.019) 0:00:44.782 ******* 2025-04-09 09:07:29.494363 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-04-09 09:07:29.495051 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-04-09 09:07:29.496078 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-04-09 09:07:29.496407 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-04-09 09:07:29.596995 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-04-09 09:07:29.597894 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-04-09 09:07:29.599028 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-04-09 09:07:29.600124 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-04-09 09:07:29.700800 | orchestrator | skipping: [testbed-manager] 2025-04-09 09:07:29.701770 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-04-09 09:07:29.702643 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-04-09 09:07:29.703014 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-04-09 09:07:29.703734 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.netdev)  2025-04-09 09:07:30.025946 | orchestrator | skipping: [testbed-node-0] 2025-04-09 09:07:30.027446 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-04-09 09:07:30.031621 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-04-09 09:07:30.129809 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-04-09 09:07:30.129860 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-04-09 09:07:30.129883 | orchestrator | skipping: [testbed-node-1] 2025-04-09 09:07:30.130245 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-04-09 09:07:30.130846 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-04-09 09:07:30.131538 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-04-09 09:07:30.257507 | orchestrator | skipping: [testbed-node-2] 2025-04-09 09:07:30.259366 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-04-09 09:07:30.260384 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-04-09 09:07:30.264267 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-04-09 09:07:30.266797 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-04-09 09:07:30.268402 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-04-09 09:07:31.609612 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:07:31.609793 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:07:31.611706 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-04-09 09:07:31.612275 | 
orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-04-09 09:07:31.614106 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-04-09 09:07:31.615012 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-04-09 09:07:31.616146 | orchestrator | skipping: [testbed-node-5] 2025-04-09 09:07:31.616824 | orchestrator | 2025-04-09 09:07:31.617678 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-04-09 09:07:31.618860 | orchestrator | Wednesday 09 April 2025 09:07:31 +0000 (0:00:02.214) 0:00:46.996 ******* 2025-04-09 09:07:31.775373 | orchestrator | skipping: [testbed-manager] 2025-04-09 09:07:31.863254 | orchestrator | skipping: [testbed-node-0] 2025-04-09 09:07:31.950692 | orchestrator | skipping: [testbed-node-1] 2025-04-09 09:07:32.037455 | orchestrator | skipping: [testbed-node-2] 2025-04-09 09:07:32.126953 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:07:32.471488 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:07:32.472763 | orchestrator | skipping: [testbed-node-5] 2025-04-09 09:07:32.473670 | orchestrator | 2025-04-09 09:07:32.474572 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-04-09 09:07:32.475432 | orchestrator | Wednesday 09 April 2025 09:07:32 +0000 (0:00:00.860) 0:00:47.856 ******* 2025-04-09 09:07:32.657867 | orchestrator | skipping: [testbed-manager] 2025-04-09 09:07:32.743353 | orchestrator | skipping: [testbed-node-0] 2025-04-09 09:07:32.836403 | orchestrator | skipping: [testbed-node-1] 2025-04-09 09:07:32.931701 | orchestrator | skipping: [testbed-node-2] 2025-04-09 09:07:33.017470 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:07:33.063925 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:07:33.064416 | orchestrator | skipping: [testbed-node-5] 2025-04-09 09:07:33.065710 | 
orchestrator | 2025-04-09 09:07:33.066802 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 09:07:33.067011 | orchestrator | 2025-04-09 09:07:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-09 09:07:33.067148 | orchestrator | 2025-04-09 09:07:33 | INFO  | Please wait and do not abort execution. 2025-04-09 09:07:33.068767 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-09 09:07:33.069626 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-09 09:07:33.070347 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-09 09:07:33.071187 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-09 09:07:33.071673 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-09 09:07:33.072634 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-09 09:07:33.073304 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-09 09:07:33.074070 | orchestrator | 2025-04-09 09:07:33.074856 | orchestrator | 2025-04-09 09:07:33.075245 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-09 09:07:33.075778 | orchestrator | Wednesday 09 April 2025 09:07:33 +0000 (0:00:00.600) 0:00:48.456 ******* 2025-04-09 09:07:33.076798 | orchestrator | =============================================================================== 2025-04-09 09:07:33.077120 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.93s 2025-04-09 09:07:33.077459 | orchestrator | osism.commons.network : Create 
systemd networkd netdev files ------------ 5.10s 2025-04-09 09:07:33.077915 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 4.17s 2025-04-09 09:07:33.078532 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.89s 2025-04-09 09:07:33.079191 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.26s 2025-04-09 09:07:33.079836 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.21s 2025-04-09 09:07:33.080870 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.96s 2025-04-09 09:07:33.081652 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.90s 2025-04-09 09:07:33.083721 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.85s 2025-04-09 09:07:33.084547 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.78s 2025-04-09 09:07:33.085448 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.76s 2025-04-09 09:07:33.086455 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.37s 2025-04-09 09:07:33.087097 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.37s 2025-04-09 09:07:33.087479 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.36s 2025-04-09 09:07:33.088109 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.34s 2025-04-09 09:07:33.088377 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.26s 2025-04-09 09:07:33.088772 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.25s 2025-04-09 09:07:33.089630 | orchestrator | osism.commons.network : Create required 
directories --------------------- 1.07s 2025-04-09 09:07:33.090318 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.02s 2025-04-09 09:07:33.090872 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.95s 2025-04-09 09:07:33.883082 | orchestrator | + osism apply wireguard 2025-04-09 09:07:35.582597 | orchestrator | 2025-04-09 09:07:35 | INFO  | Task 1769c423-d0f9-435d-84d1-30a74f9d3031 (wireguard) was prepared for execution. 2025-04-09 09:07:39.718946 | orchestrator | 2025-04-09 09:07:35 | INFO  | It takes a moment until task 1769c423-d0f9-435d-84d1-30a74f9d3031 (wireguard) has been started and output is visible here. 2025-04-09 09:07:39.719070 | orchestrator | 2025-04-09 09:07:39.720356 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-04-09 09:07:39.720775 | orchestrator | 2025-04-09 09:07:39.721655 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-04-09 09:07:39.722551 | orchestrator | Wednesday 09 April 2025 09:07:39 +0000 (0:00:00.226) 0:00:00.226 ******* 2025-04-09 09:07:41.371848 | orchestrator | ok: [testbed-manager] 2025-04-09 09:07:41.372009 | orchestrator | 2025-04-09 09:07:41.372036 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-04-09 09:07:41.372609 | orchestrator | Wednesday 09 April 2025 09:07:41 +0000 (0:00:01.657) 0:00:01.884 ******* 2025-04-09 09:07:48.446474 | orchestrator | changed: [testbed-manager] 2025-04-09 09:07:48.446915 | orchestrator | 2025-04-09 09:07:48.447853 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-04-09 09:07:48.450794 | orchestrator | Wednesday 09 April 2025 09:07:48 +0000 (0:00:07.074) 0:00:08.959 ******* 2025-04-09 09:07:49.021048 | orchestrator | changed: [testbed-manager] 2025-04-09 09:07:49.021844 | orchestrator | 2025-04-09 
09:07:49.023549 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-04-09 09:07:49.023928 | orchestrator | Wednesday 09 April 2025 09:07:49 +0000 (0:00:00.576) 0:00:09.536 ******* 2025-04-09 09:07:49.449835 | orchestrator | changed: [testbed-manager] 2025-04-09 09:07:49.451802 | orchestrator | 2025-04-09 09:07:49.451846 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-04-09 09:07:50.140492 | orchestrator | Wednesday 09 April 2025 09:07:49 +0000 (0:00:00.427) 0:00:09.963 ******* 2025-04-09 09:07:50.140621 | orchestrator | ok: [testbed-manager] 2025-04-09 09:07:50.141773 | orchestrator | 2025-04-09 09:07:50.141809 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-04-09 09:07:50.142693 | orchestrator | Wednesday 09 April 2025 09:07:50 +0000 (0:00:00.691) 0:00:10.654 ******* 2025-04-09 09:07:50.561553 | orchestrator | ok: [testbed-manager] 2025-04-09 09:07:50.562575 | orchestrator | 2025-04-09 09:07:50.564055 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-04-09 09:07:50.564592 | orchestrator | Wednesday 09 April 2025 09:07:50 +0000 (0:00:00.418) 0:00:11.073 ******* 2025-04-09 09:07:50.996767 | orchestrator | ok: [testbed-manager] 2025-04-09 09:07:50.997697 | orchestrator | 2025-04-09 09:07:50.998466 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-04-09 09:07:50.999233 | orchestrator | Wednesday 09 April 2025 09:07:50 +0000 (0:00:00.438) 0:00:11.511 ******* 2025-04-09 09:07:52.213625 | orchestrator | changed: [testbed-manager] 2025-04-09 09:07:52.213798 | orchestrator | 2025-04-09 09:07:52.213828 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-04-09 09:07:52.214136 | orchestrator | Wednesday 09 April 2025 09:07:52 +0000 (0:00:01.216) 0:00:12.727 
******* 2025-04-09 09:07:53.259933 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-09 09:07:53.262658 | orchestrator | changed: [testbed-manager] 2025-04-09 09:07:53.263320 | orchestrator | 2025-04-09 09:07:53.264265 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-04-09 09:07:53.265373 | orchestrator | Wednesday 09 April 2025 09:07:53 +0000 (0:00:01.044) 0:00:13.772 ******* 2025-04-09 09:07:55.008597 | orchestrator | changed: [testbed-manager] 2025-04-09 09:07:55.009182 | orchestrator | 2025-04-09 09:07:55.009522 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-04-09 09:07:55.010733 | orchestrator | Wednesday 09 April 2025 09:07:55 +0000 (0:00:01.748) 0:00:15.521 ******* 2025-04-09 09:07:55.954454 | orchestrator | changed: [testbed-manager] 2025-04-09 09:07:55.954988 | orchestrator | 2025-04-09 09:07:55.955710 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 09:07:55.956244 | orchestrator | 2025-04-09 09:07:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-09 09:07:55.956700 | orchestrator | 2025-04-09 09:07:55 | INFO  | Please wait and do not abort execution. 
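The wireguard play above installs the package, generates a server key pair and a preshared key, templates wg0.conf plus per-client configuration files, and enables wg-quick@wg0.service. A wg-quick server configuration of the general shape being templated is sketched below; every address, port, and key here is a placeholder assumption, not a value taken from this job (the role's actual template is not visible in the log).

```ini
# /etc/wireguard/wg0.conf -- illustrative sketch only; all values are
# hypothetical placeholders, not taken from this deployment.
[Interface]
Address = 192.168.48.1/24          # hypothetical tunnel address
PrivateKey = <server-private-key>  # created by "Create public and private key - server"
ListenPort = 51820

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>     # created by "Create preshared key"
AllowedIPs = 192.168.48.2/32
```

After the config is written, the "Restart wg0 service" handler corresponds to restarting wg-quick@wg0, which brings the interface up with these settings.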
2025-04-09 09:07:55.957089 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 09:07:55.957501 | orchestrator | 2025-04-09 09:07:55.957894 | orchestrator | 2025-04-09 09:07:55.958880 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-09 09:07:55.959521 | orchestrator | Wednesday 09 April 2025 09:07:55 +0000 (0:00:00.948) 0:00:16.470 ******* 2025-04-09 09:07:55.960638 | orchestrator | =============================================================================== 2025-04-09 09:07:55.960994 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.07s 2025-04-09 09:07:55.961559 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.75s 2025-04-09 09:07:55.962065 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.66s 2025-04-09 09:07:55.962353 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.22s 2025-04-09 09:07:55.963217 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.04s 2025-04-09 09:07:55.963334 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.95s 2025-04-09 09:07:55.963740 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.69s 2025-04-09 09:07:55.964228 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.58s 2025-04-09 09:07:55.964594 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s 2025-04-09 09:07:55.964987 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s 2025-04-09 09:07:55.965400 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.42s 2025-04-09 09:07:56.594343 | orchestrator | + sh 
-c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-04-09 09:07:56.632254 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-04-09 09:07:56.715256 | orchestrator | Dload Upload Total Spent Left Speed 2025-04-09 09:07:56.715314 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 180 0 --:--:-- --:--:-- --:--:-- 182 2025-04-09 09:07:56.731821 | orchestrator | + osism apply --environment custom workarounds 2025-04-09 09:07:58.366688 | orchestrator | 2025-04-09 09:07:58 | INFO  | Trying to run play workarounds in environment custom 2025-04-09 09:07:58.430277 | orchestrator | 2025-04-09 09:07:58 | INFO  | Task 8ffd9eb1-59e4-414b-b8ff-8d47dd83bbbc (workarounds) was prepared for execution. 2025-04-09 09:08:02.514752 | orchestrator | 2025-04-09 09:07:58 | INFO  | It takes a moment until task 8ffd9eb1-59e4-414b-b8ff-8d47dd83bbbc (workarounds) has been started and output is visible here. 2025-04-09 09:08:02.514902 | orchestrator | 2025-04-09 09:08:02.516365 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-09 09:08:02.517926 | orchestrator | 2025-04-09 09:08:02.518688 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-04-09 09:08:02.520622 | orchestrator | Wednesday 09 April 2025 09:08:02 +0000 (0:00:00.155) 0:00:00.155 ******* 2025-04-09 09:08:02.682382 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-04-09 09:08:02.766352 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-04-09 09:08:02.853045 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-04-09 09:08:02.936287 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-04-09 09:08:03.150458 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-04-09 09:08:03.312706 | orchestrator 
| changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-04-09 09:08:03.313976 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-04-09 09:08:03.316869 | orchestrator | 2025-04-09 09:08:03.317981 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-04-09 09:08:03.318010 | orchestrator | 2025-04-09 09:08:03.319516 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-04-09 09:08:03.320361 | orchestrator | Wednesday 09 April 2025 09:08:03 +0000 (0:00:00.803) 0:00:00.959 ******* 2025-04-09 09:08:06.141239 | orchestrator | ok: [testbed-manager] 2025-04-09 09:08:06.141724 | orchestrator | 2025-04-09 09:08:06.142129 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-04-09 09:08:06.143103 | orchestrator | 2025-04-09 09:08:06.151318 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-04-09 09:08:08.048245 | orchestrator | Wednesday 09 April 2025 09:08:06 +0000 (0:00:02.821) 0:00:03.781 ******* 2025-04-09 09:08:08.048357 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:08:08.048976 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:08:08.052356 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:08:08.052678 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:08:08.052695 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:08:08.052707 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:08:08.052718 | orchestrator | 2025-04-09 09:08:08.052731 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-04-09 09:08:08.052746 | orchestrator | 2025-04-09 09:08:08.053418 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-04-09 09:08:08.053738 | orchestrator | Wednesday 09 April 2025 09:08:08 +0000 (0:00:01.908) 0:00:05.690 ******* 
2025-04-09 09:08:09.562918 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-09 09:08:09.564987 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-09 09:08:09.567503 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-09 09:08:09.569277 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-09 09:08:09.569583 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-09 09:08:09.572037 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-09 09:08:09.573018 | orchestrator | 2025-04-09 09:08:09.573847 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-04-09 09:08:09.574718 | orchestrator | Wednesday 09 April 2025 09:08:09 +0000 (0:00:01.512) 0:00:07.202 ******* 2025-04-09 09:08:13.341802 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:08:13.343105 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:08:13.345216 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:08:13.346154 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:08:13.347353 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:08:13.347685 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:08:13.349071 | orchestrator | 2025-04-09 09:08:13.349719 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-04-09 09:08:13.350725 | orchestrator | Wednesday 09 April 2025 09:08:13 +0000 (0:00:03.780) 0:00:10.983 ******* 2025-04-09 09:08:13.521943 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:08:13.604700 | orchestrator | skipping: 
[testbed-node-4]
2025-04-09 09:08:13.686956 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:08:13.760656 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:08:14.174258 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:08:14.174408 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:08:14.175168 | orchestrator |
2025-04-09 09:08:14.175747 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-04-09 09:08:14.176705 | orchestrator |
2025-04-09 09:08:14.177044 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-04-09 09:08:14.177725 | orchestrator | Wednesday 09 April 2025 09:08:14 +0000 (0:00:00.835) 0:00:11.818 *******
2025-04-09 09:08:15.867480 | orchestrator | changed: [testbed-manager]
2025-04-09 09:08:15.868510 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:08:15.869502 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:08:15.869533 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:08:15.869826 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:08:15.871538 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:08:15.871934 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:08:15.872966 | orchestrator |
2025-04-09 09:08:15.873311 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-04-09 09:08:15.873720 | orchestrator | Wednesday 09 April 2025 09:08:15 +0000 (0:00:01.692) 0:00:13.511 *******
2025-04-09 09:08:17.574537 | orchestrator | changed: [testbed-manager]
2025-04-09 09:08:17.574705 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:08:17.576518 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:08:17.576685 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:08:17.577726 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:08:17.578508 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:08:17.579377 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:08:17.579810 | orchestrator |
2025-04-09 09:08:17.581163 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-04-09 09:08:17.581801 | orchestrator | Wednesday 09 April 2025 09:08:17 +0000 (0:00:01.702) 0:00:15.214 *******
2025-04-09 09:08:19.111836 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:08:19.112440 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:08:19.112711 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:08:19.117667 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:08:19.118549 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:08:19.119575 | orchestrator | ok: [testbed-manager]
2025-04-09 09:08:19.120288 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:08:19.122353 | orchestrator |
2025-04-09 09:08:19.122809 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-04-09 09:08:19.123449 | orchestrator | Wednesday 09 April 2025 09:08:19 +0000 (0:00:01.541) 0:00:16.755 *******
2025-04-09 09:08:21.063606 | orchestrator | changed: [testbed-manager]
2025-04-09 09:08:21.064493 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:08:21.065318 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:08:21.066387 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:08:21.069493 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:08:21.069555 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:08:21.069572 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:08:21.069585 | orchestrator |
2025-04-09 09:08:21.069603 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-04-09 09:08:21.070312 | orchestrator | Wednesday 09 April 2025 09:08:21 +0000 (0:00:01.948) 0:00:18.704 *******
2025-04-09 09:08:21.239351 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:08:21.318109 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:08:21.400234 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:08:21.491656 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:08:21.574089 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:08:21.710709 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:08:21.713415 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:08:21.715475 | orchestrator |
2025-04-09 09:08:21.715580 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-04-09 09:08:21.715626 | orchestrator |
2025-04-09 09:08:21.715645 | orchestrator | TASK [Install python3-docker] **************************************************
2025-04-09 09:08:21.716964 | orchestrator | Wednesday 09 April 2025 09:08:21 +0000 (0:00:00.651) 0:00:19.355 *******
2025-04-09 09:08:24.338272 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:08:24.338498 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:08:24.339236 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:08:24.340283 | orchestrator | ok: [testbed-manager]
2025-04-09 09:08:24.342623 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:08:24.342700 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:08:24.344581 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:08:24.345308 | orchestrator |
2025-04-09 09:08:24.345709 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 09:08:24.346744 | orchestrator | 2025-04-09 09:08:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 09:08:24.346955 | orchestrator | 2025-04-09 09:08:24 | INFO  | Please wait and do not abort execution.
2025-04-09 09:08:24.346984 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-09 09:08:24.347635 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 09:08:24.347895 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 09:08:24.348377 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 09:08:24.348874 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 09:08:24.349396 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 09:08:24.350354 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 09:08:24.351347 | orchestrator |
2025-04-09 09:08:24.351997 | orchestrator |
2025-04-09 09:08:24.352378 | orchestrator | TASKS RECAP ********************************************************************
2025-04-09 09:08:24.352906 | orchestrator | Wednesday 09 April 2025 09:08:24 +0000 (0:00:02.624) 0:00:21.979 *******
2025-04-09 09:08:24.353583 | orchestrator | ===============================================================================
2025-04-09 09:08:24.353831 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.78s
2025-04-09 09:08:24.354296 | orchestrator | Apply netplan configuration --------------------------------------------- 2.82s
2025-04-09 09:08:24.355119 | orchestrator | Install python3-docker -------------------------------------------------- 2.62s
2025-04-09 09:08:24.355422 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.95s
2025-04-09 09:08:24.355962 | orchestrator | Apply netplan configuration --------------------------------------------- 1.91s
2025-04-09 09:08:24.356836 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.70s
2025-04-09 09:08:24.357518 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.69s
2025-04-09 09:08:24.357893 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.54s
2025-04-09 09:08:24.358448 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.51s
2025-04-09 09:08:24.358911 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.84s
2025-04-09 09:08:24.359280 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.80s
2025-04-09 09:08:24.359669 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.65s
2025-04-09 09:08:25.107461 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-04-09 09:08:26.800270 | orchestrator | 2025-04-09 09:08:26 | INFO  | Task 173d9526-bdb8-4503-89a1-cd03655f9dc6 (reboot) was prepared for execution.
2025-04-09 09:08:30.882181 | orchestrator | 2025-04-09 09:08:26 | INFO  | It takes a moment until task 173d9526-bdb8-4503-89a1-cd03655f9dc6 (reboot) has been started and output is visible here.
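The "Add a workaround service" play above follows the standard sequence for installing a boot-time oneshot service: copy the script, copy a unit file, daemon-reload, then enable. A minimal sketch of that sequence is below; the unit content is purely illustrative (the real `workarounds.service` shipped by the testbed is not shown in the log), and `DESTDIR` is a hypothetical indirection so the sketch can run without root instead of writing to `/etc` directly.

```shell
# Illustrative only: the real workarounds.sh and workarounds.service used by
# the testbed are not visible in this log.
DESTDIR="${DESTDIR:-/}"

install_workarounds_service() {
    # Copy the workarounds script into place, executable.
    install -D -m 0755 "$1" "$DESTDIR/usr/local/bin/workarounds.sh"
    # Write a hypothetical oneshot unit that runs the script once at boot.
    install -d "$DESTDIR/etc/systemd/system"
    cat > "$DESTDIR/etc/systemd/system/workarounds.service" <<'EOF'
[Unit]
Description=Apply testbed workarounds at boot (illustrative only)

[Service]
Type=oneshot
ExecStart=/usr/local/bin/workarounds.sh

[Install]
WantedBy=multi-user.target
EOF
    # On a real host the play then runs the equivalent of:
    #   systemctl daemon-reload && systemctl enable workarounds.service
}
```

The "Enable ... (RedHat)" task is skipped on every host because all nodes in this run are Ubuntu 24.04, which takes the Debian branch.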
2025-04-09 09:08:30.882350 | orchestrator |
2025-04-09 09:08:30.882933 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-04-09 09:08:30.885880 | orchestrator |
2025-04-09 09:08:30.886454 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-04-09 09:08:30.887078 | orchestrator | Wednesday 09 April 2025 09:08:30 +0000 (0:00:00.215) 0:00:00.215 *******
2025-04-09 09:08:30.996313 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:08:30.996886 | orchestrator |
2025-04-09 09:08:30.997163 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-04-09 09:08:30.997881 | orchestrator | Wednesday 09 April 2025 09:08:30 +0000 (0:00:00.116) 0:00:00.332 *******
2025-04-09 09:08:32.070669 | orchestrator | changed: [testbed-node-0]
2025-04-09 09:08:32.071828 | orchestrator |
2025-04-09 09:08:32.073349 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-04-09 09:08:32.073380 | orchestrator | Wednesday 09 April 2025 09:08:32 +0000 (0:00:01.063) 0:00:01.395 *******
2025-04-09 09:08:32.179975 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:08:32.180872 | orchestrator |
2025-04-09 09:08:32.182124 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-04-09 09:08:32.183882 | orchestrator |
2025-04-09 09:08:32.184337 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-04-09 09:08:32.185636 | orchestrator | Wednesday 09 April 2025 09:08:32 +0000 (0:00:00.118) 0:00:01.514 *******
2025-04-09 09:08:32.295607 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:08:32.296121 | orchestrator |
2025-04-09 09:08:32.297389 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-04-09 09:08:32.298352 | orchestrator | Wednesday 09 April 2025 09:08:32 +0000 (0:00:00.116) 0:00:01.631 *******
2025-04-09 09:08:33.007162 | orchestrator | changed: [testbed-node-1]
2025-04-09 09:08:33.008722 | orchestrator |
2025-04-09 09:08:33.008759 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-04-09 09:08:33.008782 | orchestrator | Wednesday 09 April 2025 09:08:32 +0000 (0:00:00.707) 0:00:02.338 *******
2025-04-09 09:08:33.126012 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:08:33.127683 | orchestrator |
2025-04-09 09:08:33.127717 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-04-09 09:08:33.128127 | orchestrator |
2025-04-09 09:08:33.129313 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-04-09 09:08:33.129816 | orchestrator | Wednesday 09 April 2025 09:08:33 +0000 (0:00:00.119) 0:00:02.458 *******
2025-04-09 09:08:33.344890 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:08:33.345655 | orchestrator |
2025-04-09 09:08:33.347391 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-04-09 09:08:33.348356 | orchestrator | Wednesday 09 April 2025 09:08:33 +0000 (0:00:00.221) 0:00:02.680 *******
2025-04-09 09:08:34.053183 | orchestrator | changed: [testbed-node-2]
2025-04-09 09:08:34.054132 | orchestrator |
2025-04-09 09:08:34.054185 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-04-09 09:08:34.054934 | orchestrator | Wednesday 09 April 2025 09:08:34 +0000 (0:00:00.705) 0:00:03.385 *******
2025-04-09 09:08:34.167649 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:08:34.167951 | orchestrator |
2025-04-09 09:08:34.169130 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-04-09 09:08:34.169947 | orchestrator |
2025-04-09 09:08:34.170332 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-04-09 09:08:34.170939 | orchestrator | Wednesday 09 April 2025 09:08:34 +0000 (0:00:00.115) 0:00:03.501 *******
2025-04-09 09:08:34.274543 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:08:34.275306 | orchestrator |
2025-04-09 09:08:34.276576 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-04-09 09:08:34.277048 | orchestrator | Wednesday 09 April 2025 09:08:34 +0000 (0:00:00.108) 0:00:03.610 *******
2025-04-09 09:08:34.992181 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:08:34.994601 | orchestrator |
2025-04-09 09:08:34.994647 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-04-09 09:08:34.995346 | orchestrator | Wednesday 09 April 2025 09:08:34 +0000 (0:00:00.708) 0:00:04.319 *******
2025-04-09 09:08:35.110552 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:08:35.111423 | orchestrator |
2025-04-09 09:08:35.112374 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-04-09 09:08:35.112832 | orchestrator |
2025-04-09 09:08:35.113539 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-04-09 09:08:35.114122 | orchestrator | Wednesday 09 April 2025 09:08:35 +0000 (0:00:00.125) 0:00:04.444 *******
2025-04-09 09:08:35.212454 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:08:35.213403 | orchestrator |
2025-04-09 09:08:35.214579 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-04-09 09:08:35.215854 | orchestrator | Wednesday 09 April 2025 09:08:35 +0000 (0:00:00.103) 0:00:04.548 *******
2025-04-09 09:08:35.868737 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:08:35.868923 | orchestrator |
2025-04-09 09:08:35.869601 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-04-09 09:08:35.870485 | orchestrator | Wednesday 09 April 2025 09:08:35 +0000 (0:00:00.655) 0:00:05.204 *******
2025-04-09 09:08:35.980284 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:08:35.980416 | orchestrator |
2025-04-09 09:08:35.982288 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-04-09 09:08:35.982543 | orchestrator |
2025-04-09 09:08:35.982571 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-04-09 09:08:35.983527 | orchestrator | Wednesday 09 April 2025 09:08:35 +0000 (0:00:00.108) 0:00:05.312 *******
2025-04-09 09:08:36.080679 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:08:36.080810 | orchestrator |
2025-04-09 09:08:36.085140 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-04-09 09:08:36.085203 | orchestrator | Wednesday 09 April 2025 09:08:36 +0000 (0:00:00.100) 0:00:05.412 *******
2025-04-09 09:08:36.742346 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:08:36.743866 | orchestrator |
2025-04-09 09:08:36.744532 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-04-09 09:08:36.744564 | orchestrator | Wednesday 09 April 2025 09:08:36 +0000 (0:00:00.665) 0:00:06.078 *******
2025-04-09 09:08:36.776066 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:08:36.776757 | orchestrator |
2025-04-09 09:08:36.777311 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 09:08:36.778061 | orchestrator | 2025-04-09 09:08:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 09:08:36.778379 | orchestrator | 2025-04-09 09:08:36 | INFO  | Please wait and do not abort execution.
2025-04-09 09:08:36.778407 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 09:08:36.781315 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 09:08:36.782300 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 09:08:36.783396 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 09:08:36.783735 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 09:08:36.784513 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 09:08:36.784764 | orchestrator |
2025-04-09 09:08:36.785254 | orchestrator |
2025-04-09 09:08:36.785708 | orchestrator | TASKS RECAP ********************************************************************
2025-04-09 09:08:36.786439 | orchestrator | Wednesday 09 April 2025 09:08:36 +0000 (0:00:00.035) 0:00:06.113 *******
2025-04-09 09:08:36.786793 | orchestrator | ===============================================================================
2025-04-09 09:08:36.787286 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.51s
2025-04-09 09:08:36.787853 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.77s
2025-04-09 09:08:36.788042 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.62s
2025-04-09 09:08:37.401850 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-04-09 09:08:39.079348 | orchestrator | 2025-04-09 09:08:39 | INFO  | Task f2543333-335d-4457-ad88-d95f894e9cfd (wait-for-connection) was prepared for execution.
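The reboot play above is guarded: the "Exit playbook, if user did not mean to reboot systems" task aborts unless the caller passes `-e ireallymeanit=yes`, which the job does, and the reboot itself is fired without waiting so the next step can poll for reachability separately. A hedged sketch of that guard (hypothetical helper names; the play runs this logic inside Ansible, not as a shell function):

```shell
# Hypothetical sketch of the confirmation guard seen in the reboot play:
# without explicit confirmation the reboot is refused.
maybe_reboot() {
    local confirm="$1" host="$2"
    if [ "$confirm" != "yes" ]; then
        echo "skipping reboot of $host (pass ireallymeanit=yes to confirm)"
        return 1
    fi
    echo "rebooting $host (not waiting for it to come back)"
    # On a real node this would be something like:
    #   ssh "$host" sudo systemctl reboot
}
```

Firing the reboot asynchronously and verifying reachability in a separate `wait-for-connection` run keeps the reboot play fast even when nodes take a while to come back.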
2025-04-09 09:08:43.125617 | orchestrator | 2025-04-09 09:08:39 | INFO  | It takes a moment until task f2543333-335d-4457-ad88-d95f894e9cfd (wait-for-connection) has been started and output is visible here.
2025-04-09 09:08:43.125764 | orchestrator |
2025-04-09 09:08:43.126463 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-04-09 09:08:43.132314 | orchestrator |
2025-04-09 09:08:54.767010 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-04-09 09:08:54.767158 | orchestrator | Wednesday 09 April 2025 09:08:43 +0000 (0:00:00.246) 0:00:00.246 *******
2025-04-09 09:08:54.767197 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:08:54.767291 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:08:54.767310 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:08:54.767325 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:08:54.767338 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:08:54.767353 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:08:54.767367 | orchestrator |
2025-04-09 09:08:54.767382 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 09:08:54.767397 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 09:08:54.767412 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 09:08:54.767431 | orchestrator | 2025-04-09 09:08:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 09:08:54.767740 | orchestrator | 2025-04-09 09:08:54 | INFO  | Please wait and do not abort execution.
2025-04-09 09:08:54.767777 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 09:08:54.768379 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 09:08:54.770771 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 09:08:54.771172 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 09:08:54.771574 | orchestrator |
2025-04-09 09:08:54.771604 | orchestrator |
2025-04-09 09:08:54.772044 | orchestrator | TASKS RECAP ********************************************************************
2025-04-09 09:08:54.772401 | orchestrator | Wednesday 09 April 2025 09:08:54 +0000 (0:00:11.639) 0:00:11.885 *******
2025-04-09 09:08:54.772439 | orchestrator | ===============================================================================
2025-04-09 09:08:54.772840 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.64s
2025-04-09 09:08:55.436992 | orchestrator | + osism apply hddtemp
2025-04-09 09:08:57.077373 | orchestrator | 2025-04-09 09:08:57 | INFO  | Task d61fad46-e371-4609-aba7-11b7323ce003 (hddtemp) was prepared for execution.
2025-04-09 09:09:01.086692 | orchestrator | 2025-04-09 09:08:57 | INFO  | It takes a moment until task d61fad46-e371-4609-aba7-11b7323ce003 (hddtemp) has been started and output is visible here.
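The wait-for-connection play above boils down to a per-host retry loop: probe each rebooted node until it answers again or a time budget runs out. A minimal sketch under stated assumptions: `wait_for_host` and the injectable `PROBE` command are hypothetical (the play itself uses Ansible's `wait_for_connection` module over SSH, not a shell loop), and `PROBE` exists only so the sketch can be exercised without real nodes.

```shell
# Hypothetical helper mirroring what wait-for-connection does per host:
# retry a cheap reachability probe until success or timeout.
wait_for_host() {
    local host="$1" timeout="${2:-600}" waited=0
    # Default probe: a no-op command over SSH; overridable for testing.
    PROBE="${PROBE:-ssh -o BatchMode=yes -o ConnectTimeout=5}"
    until $PROBE "$host" true 2>/dev/null; do
        (( waited += 5 ))
        if (( waited >= timeout )); then
            echo "$host still unreachable after ${timeout}s" >&2
            return 1
        fi
        sleep 5
    done
    echo "ok: [$host]"
}
```

In this run all six nodes came back within the 11.64s the task took.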
2025-04-09 09:09:01.086810 | orchestrator | 2025-04-09 09:09:01.087462 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-04-09 09:09:01.087487 | orchestrator | 2025-04-09 09:09:01.087511 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-04-09 09:09:01.087560 | orchestrator | Wednesday 09 April 2025 09:09:01 +0000 (0:00:00.262) 0:00:00.262 ******* 2025-04-09 09:09:01.238460 | orchestrator | ok: [testbed-manager] 2025-04-09 09:09:01.316362 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:09:01.402679 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:09:01.477447 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:09:01.683179 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:09:01.812182 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:09:01.812905 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:09:01.813687 | orchestrator | 2025-04-09 09:09:01.815234 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-04-09 09:09:01.815974 | orchestrator | Wednesday 09 April 2025 09:09:01 +0000 (0:00:00.726) 0:00:00.988 ******* 2025-04-09 09:09:03.066713 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 09:09:03.068694 | orchestrator | 2025-04-09 09:09:03.069335 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-04-09 09:09:03.070158 | orchestrator | Wednesday 09 April 2025 09:09:03 +0000 (0:00:01.245) 0:00:02.233 ******* 2025-04-09 09:09:04.997663 | orchestrator | ok: [testbed-manager] 2025-04-09 09:09:05.000699 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:09:05.000750 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:09:05.001972 | 
orchestrator | ok: [testbed-node-2] 2025-04-09 09:09:05.003355 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:09:05.004206 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:09:05.006576 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:09:05.008093 | orchestrator | 2025-04-09 09:09:05.009211 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-04-09 09:09:05.010312 | orchestrator | Wednesday 09 April 2025 09:09:04 +0000 (0:00:01.937) 0:00:04.170 ******* 2025-04-09 09:09:05.558878 | orchestrator | changed: [testbed-manager] 2025-04-09 09:09:05.643147 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:09:05.733094 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:09:06.188409 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:09:06.189422 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:09:06.190751 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:09:06.192533 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:09:06.192611 | orchestrator | 2025-04-09 09:09:06.193744 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-04-09 09:09:06.194214 | orchestrator | Wednesday 09 April 2025 09:09:06 +0000 (0:00:01.191) 0:00:05.362 ******* 2025-04-09 09:09:07.325936 | orchestrator | ok: [testbed-node-0] 2025-04-09 09:09:07.326230 | orchestrator | ok: [testbed-node-1] 2025-04-09 09:09:07.330566 | orchestrator | ok: [testbed-node-2] 2025-04-09 09:09:07.331612 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:09:07.331643 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:09:07.331663 | orchestrator | ok: [testbed-node-5] 2025-04-09 09:09:07.332412 | orchestrator | ok: [testbed-manager] 2025-04-09 09:09:07.333169 | orchestrator | 2025-04-09 09:09:07.334321 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-04-09 09:09:07.334648 | orchestrator | Wednesday 09 April 2025 09:09:07 
+0000 (0:00:01.140) 0:00:06.502 ******* 2025-04-09 09:09:07.778249 | orchestrator | skipping: [testbed-node-0] 2025-04-09 09:09:07.863646 | orchestrator | skipping: [testbed-node-1] 2025-04-09 09:09:07.948586 | orchestrator | skipping: [testbed-node-2] 2025-04-09 09:09:08.026175 | orchestrator | changed: [testbed-manager] 2025-04-09 09:09:08.151413 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:09:08.151510 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:09:08.151528 | orchestrator | skipping: [testbed-node-5] 2025-04-09 09:09:08.151546 | orchestrator | 2025-04-09 09:09:08.151701 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-04-09 09:09:08.151905 | orchestrator | Wednesday 09 April 2025 09:09:08 +0000 (0:00:00.827) 0:00:07.330 ******* 2025-04-09 09:09:21.069863 | orchestrator | changed: [testbed-manager] 2025-04-09 09:09:21.070144 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:09:21.070175 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:09:21.070190 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:09:21.070204 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:09:21.070253 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:09:21.070355 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:09:21.070990 | orchestrator | 2025-04-09 09:09:21.071478 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-04-09 09:09:21.072170 | orchestrator | Wednesday 09 April 2025 09:09:21 +0000 (0:00:12.910) 0:00:20.240 ******* 2025-04-09 09:09:22.441819 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 09:09:22.443196 | orchestrator | 2025-04-09 09:09:22.444463 | orchestrator | TASK [osism.services.hddtemp 
: Manage lm-sensors service] ********************** 2025-04-09 09:09:22.445416 | orchestrator | Wednesday 09 April 2025 09:09:22 +0000 (0:00:01.375) 0:00:21.616 ******* 2025-04-09 09:09:24.413537 | orchestrator | changed: [testbed-node-1] 2025-04-09 09:09:24.414241 | orchestrator | changed: [testbed-manager] 2025-04-09 09:09:24.416577 | orchestrator | changed: [testbed-node-0] 2025-04-09 09:09:24.418251 | orchestrator | changed: [testbed-node-3] 2025-04-09 09:09:24.418994 | orchestrator | changed: [testbed-node-2] 2025-04-09 09:09:24.420311 | orchestrator | changed: [testbed-node-4] 2025-04-09 09:09:24.421080 | orchestrator | changed: [testbed-node-5] 2025-04-09 09:09:24.422175 | orchestrator | 2025-04-09 09:09:24.422916 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 09:09:24.423386 | orchestrator | 2025-04-09 09:09:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-09 09:09:24.424209 | orchestrator | 2025-04-09 09:09:24 | INFO  | Please wait and do not abort execution. 
2025-04-09 09:09:24.424963 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 09:09:24.425967 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-09 09:09:24.426742 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-09 09:09:24.427479 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-09 09:09:24.428033 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-09 09:09:24.428644 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-09 09:09:24.429398 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-09 09:09:24.429600 | orchestrator | 2025-04-09 09:09:24.430059 | orchestrator | 2025-04-09 09:09:24.430471 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-09 09:09:24.430879 | orchestrator | Wednesday 09 April 2025 09:09:24 +0000 (0:00:01.973) 0:00:23.589 ******* 2025-04-09 09:09:24.431599 | orchestrator | =============================================================================== 2025-04-09 09:09:24.432113 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.91s 2025-04-09 09:09:24.432384 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.97s 2025-04-09 09:09:24.432857 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.94s 2025-04-09 09:09:24.433102 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.38s 2025-04-09 09:09:24.433892 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.25s 
2025-04-09 09:09:24.434977 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.19s 2025-04-09 09:09:24.435908 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.14s 2025-04-09 09:09:24.436920 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.83s 2025-04-09 09:09:24.437931 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.73s 2025-04-09 09:09:25.200466 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-04-09 09:09:26.682757 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-04-09 09:09:26.683257 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-04-09 09:09:26.683320 | orchestrator | + local max_attempts=60 2025-04-09 09:09:26.683335 | orchestrator | + local name=ceph-ansible 2025-04-09 09:09:26.683346 | orchestrator | + local attempt_num=1 2025-04-09 09:09:26.683362 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-04-09 09:09:26.720960 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-09 09:09:26.721682 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-04-09 09:09:26.721710 | orchestrator | + local max_attempts=60 2025-04-09 09:09:26.721726 | orchestrator | + local name=kolla-ansible 2025-04-09 09:09:26.721741 | orchestrator | + local attempt_num=1 2025-04-09 09:09:26.721762 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-04-09 09:09:26.758236 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-09 09:09:26.758977 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-04-09 09:09:26.759004 | orchestrator | + local max_attempts=60 2025-04-09 09:09:26.759022 | orchestrator | + local name=osism-ansible 2025-04-09 09:09:26.759040 | orchestrator | + local attempt_num=1 2025-04-09 09:09:26.759063 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-04-09 09:09:26.791583 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-09 09:09:26.791744 | orchestrator | + [[ true == \t\r\u\e ]] 2025-04-09 09:09:26.791779 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-04-09 09:09:26.984200 | orchestrator | ARA in ceph-ansible already disabled. 2025-04-09 09:09:27.152388 | orchestrator | ARA in kolla-ansible already disabled. 2025-04-09 09:09:27.301155 | orchestrator | ARA in osism-ansible already disabled. 2025-04-09 09:09:27.490673 | orchestrator | ARA in osism-kubernetes already disabled. 2025-04-09 09:09:27.491790 | orchestrator | + osism apply gather-facts 2025-04-09 09:09:29.159074 | orchestrator | 2025-04-09 09:09:29 | INFO  | Task 1df4642c-bf6c-475d-9524-7873dcd33cc5 (gather-facts) was prepared for execution. 2025-04-09 09:09:33.222930 | orchestrator | 2025-04-09 09:09:29 | INFO  | It takes a moment until task 1df4642c-bf6c-475d-9524-7873dcd33cc5 (gather-facts) has been started and output is visible here. 
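The `wait_for_container_healthy` calls traced above (with `set -x` showing the function's locals) poll each manager container's Docker health status before continuing. The approximate shape of that helper, reconstructed from the trace rather than copied from the testbed's scripts, looks like this; the `DOCKER` indirection is an addition of this sketch so it can be exercised without a Docker daemon, while the job calls `/usr/bin/docker` directly:

```shell
# Reconstruction (an approximation, not the verbatim testbed helper) of the
# wait_for_container_healthy function traced in the log: poll `docker inspect`
# until the container reports "healthy" or the attempt budget is exhausted.
DOCKER="${DOCKER:-/usr/bin/docker}"

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$("$DOCKER" inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
}
```

In this run all three containers (ceph-ansible, kolla-ansible, osism-ansible) were already healthy on the first probe, so each call returned immediately.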
2025-04-09 09:09:33.223071 | orchestrator |
2025-04-09 09:09:33.223165 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-04-09 09:09:33.224397 | orchestrator |
2025-04-09 09:09:33.226502 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-04-09 09:09:33.228141 | orchestrator | Wednesday 09 April 2025 09:09:33 +0000 (0:00:00.253) 0:00:00.253 *******
2025-04-09 09:09:38.324770 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:09:38.325548 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:09:38.328451 | orchestrator | ok: [testbed-manager]
2025-04-09 09:09:38.329405 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:09:38.329438 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:09:38.329453 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:09:38.329467 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:09:38.329487 | orchestrator |
2025-04-09 09:09:38.329853 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-04-09 09:09:38.330794 | orchestrator |
2025-04-09 09:09:38.331237 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-04-09 09:09:38.331625 | orchestrator | Wednesday 09 April 2025 09:09:38 +0000 (0:00:05.104) 0:00:05.357 *******
2025-04-09 09:09:38.475651 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:09:38.561106 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:09:38.641888 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:09:38.721979 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:09:38.799810 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:09:38.840595 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:09:38.841389 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:09:38.843388 | orchestrator |
2025-04-09 09:09:38.844511 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 09:09:38.844561 | orchestrator | 2025-04-09 09:09:38 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 09:09:38.845571 | orchestrator | 2025-04-09 09:09:38 | INFO  | Please wait and do not abort execution.
2025-04-09 09:09:38.845613 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-09 09:09:38.846484 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-09 09:09:38.847172 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-09 09:09:38.847496 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-09 09:09:38.847854 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-09 09:09:38.848582 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-09 09:09:38.848843 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-09 09:09:38.849208 | orchestrator |
2025-04-09 09:09:38.849595 | orchestrator |
2025-04-09 09:09:38.850143 | orchestrator | TASKS RECAP ********************************************************************
2025-04-09 09:09:38.850383 | orchestrator | Wednesday 09 April 2025 09:09:38 +0000 (0:00:00.518) 0:00:05.875 *******
2025-04-09 09:09:38.850773 | orchestrator | ===============================================================================
2025-04-09 09:09:38.851721 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.10s
2025-04-09 09:09:38.852762 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2025-04-09 09:09:39.499358 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-04-09 09:09:39.511271 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-04-09 09:09:39.523357 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-04-09 09:09:39.542007 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-04-09 09:09:39.551951 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-04-09 09:09:39.562175 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-04-09 09:09:39.571517 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-04-09 09:09:39.590185 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-04-09 09:09:39.609159 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-04-09 09:09:39.622129 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-04-09 09:09:39.634738 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-04-09 09:09:39.650479 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-04-09 09:09:39.669420 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-04-09 09:09:39.690802 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-04-09 09:09:39.711898 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-04-09 09:09:39.731119 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-04-09 09:09:39.750204 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-04-09 09:09:39.763905 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-04-09 09:09:39.782633 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-04-09 09:09:39.802579 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-04-09 09:09:39.822686 | orchestrator | + [[ false == \t\r\u\e ]]
2025-04-09 09:09:40.223915 | orchestrator | changed
2025-04-09 09:09:40.311907 |
2025-04-09 09:09:40.312003 | TASK [Deploy services]
2025-04-09 09:09:40.407412 | orchestrator | skipping: Conditional result was False
2025-04-09 09:09:40.421683 |
2025-04-09 09:09:40.421781 | TASK [Deploy in a nutshell]
2025-04-09 09:09:41.081530 | orchestrator | + set -e
2025-04-09 09:09:41.081721 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-04-09 09:09:41.081755 | orchestrator | ++ export INTERACTIVE=false
2025-04-09 09:09:41.081773 | orchestrator | ++ INTERACTIVE=false
2025-04-09 09:09:41.081816 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-04-09 09:09:41.081835 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-04-09 09:09:41.081851 | orchestrator | + source /opt/manager-vars.sh
2025-04-09 09:09:41.081876 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-04-09 09:09:41.081900 | orchestrator | ++ NUMBER_OF_NODES=6
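The bootstrap step above publishes each deploy/upgrade script under a short command name with `sudo ln -sf`. A minimal sketch of that pattern, using hypothetical paths under `/tmp` instead of `/opt/configuration` and `/usr/local/bin` so it is safe to run anywhere:

```shell
#!/usr/bin/env bash
# Sketch of the symlink pattern above; SCRIPTS and BIN are hypothetical
# stand-ins for /opt/configuration/scripts and /usr/local/bin.
set -euo pipefail
SCRIPTS=/tmp/demo-configuration/scripts
BIN=/tmp/demo-bin
mkdir -p "$SCRIPTS/deploy" "$BIN"

# A fake deploy script standing in for e.g. deploy/300-openstack.sh.
printf '#!/bin/sh\necho deploying openstack\n' > "$SCRIPTS/deploy/300-openstack.sh"
chmod +x "$SCRIPTS/deploy/300-openstack.sh"

# -s creates a symlink, -f replaces an existing one, so the bootstrap can be
# re-run idempotently (which is why the trace above uses `ln -sf` throughout).
ln -sf "$SCRIPTS/deploy/300-openstack.sh" "$BIN/deploy-openstack"

"$BIN/deploy-openstack"   # prints "deploying openstack"
```

With `$BIN` on `PATH`, the operator gets stable verb-component commands (`deploy-openstack`, `upgrade-infrastructure`, ...) regardless of the numbered script layout underneath.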
2025-04-09 09:09:41.081928 | orchestrator | ++ export CEPH_VERSION=quincy
2025-04-09 09:09:41.083038 | orchestrator | ++ CEPH_VERSION=quincy
2025-04-09 09:09:41.083064 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-04-09 09:09:41.083079 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-04-09 09:09:41.083093 | orchestrator | ++ export MANAGER_VERSION=latest
2025-04-09 09:09:41.083108 | orchestrator | ++ MANAGER_VERSION=latest
2025-04-09 09:09:41.083123 | orchestrator | ++ export OPENSTACK_VERSION=2024.1
2025-04-09 09:09:41.083138 | orchestrator | ++ OPENSTACK_VERSION=2024.1
2025-04-09 09:09:41.083152 | orchestrator | ++ export ARA=false
2025-04-09 09:09:41.083166 | orchestrator | ++ ARA=false
2025-04-09 09:09:41.083180 | orchestrator | ++ export TEMPEST=false
2025-04-09 09:09:41.083194 | orchestrator | ++ TEMPEST=false
2025-04-09 09:09:41.083208 | orchestrator | ++ export IS_ZUUL=true
2025-04-09 09:09:41.083222 | orchestrator | ++ IS_ZUUL=true
2025-04-09 09:09:41.083236 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169
2025-04-09 09:09:41.083251 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169
2025-04-09 09:09:41.083265 | orchestrator | ++ export EXTERNAL_API=false
2025-04-09 09:09:41.083279 | orchestrator | ++ EXTERNAL_API=false
2025-04-09 09:09:41.083315 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-04-09 09:09:41.083331 | orchestrator | ++ IMAGE_USER=ubuntu
2025-04-09 09:09:41.083345 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-04-09 09:09:41.083359 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-04-09 09:09:41.083373 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-04-09 09:09:41.083394 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-04-09 09:09:41.083409 | orchestrator | + echo
2025-04-09 09:09:41.083423 | orchestrator |
2025-04-09 09:09:41.083437 | orchestrator | # PULL IMAGES
2025-04-09 09:09:41.083451 | orchestrator |
2025-04-09 09:09:41.083465 | orchestrator | + echo '# PULL IMAGES'
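The trace continues with a `semver latest 7.0.0` call from include.sh, whose `-1` result fails the `-ge 0` test and routes execution into the `latest` branch. The helper's implementation is not shown in the log; a rough stand-in for purely numeric versions, built on GNU `sort -V`, could look like this (assumption: the real helper additionally special-cases non-numeric tags such as `latest`, which plain version sort does not handle):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the `semver` helper traced below (not shown in the
# log). Prints -1, 0 or 1. Numeric versions only; the real helper must also
# special-case tags like "latest".
ver_cmp() {
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
    echo -1   # $1 sorts before $2, i.e. $1 is the older version
  else
    echo 1
  fi
}

ver_cmp 6.0.0 7.0.0   # -1
ver_cmp 7.0.0 7.0.0   # 0
ver_cmp 7.1.0 7.0.0   # 1
```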
2025-04-09 09:09:41.083479 | orchestrator | + echo
2025-04-09 09:09:41.083498 | orchestrator | ++ semver latest 7.0.0
2025-04-09 09:09:41.144350 | orchestrator | + [[ -1 -ge 0 ]]
2025-04-09 09:09:42.739495 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-04-09 09:09:42.739600 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-04-09 09:09:42.739650 | orchestrator | 2025-04-09 09:09:42 | INFO  | Trying to run play pull-images in environment custom
2025-04-09 09:09:42.803895 | orchestrator | 2025-04-09 09:09:42 | INFO  | Task 60ba8515-a498-49e5-a237-f9358843c97d (pull-images) was prepared for execution.
2025-04-09 09:09:46.867167 | orchestrator | 2025-04-09 09:09:42 | INFO  | It takes a moment until task 60ba8515-a498-49e5-a237-f9358843c97d (pull-images) has been started and output is visible here.
2025-04-09 09:09:46.867287 | orchestrator |
2025-04-09 09:09:46.867428 | orchestrator | PLAY [Pull images] *************************************************************
2025-04-09 09:09:46.870882 | orchestrator |
2025-04-09 09:09:46.871189 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-04-09 09:09:46.872217 | orchestrator | Wednesday 09 April 2025 09:09:46 +0000 (0:00:00.172) 0:00:00.172 *******
2025-04-09 09:10:44.624531 | orchestrator | changed: [testbed-manager]
2025-04-09 09:11:40.298499 | orchestrator |
2025-04-09 09:11:40.298638 | orchestrator | TASK [Pull other images] *******************************************************
2025-04-09 09:11:40.298660 | orchestrator | Wednesday 09 April 2025 09:10:44 +0000 (0:00:57.755) 0:00:57.928 *******
2025-04-09 09:11:40.298693 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-04-09 09:11:40.301577 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-04-09 09:11:40.301608 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-04-09 09:11:40.301629 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-04-09 09:11:40.301655 | orchestrator | changed: [testbed-manager] => (item=common)
2025-04-09 09:11:40.301671 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-04-09 09:11:40.301729 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-04-09 09:11:40.301858 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-04-09 09:11:40.301923 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-04-09 09:11:40.302471 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-04-09 09:11:40.302900 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-04-09 09:11:40.303322 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-04-09 09:11:40.303600 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-04-09 09:11:40.304364 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-04-09 09:11:40.304572 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-04-09 09:11:40.305041 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-04-09 09:11:40.305280 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-04-09 09:11:40.305939 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-04-09 09:11:40.308440 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-04-09 09:11:40.308683 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-04-09 09:11:40.308969 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-04-09 09:11:40.309354 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-04-09 09:11:40.312484 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-04-09 09:11:40.312590 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-04-09 09:11:40.315495 | orchestrator |
2025-04-09 09:11:40.316124 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 09:11:40.316293 | orchestrator | 2025-04-09 09:11:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 09:11:40.316506 | orchestrator | 2025-04-09 09:11:40 | INFO  | Please wait and do not abort execution.
2025-04-09 09:11:40.316538 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 09:11:40.316774 | orchestrator |
2025-04-09 09:11:40.317254 | orchestrator |
2025-04-09 09:11:40.317622 | orchestrator | TASKS RECAP ********************************************************************
2025-04-09 09:11:40.318102 | orchestrator | Wednesday 09 April 2025 09:11:40 +0000 (0:00:55.675) 0:01:53.603 *******
2025-04-09 09:11:40.318656 | orchestrator | ===============================================================================
2025-04-09 09:11:40.318880 | orchestrator | Pull keystone image ---------------------------------------------------- 57.76s
2025-04-09 09:11:40.321967 | orchestrator | Pull other images ------------------------------------------------------ 55.68s
2025-04-09 09:11:42.765324 | orchestrator | 2025-04-09 09:11:42 | INFO  | Trying to run play wipe-partitions in environment custom
2025-04-09 09:11:42.836817 | orchestrator | 2025-04-09 09:11:42 | INFO  | Task 7be5e0fe-4f87-470e-a7df-9590f5189fcf (wipe-partitions) was prepared for execution.
2025-04-09 09:11:46.972360 | orchestrator | 2025-04-09 09:11:42 | INFO  | It takes a moment until task 7be5e0fe-4f87-470e-a7df-9590f5189fcf (wipe-partitions) has been started and output is visible here.
2025-04-09 09:11:46.972538 | orchestrator |
2025-04-09 09:11:46.972946 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-04-09 09:11:46.973562 | orchestrator |
2025-04-09 09:11:46.975192 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-04-09 09:11:46.978100 | orchestrator | Wednesday 09 April 2025 09:11:46 +0000 (0:00:00.132) 0:00:00.132 *******
2025-04-09 09:11:47.595673 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:11:47.595807 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:11:47.596459 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:11:47.597158 | orchestrator |
2025-04-09 09:11:47.597825 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-04-09 09:11:47.599678 | orchestrator | Wednesday 09 April 2025 09:11:47 +0000 (0:00:00.624) 0:00:00.757 *******
2025-04-09 09:11:47.746803 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:11:47.855318 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:11:47.856869 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:11:47.856998 | orchestrator |
2025-04-09 09:11:47.857359 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-04-09 09:11:47.857562 | orchestrator | Wednesday 09 April 2025 09:11:47 +0000 (0:00:00.256) 0:00:01.013 *******
2025-04-09 09:11:48.581320 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:11:48.582469 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:11:48.583592 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:11:48.584533 | orchestrator |
2025-04-09 09:11:48.585640 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-04-09 09:11:48.586566 | orchestrator | Wednesday 09 April 2025 09:11:48 +0000 (0:00:00.731) 0:00:01.745 *******
2025-04-09 09:11:48.723979 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:11:48.842842 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:11:48.843362 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:11:48.843397 | orchestrator |
2025-04-09 09:11:48.844708 | orchestrator | TASK [Check device availability] ***********************************************
2025-04-09 09:11:48.846008 | orchestrator | Wednesday 09 April 2025 09:11:48 +0000 (0:00:00.256) 0:00:02.002 *******
2025-04-09 09:11:49.978897 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-04-09 09:11:49.980762 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-04-09 09:11:49.982978 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-04-09 09:11:49.983007 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-04-09 09:11:49.983023 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-04-09 09:11:49.983058 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-04-09 09:11:49.983073 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-04-09 09:11:49.983127 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-04-09 09:11:49.983188 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-04-09 09:11:49.985343 | orchestrator |
2025-04-09 09:11:49.987363 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-04-09 09:11:49.987517 | orchestrator | Wednesday 09 April 2025 09:11:49 +0000 (0:00:01.140) 0:00:03.142 *******
2025-04-09 09:11:51.288261 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-04-09 09:11:51.290378 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-04-09 09:11:51.293070 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-04-09 09:11:51.294111 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-04-09 09:11:51.295726 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-04-09 09:11:51.296886 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-04-09 09:11:51.298220 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-04-09 09:11:51.300583 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-04-09 09:11:51.303261 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-04-09 09:11:51.306107 | orchestrator |
2025-04-09 09:11:51.307140 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-04-09 09:11:51.307170 | orchestrator | Wednesday 09 April 2025 09:11:51 +0000 (0:00:01.306) 0:00:04.450 *******
2025-04-09 09:11:53.579619 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-04-09 09:11:53.581526 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-04-09 09:11:53.583607 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-04-09 09:11:53.586612 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-04-09 09:11:53.588363 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-04-09 09:11:53.589111 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-04-09 09:11:53.590355 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-04-09 09:11:53.590883 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-04-09 09:11:53.591933 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-04-09 09:11:53.593094 | orchestrator |
2025-04-09 09:11:53.595830 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-04-09 09:11:53.597301 | orchestrator | Wednesday 09 April 2025 09:11:53 +0000 (0:00:02.292) 0:00:06.742 *******
2025-04-09 09:11:54.184789 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:11:54.185323 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:11:54.185355 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:11:54.185376 | orchestrator |
2025-04-09 09:11:54.185582 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-04-09 09:11:54.185959 | orchestrator | Wednesday 09 April 2025 09:11:54 +0000 (0:00:00.603) 0:00:07.345 *******
2025-04-09 09:11:54.828827 | orchestrator | changed: [testbed-node-3]
2025-04-09 09:11:54.830107 | orchestrator | changed: [testbed-node-5]
2025-04-09 09:11:54.830400 | orchestrator | changed: [testbed-node-4]
2025-04-09 09:11:54.831226 | orchestrator |
2025-04-09 09:11:54.832711 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 09:11:54.833572 | orchestrator | 2025-04-09 09:11:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 09:11:54.834276 | orchestrator | 2025-04-09 09:11:54 | INFO  | Please wait and do not abort execution.
2025-04-09 09:11:54.835673 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 09:11:54.837312 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 09:11:54.841208 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 09:11:54.845755 | orchestrator |
2025-04-09 09:11:54.847171 | orchestrator |
2025-04-09 09:11:54.847201 | orchestrator | TASKS RECAP ********************************************************************
2025-04-09 09:11:54.848828 | orchestrator | Wednesday 09 April 2025 09:11:54 +0000 (0:00:00.645) 0:00:07.990 *******
2025-04-09 09:11:54.848926 | orchestrator | ===============================================================================
2025-04-09 09:11:54.850755 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.29s
2025-04-09 09:11:54.851339 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.31s
2025-04-09 09:11:54.851369 | orchestrator | Check device availability ----------------------------------------------- 1.14s
2025-04-09 09:11:54.851867 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.73s
2025-04-09 09:11:54.852652 | orchestrator | Request device events from the kernel ----------------------------------- 0.65s
2025-04-09 09:11:54.853333 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.62s
2025-04-09 09:11:54.853828 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s
2025-04-09 09:11:54.854205 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s
2025-04-09 09:11:54.854631 | orchestrator | Remove all rook related logical devices --------------------------------- 0.26s
2025-04-09 09:11:57.548277 | orchestrator | 2025-04-09 09:11:57 | INFO  | Task 34facd97-c495-4e31-a702-20590261c3d6 (facts) was prepared for execution.
2025-04-09 09:12:01.712823 | orchestrator | 2025-04-09 09:11:57 | INFO  | It takes a moment until task 34facd97-c495-4e31-a702-20590261c3d6 (facts) has been started and output is visible here.
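The wipe-partitions play above boils down to three steps per OSD disk: erase filesystem/partition signatures with `wipefs`, zero the first 32M with `dd`, then have udev re-read the devices. A safe sketch of the same sequence against a scratch image file instead of a real `/dev/sdX` device (the image path is hypothetical):

```shell
#!/usr/bin/env bash
# Safe sketch of the wipe-partitions steps above; /tmp/fake-disk.img is a
# hypothetical stand-in for a real device such as /dev/sdb.
set -euo pipefail
DISK=/tmp/fake-disk.img
rm -f "$DISK"
truncate -s 64M "$DISK"
printf 'not-zero' | dd of="$DISK" conv=notrunc status=none   # pretend old data exists

# 1) wipefs -a removes filesystem/partition signatures (works on image files too;
#    guarded in case wipefs is unavailable on this system).
command -v wipefs >/dev/null && wipefs -a "$DISK" >/dev/null

# 2) Overwrite the first 32M with zeros, mirroring the "Overwrite first 32M
#    with zeros" task.
dd if=/dev/zero of="$DISK" bs=1M count=32 conv=notrunc status=none

# 3) On real hardware the play then runs:
#      udevadm control --reload && udevadm trigger
#    so the kernel re-reads the now-empty devices; meaningless for a plain file.
echo "wiped $DISK"
```

Zeroing only the leading region is enough here because Ceph, LVM, and partition-table metadata all live near the start of the disk; the play does not need to overwrite the whole device.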
2025-04-09 09:12:01.712961 | orchestrator |
2025-04-09 09:12:01.713509 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-04-09 09:12:01.717561 | orchestrator |
2025-04-09 09:12:01.718839 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-04-09 09:12:01.720032 | orchestrator | Wednesday 09 April 2025 09:12:01 +0000 (0:00:00.276) 0:00:00.276 *******
2025-04-09 09:12:02.762907 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:12:02.765213 | orchestrator | ok: [testbed-manager]
2025-04-09 09:12:02.766414 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:12:02.766483 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:12:02.768123 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:12:02.769926 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:12:02.771724 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:12:02.775253 | orchestrator |
2025-04-09 09:12:02.923839 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-04-09 09:12:02.923919 | orchestrator | Wednesday 09 April 2025 09:12:02 +0000 (0:00:01.051) 0:00:01.328 *******
2025-04-09 09:12:02.923946 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:12:03.005095 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:12:03.091774 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:12:03.187930 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:12:03.270945 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:12:04.027272 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:04.027996 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:12:04.031465 | orchestrator |
2025-04-09 09:12:04.032612 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-04-09 09:12:04.034114 | orchestrator |
2025-04-09 09:12:04.034702 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-04-09 09:12:04.035391 | orchestrator | Wednesday 09 April 2025 09:12:04 +0000 (0:00:01.267) 0:00:02.596 *******
2025-04-09 09:12:08.784223 | orchestrator | ok: [testbed-node-0]
2025-04-09 09:12:08.785808 | orchestrator | ok: [testbed-node-2]
2025-04-09 09:12:08.787255 | orchestrator | ok: [testbed-node-1]
2025-04-09 09:12:08.788601 | orchestrator | ok: [testbed-manager]
2025-04-09 09:12:08.790061 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:12:08.790745 | orchestrator | ok: [testbed-node-3]
2025-04-09 09:12:08.793010 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:12:08.793932 | orchestrator |
2025-04-09 09:12:08.795587 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-04-09 09:12:08.796263 | orchestrator |
2025-04-09 09:12:08.797372 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-04-09 09:12:08.797635 | orchestrator | Wednesday 09 April 2025 09:12:08 +0000 (0:00:04.757) 0:00:07.353 *******
2025-04-09 09:12:08.941138 | orchestrator | skipping: [testbed-manager]
2025-04-09 09:12:09.022431 | orchestrator | skipping: [testbed-node-0]
2025-04-09 09:12:09.121687 | orchestrator | skipping: [testbed-node-1]
2025-04-09 09:12:09.200429 | orchestrator | skipping: [testbed-node-2]
2025-04-09 09:12:09.284270 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:12:09.339764 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:09.340500 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:12:09.341590 | orchestrator |
2025-04-09 09:12:09.342807 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 09:12:09.342938 | orchestrator | 2025-04-09 09:12:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 09:12:09.343952 | orchestrator | 2025-04-09 09:12:09 | INFO  | Please wait and do not abort execution. 2025-04-09 09:12:09.343984 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-09 09:12:09.344921 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-09 09:12:09.346066 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-09 09:12:09.347090 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-09 09:12:09.348413 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-09 09:12:09.348704 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-09 09:12:09.348737 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-09 09:12:09.351215 | orchestrator | 2025-04-09 09:12:09.352485 | orchestrator | 2025-04-09 09:12:09.352511 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-09 09:12:09.352535 | orchestrator | Wednesday 09 April 2025 09:12:09 +0000 (0:00:00.552) 0:00:07.906 ******* 2025-04-09 09:12:09.352866 | orchestrator | =============================================================================== 2025-04-09 09:12:09.353703 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.76s 2025-04-09 09:12:09.354153 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.27s 2025-04-09 09:12:09.354629 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.05s 2025-04-09 09:12:09.355397 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2025-04-09 
09:12:12.881917 | orchestrator | 2025-04-09 09:12:12 | INFO  | Task 1d8306df-8519-4751-ac98-9a8988dee0aa (ceph-configure-lvm-volumes) was prepared for execution. 2025-04-09 09:12:17.658782 | orchestrator | 2025-04-09 09:12:12 | INFO  | It takes a moment until task 1d8306df-8519-4751-ac98-9a8988dee0aa (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-04-09 09:12:17.658933 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.13 2025-04-09 09:12:18.241648 | orchestrator | 2025-04-09 09:12:18.244688 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-04-09 09:12:18.244798 | orchestrator | 2025-04-09 09:12:18.244854 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-09 09:12:18.245287 | orchestrator | Wednesday 09 April 2025 09:12:18 +0000 (0:00:00.488) 0:00:00.488 ******* 2025-04-09 09:12:18.499341 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-09 09:12:18.500295 | orchestrator | 2025-04-09 09:12:18.501072 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-09 09:12:18.501782 | orchestrator | Wednesday 09 April 2025 09:12:18 +0000 (0:00:00.263) 0:00:00.751 ******* 2025-04-09 09:12:18.742970 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:12:18.743618 | orchestrator | 2025-04-09 09:12:18.744410 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:18.746355 | orchestrator | Wednesday 09 April 2025 09:12:18 +0000 (0:00:00.242) 0:00:00.994 ******* 2025-04-09 09:12:19.546823 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-04-09 09:12:19.548334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-04-09 09:12:19.549578 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-04-09 09:12:19.550683 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-04-09 09:12:19.551665 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-04-09 09:12:19.552817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-04-09 09:12:19.553773 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-04-09 09:12:19.554769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-04-09 09:12:19.555301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-04-09 09:12:19.556649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-04-09 09:12:19.557534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-04-09 09:12:19.557674 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-04-09 09:12:19.557779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-04-09 09:12:19.558517 | orchestrator | 2025-04-09 09:12:19.559172 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:19.560332 | orchestrator | Wednesday 09 April 2025 09:12:19 +0000 (0:00:00.802) 0:00:01.797 ******* 2025-04-09 09:12:19.756541 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:19.756725 | orchestrator | 2025-04-09 09:12:19.757272 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:19.758065 | orchestrator | Wednesday 09 April 2025 09:12:19 +0000 (0:00:00.212) 0:00:02.010 ******* 2025-04-09 09:12:19.965863 | 
orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:19.966739 | orchestrator | 2025-04-09 09:12:19.970291 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:19.971168 | orchestrator | Wednesday 09 April 2025 09:12:19 +0000 (0:00:00.206) 0:00:02.216 ******* 2025-04-09 09:12:20.208583 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:20.208994 | orchestrator | 2025-04-09 09:12:20.209413 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:20.210162 | orchestrator | Wednesday 09 April 2025 09:12:20 +0000 (0:00:00.243) 0:00:02.460 ******* 2025-04-09 09:12:20.453250 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:20.454729 | orchestrator | 2025-04-09 09:12:20.456914 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:20.660702 | orchestrator | Wednesday 09 April 2025 09:12:20 +0000 (0:00:00.244) 0:00:02.705 ******* 2025-04-09 09:12:20.660806 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:20.661383 | orchestrator | 2025-04-09 09:12:20.661904 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:20.663502 | orchestrator | Wednesday 09 April 2025 09:12:20 +0000 (0:00:00.207) 0:00:02.912 ******* 2025-04-09 09:12:20.859224 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:20.860192 | orchestrator | 2025-04-09 09:12:20.860622 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:20.861322 | orchestrator | Wednesday 09 April 2025 09:12:20 +0000 (0:00:00.199) 0:00:03.112 ******* 2025-04-09 09:12:21.099715 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:21.100011 | orchestrator | 2025-04-09 09:12:21.101055 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
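[Editor's note] The repeated "Add known links" tasks above attach /dev/disk/by-id aliases (the scsi-0QEMU_QEMU_HARDDISK_* / scsi-SQEMU_QEMU_HARDDISK_* / ata-* names that appear below) to each kernel device. A minimal sketch of that resolution, assuming a standard Linux /dev/disk/by-id layout; the function name is illustrative, not from the playbook:

```python
import os

def device_links(device, by_id_dir="/dev/disk/by-id"):
    """Collect the by-id symlinks that resolve to /dev/<device>.

    A QEMU disk typically shows up twice (scsi-0QEMU_... and
    scsi-SQEMU_... aliases of the same disk), as in this log.
    """
    links = []
    if not os.path.isdir(by_id_dir):
        return links  # e.g. inside a container without udev
    for name in sorted(os.listdir(by_id_dir)):
        path = os.path.join(by_id_dir, name)
        if os.path.realpath(path) == os.path.join("/dev", device):
            links.append(name)
    return links
```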
2025-04-09 09:12:21.101101 | orchestrator | Wednesday 09 April 2025 09:12:21 +0000 (0:00:00.238) 0:00:03.350 ******* 2025-04-09 09:12:21.315750 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:21.315907 | orchestrator | 2025-04-09 09:12:21.317424 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:21.318420 | orchestrator | Wednesday 09 April 2025 09:12:21 +0000 (0:00:00.216) 0:00:03.567 ******* 2025-04-09 09:12:22.291986 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b820d287-9f63-4dcd-a7bf-6ad94049faf1) 2025-04-09 09:12:22.294795 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b820d287-9f63-4dcd-a7bf-6ad94049faf1) 2025-04-09 09:12:22.294841 | orchestrator | 2025-04-09 09:12:22.296252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:22.298009 | orchestrator | Wednesday 09 April 2025 09:12:22 +0000 (0:00:00.973) 0:00:04.540 ******* 2025-04-09 09:12:22.770906 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b3569f59-7c0a-49c9-8d23-e5efe9e8038b) 2025-04-09 09:12:22.773000 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b3569f59-7c0a-49c9-8d23-e5efe9e8038b) 2025-04-09 09:12:22.773957 | orchestrator | 2025-04-09 09:12:22.775728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:22.776551 | orchestrator | Wednesday 09 April 2025 09:12:22 +0000 (0:00:00.480) 0:00:05.020 ******* 2025-04-09 09:12:23.222817 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c8b54f09-402c-41ce-aa47-11dce3c4404f) 2025-04-09 09:12:23.224889 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c8b54f09-402c-41ce-aa47-11dce3c4404f) 2025-04-09 09:12:23.225482 | orchestrator | 2025-04-09 09:12:23.228758 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2025-04-09 09:12:23.231159 | orchestrator | Wednesday 09 April 2025 09:12:23 +0000 (0:00:00.452) 0:00:05.473 ******* 2025-04-09 09:12:23.691959 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4c16418f-b1ea-409a-91f3-2a744e80e58e) 2025-04-09 09:12:23.694879 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4c16418f-b1ea-409a-91f3-2a744e80e58e) 2025-04-09 09:12:23.695902 | orchestrator | 2025-04-09 09:12:23.696570 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:23.697531 | orchestrator | Wednesday 09 April 2025 09:12:23 +0000 (0:00:00.466) 0:00:05.940 ******* 2025-04-09 09:12:24.091258 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-09 09:12:24.092696 | orchestrator | 2025-04-09 09:12:24.094423 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 09:12:24.095849 | orchestrator | Wednesday 09 April 2025 09:12:24 +0000 (0:00:00.396) 0:00:06.337 ******* 2025-04-09 09:12:24.564862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-04-09 09:12:24.564998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-04-09 09:12:24.565888 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-04-09 09:12:24.566868 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-04-09 09:12:24.570224 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-04-09 09:12:24.570571 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-04-09 09:12:24.570859 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-3 => (item=loop6) 2025-04-09 09:12:24.570894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-04-09 09:12:24.571095 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-04-09 09:12:24.571350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-04-09 09:12:24.571739 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-04-09 09:12:24.572074 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-04-09 09:12:24.572241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-04-09 09:12:24.573196 | orchestrator | 2025-04-09 09:12:24.850433 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 09:12:24.850572 | orchestrator | Wednesday 09 April 2025 09:12:24 +0000 (0:00:00.477) 0:00:06.814 ******* 2025-04-09 09:12:24.850601 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:25.105413 | orchestrator | 2025-04-09 09:12:25.105539 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 09:12:25.105558 | orchestrator | Wednesday 09 April 2025 09:12:24 +0000 (0:00:00.282) 0:00:07.096 ******* 2025-04-09 09:12:25.105586 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:25.109715 | orchestrator | 2025-04-09 09:12:25.109940 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 09:12:25.109982 | orchestrator | Wednesday 09 April 2025 09:12:25 +0000 (0:00:00.257) 0:00:07.354 ******* 2025-04-09 09:12:25.390532 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:25.391417 | orchestrator | 2025-04-09 09:12:25.393791 | orchestrator | TASK [Add known partitions to the 
list of available block devices] ************* 2025-04-09 09:12:25.394209 | orchestrator | Wednesday 09 April 2025 09:12:25 +0000 (0:00:00.284) 0:00:07.639 ******* 2025-04-09 09:12:25.679332 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:26.454906 | orchestrator | 2025-04-09 09:12:26.455022 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 09:12:26.455042 | orchestrator | Wednesday 09 April 2025 09:12:25 +0000 (0:00:00.287) 0:00:07.926 ******* 2025-04-09 09:12:26.455071 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:26.455170 | orchestrator | 2025-04-09 09:12:26.455192 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 09:12:26.657388 | orchestrator | Wednesday 09 April 2025 09:12:26 +0000 (0:00:00.780) 0:00:08.706 ******* 2025-04-09 09:12:26.657517 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:26.657623 | orchestrator | 2025-04-09 09:12:26.660199 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 09:12:26.843274 | orchestrator | Wednesday 09 April 2025 09:12:26 +0000 (0:00:00.203) 0:00:08.909 ******* 2025-04-09 09:12:26.843329 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:26.843558 | orchestrator | 2025-04-09 09:12:26.843582 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 09:12:26.843600 | orchestrator | Wednesday 09 April 2025 09:12:26 +0000 (0:00:00.184) 0:00:09.094 ******* 2025-04-09 09:12:27.048331 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:27.048501 | orchestrator | 2025-04-09 09:12:27.048527 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 09:12:27.048880 | orchestrator | Wednesday 09 April 2025 09:12:27 +0000 (0:00:00.203) 0:00:09.298 ******* 2025-04-09 09:12:27.665422 | orchestrator | ok: 
[testbed-node-3] => (item=sda1) 2025-04-09 09:12:27.665595 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-04-09 09:12:27.665622 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-04-09 09:12:27.666154 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-04-09 09:12:27.668258 | orchestrator | 2025-04-09 09:12:27.668357 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 09:12:27.668569 | orchestrator | Wednesday 09 April 2025 09:12:27 +0000 (0:00:00.620) 0:00:09.919 ******* 2025-04-09 09:12:27.865341 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:27.867058 | orchestrator | 2025-04-09 09:12:27.867309 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 09:12:27.867882 | orchestrator | Wednesday 09 April 2025 09:12:27 +0000 (0:00:00.197) 0:00:10.117 ******* 2025-04-09 09:12:28.090919 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:28.095076 | orchestrator | 2025-04-09 09:12:28.095109 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 09:12:28.095133 | orchestrator | Wednesday 09 April 2025 09:12:28 +0000 (0:00:00.224) 0:00:10.341 ******* 2025-04-09 09:12:28.300216 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:28.300393 | orchestrator | 2025-04-09 09:12:28.300423 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 09:12:28.302498 | orchestrator | Wednesday 09 April 2025 09:12:28 +0000 (0:00:00.212) 0:00:10.554 ******* 2025-04-09 09:12:28.528785 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:28.528955 | orchestrator | 2025-04-09 09:12:28.532154 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-09 09:12:28.532189 | orchestrator | Wednesday 09 April 2025 09:12:28 +0000 (0:00:00.227) 0:00:10.781 ******* 2025-04-09 
09:12:28.690666 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-04-09 09:12:28.690738 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-04-09 09:12:28.691049 | orchestrator | 2025-04-09 09:12:28.692421 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-04-09 09:12:28.692831 | orchestrator | Wednesday 09 April 2025 09:12:28 +0000 (0:00:00.163) 0:00:10.945 ******* 2025-04-09 09:12:28.809770 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:28.811751 | orchestrator | 2025-04-09 09:12:28.812174 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-09 09:12:28.815741 | orchestrator | Wednesday 09 April 2025 09:12:28 +0000 (0:00:00.118) 0:00:11.063 ******* 2025-04-09 09:12:29.081134 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:29.082101 | orchestrator | 2025-04-09 09:12:29.085420 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-09 09:12:29.085975 | orchestrator | Wednesday 09 April 2025 09:12:29 +0000 (0:00:00.270) 0:00:11.334 ******* 2025-04-09 09:12:29.220608 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:29.221330 | orchestrator | 2025-04-09 09:12:29.221363 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-09 09:12:29.221388 | orchestrator | Wednesday 09 April 2025 09:12:29 +0000 (0:00:00.137) 0:00:11.471 ******* 2025-04-09 09:12:29.368831 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:12:29.371321 | orchestrator | 2025-04-09 09:12:29.375363 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-09 09:12:29.375591 | orchestrator | Wednesday 09 April 2025 09:12:29 +0000 (0:00:00.149) 0:00:11.621 ******* 2025-04-09 09:12:29.577414 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 
'value': {'osd_lvm_uuid': '0f870d8c-c6a0-5b48-8905-7c7f5ac74310'}}) 2025-04-09 09:12:29.579820 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fb3b6432-c1e2-58b0-8349-44fe229d54e8'}}) 2025-04-09 09:12:29.581279 | orchestrator | 2025-04-09 09:12:29.583240 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-04-09 09:12:29.584775 | orchestrator | Wednesday 09 April 2025 09:12:29 +0000 (0:00:00.206) 0:00:11.828 ******* 2025-04-09 09:12:29.753799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0f870d8c-c6a0-5b48-8905-7c7f5ac74310'}})  2025-04-09 09:12:29.754820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fb3b6432-c1e2-58b0-8349-44fe229d54e8'}})  2025-04-09 09:12:29.755601 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:29.756078 | orchestrator | 2025-04-09 09:12:29.756713 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-04-09 09:12:29.757963 | orchestrator | Wednesday 09 April 2025 09:12:29 +0000 (0:00:00.178) 0:00:12.007 ******* 2025-04-09 09:12:30.007036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0f870d8c-c6a0-5b48-8905-7c7f5ac74310'}})  2025-04-09 09:12:30.007197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fb3b6432-c1e2-58b0-8349-44fe229d54e8'}})  2025-04-09 09:12:30.007259 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:30.010622 | orchestrator | 2025-04-09 09:12:30.011132 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-04-09 09:12:30.011270 | orchestrator | Wednesday 09 April 2025 09:12:30 +0000 (0:00:00.250) 0:00:12.257 ******* 2025-04-09 09:12:30.170938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': 
{'osd_lvm_uuid': '0f870d8c-c6a0-5b48-8905-7c7f5ac74310'}})  2025-04-09 09:12:30.171164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fb3b6432-c1e2-58b0-8349-44fe229d54e8'}})  2025-04-09 09:12:30.171197 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:30.171215 | orchestrator | 2025-04-09 09:12:30.171237 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-04-09 09:12:30.172332 | orchestrator | Wednesday 09 April 2025 09:12:30 +0000 (0:00:00.163) 0:00:12.421 ******* 2025-04-09 09:12:30.312436 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:12:30.314570 | orchestrator | 2025-04-09 09:12:30.434540 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-04-09 09:12:30.434587 | orchestrator | Wednesday 09 April 2025 09:12:30 +0000 (0:00:00.145) 0:00:12.566 ******* 2025-04-09 09:12:30.434610 | orchestrator | ok: [testbed-node-3] 2025-04-09 09:12:30.436918 | orchestrator | 2025-04-09 09:12:30.437939 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-04-09 09:12:30.439799 | orchestrator | Wednesday 09 April 2025 09:12:30 +0000 (0:00:00.120) 0:00:12.686 ******* 2025-04-09 09:12:30.562241 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:30.563298 | orchestrator | 2025-04-09 09:12:30.564212 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-04-09 09:12:30.565437 | orchestrator | Wednesday 09 April 2025 09:12:30 +0000 (0:00:00.128) 0:00:12.815 ******* 2025-04-09 09:12:30.711237 | orchestrator | skipping: [testbed-node-3] 2025-04-09 09:12:30.712489 | orchestrator | 2025-04-09 09:12:30.713418 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-04-09 09:12:30.716699 | orchestrator | Wednesday 09 April 2025 09:12:30 +0000 (0:00:00.148) 0:00:12.963 ******* 
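[Editor's note] In the partition pass above, only the root disk expands (sda into sda1, sda14, sda15, sda16) while the data disks sdb/sdc stay bare and so remain OSD candidates. A sketch of that grouping under a simple sdX-style naming heuristic (hypothetical helper; NVMe names like nvme0n1p1 would need the "p" separator handled as well):

```python
def partitions_of(disk, devices):
    """Return the partitions of `disk` from a flat device-name list,
    e.g. "sda" -> ["sda1", "sda14", ...]; loop/bare disks yield []."""
    return [d for d in devices
            if d != disk and d.startswith(disk) and d[len(disk):].isdigit()]
```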
2025-04-09 09:12:30.977227 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:12:30.977585 | orchestrator |
2025-04-09 09:12:30.981624 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-04-09 09:12:30.983204 | orchestrator | Wednesday 09 April 2025 09:12:30 +0000 (0:00:00.266) 0:00:13.230 *******
2025-04-09 09:12:31.117615 | orchestrator | ok: [testbed-node-3] => {
2025-04-09 09:12:31.118849 | orchestrator |     "ceph_osd_devices": {
2025-04-09 09:12:31.118892 | orchestrator |         "sdb": {
2025-04-09 09:12:31.120865 | orchestrator |             "osd_lvm_uuid": "0f870d8c-c6a0-5b48-8905-7c7f5ac74310"
2025-04-09 09:12:31.122385 | orchestrator |         },
2025-04-09 09:12:31.123227 | orchestrator |         "sdc": {
2025-04-09 09:12:31.126437 | orchestrator |             "osd_lvm_uuid": "fb3b6432-c1e2-58b0-8349-44fe229d54e8"
2025-04-09 09:12:31.126942 | orchestrator |         }
2025-04-09 09:12:31.127689 | orchestrator |     }
2025-04-09 09:12:31.128324 | orchestrator | }
2025-04-09 09:12:31.129075 | orchestrator |
2025-04-09 09:12:31.129449 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-04-09 09:12:31.131397 | orchestrator | Wednesday 09 April 2025 09:12:31 +0000 (0:00:00.139) 0:00:13.369 *******
2025-04-09 09:12:31.261039 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:12:31.263452 | orchestrator |
2025-04-09 09:12:31.264702 | orchestrator | TASK [Print DB devices] ********************************************************
2025-04-09 09:12:31.265670 | orchestrator | Wednesday 09 April 2025 09:12:31 +0000 (0:00:00.142) 0:00:13.512 *******
2025-04-09 09:12:31.424662 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:12:31.426221 | orchestrator |
2025-04-09 09:12:31.430843 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-04-09 09:12:31.432055 | orchestrator | Wednesday 09 April 2025 09:12:31 +0000 (0:00:00.164) 0:00:13.676 *******
2025-04-09 09:12:31.598444 | orchestrator | skipping: [testbed-node-3]
2025-04-09 09:12:31.598956 | orchestrator |
2025-04-09 09:12:31.600260 | orchestrator | TASK [Print configuration data] ************************************************
2025-04-09 09:12:31.601877 | orchestrator | Wednesday 09 April 2025 09:12:31 +0000 (0:00:00.174) 0:00:13.850 *******
2025-04-09 09:12:31.891354 | orchestrator | changed: [testbed-node-3] => {
2025-04-09 09:12:31.891815 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-04-09 09:12:31.893311 | orchestrator |         "ceph_osd_devices": {
2025-04-09 09:12:31.896826 | orchestrator |             "sdb": {
2025-04-09 09:12:31.897756 | orchestrator |                 "osd_lvm_uuid": "0f870d8c-c6a0-5b48-8905-7c7f5ac74310"
2025-04-09 09:12:31.899030 | orchestrator |             },
2025-04-09 09:12:31.899856 | orchestrator |             "sdc": {
2025-04-09 09:12:31.905258 | orchestrator |                 "osd_lvm_uuid": "fb3b6432-c1e2-58b0-8349-44fe229d54e8"
2025-04-09 09:12:31.907022 | orchestrator |             }
2025-04-09 09:12:31.908515 | orchestrator |         },
2025-04-09 09:12:31.910642 | orchestrator |         "lvm_volumes": [
2025-04-09 09:12:31.910986 | orchestrator |             {
2025-04-09 09:12:31.912015 | orchestrator |                 "data": "osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310",
2025-04-09 09:12:31.912181 | orchestrator |                 "data_vg": "ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310"
2025-04-09 09:12:31.912829 | orchestrator |             },
2025-04-09 09:12:31.913517 | orchestrator |             {
2025-04-09 09:12:31.913964 | orchestrator |                 "data": "osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8",
2025-04-09 09:12:31.914924 | orchestrator |                 "data_vg": "ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8"
2025-04-09 09:12:31.915014 | orchestrator |             }
2025-04-09 09:12:31.915395 | orchestrator |         ]
2025-04-09 09:12:31.916076 | orchestrator |     }
2025-04-09 09:12:31.916519 | orchestrator | }
2025-04-09 09:12:31.916998 | orchestrator |
2025-04-09 09:12:31.920122 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
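[Editor's note] The configuration data printed above shows the naming rule directly: each per-device osd_lvm_uuid becomes an LV "osd-block-<uuid>" inside a VG "ceph-<uuid>" (block-only layout, since the DB/WAL branches were skipped). A sketch reproducing that derivation from the logged structure:

```python
def build_lvm_volumes(ceph_osd_devices):
    """Derive block-only lvm_volumes entries from per-device UUIDs,
    matching the _ceph_configure_lvm_config_data output in this log."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for _dev, cfg in sorted(ceph_osd_devices.items())
    ]
```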
2025-04-09 09:12:31.920849 | orchestrator | Wednesday 09 April 2025 09:12:31 +0000 (0:00:00.291) 0:00:14.142 ******* 2025-04-09 09:12:34.198652 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-09 09:12:34.199461 | orchestrator | 2025-04-09 09:12:34.199532 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-04-09 09:12:34.203493 | orchestrator | 2025-04-09 09:12:34.204088 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-09 09:12:34.204660 | orchestrator | Wednesday 09 April 2025 09:12:34 +0000 (0:00:02.305) 0:00:16.447 ******* 2025-04-09 09:12:34.459262 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-04-09 09:12:34.461098 | orchestrator | 2025-04-09 09:12:34.722803 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-09 09:12:34.722881 | orchestrator | Wednesday 09 April 2025 09:12:34 +0000 (0:00:00.261) 0:00:16.708 ******* 2025-04-09 09:12:34.722908 | orchestrator | ok: [testbed-node-4] 2025-04-09 09:12:34.724279 | orchestrator | 2025-04-09 09:12:34.725706 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:34.727295 | orchestrator | Wednesday 09 April 2025 09:12:34 +0000 (0:00:00.266) 0:00:16.974 ******* 2025-04-09 09:12:35.126250 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-04-09 09:12:35.127703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-04-09 09:12:35.130163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-04-09 09:12:35.131303 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-04-09 09:12:35.133438 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-04-09 09:12:35.134338 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-04-09 09:12:35.135060 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-04-09 09:12:35.137500 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-04-09 09:12:35.137903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-04-09 09:12:35.138586 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-04-09 09:12:35.139190 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-04-09 09:12:35.139603 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-04-09 09:12:35.140209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-04-09 09:12:35.141193 | orchestrator | 2025-04-09 09:12:35.141575 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:35.142451 | orchestrator | Wednesday 09 April 2025 09:12:35 +0000 (0:00:00.403) 0:00:17.378 ******* 2025-04-09 09:12:35.339801 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:12:35.340726 | orchestrator | 2025-04-09 09:12:35.341764 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:35.342620 | orchestrator | Wednesday 09 April 2025 09:12:35 +0000 (0:00:00.213) 0:00:17.591 ******* 2025-04-09 09:12:35.582866 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:12:35.586080 | orchestrator | 2025-04-09 09:12:35.587409 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:35.588791 | orchestrator | 
Wednesday 09 April 2025 09:12:35 +0000 (0:00:00.242) 0:00:17.834 ******* 2025-04-09 09:12:35.796715 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:12:35.799503 | orchestrator | 2025-04-09 09:12:35.801786 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:35.804155 | orchestrator | Wednesday 09 April 2025 09:12:35 +0000 (0:00:00.211) 0:00:18.046 ******* 2025-04-09 09:12:36.521353 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:12:36.522832 | orchestrator | 2025-04-09 09:12:36.523254 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:36.526283 | orchestrator | Wednesday 09 April 2025 09:12:36 +0000 (0:00:00.726) 0:00:18.773 ******* 2025-04-09 09:12:36.746571 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:12:36.748860 | orchestrator | 2025-04-09 09:12:36.748918 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:36.748943 | orchestrator | Wednesday 09 April 2025 09:12:36 +0000 (0:00:00.225) 0:00:18.998 ******* 2025-04-09 09:12:36.976380 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:12:36.977696 | orchestrator | 2025-04-09 09:12:36.977729 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:36.981160 | orchestrator | Wednesday 09 April 2025 09:12:36 +0000 (0:00:00.224) 0:00:19.223 ******* 2025-04-09 09:12:37.219900 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:12:37.221631 | orchestrator | 2025-04-09 09:12:37.460368 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:37.460419 | orchestrator | Wednesday 09 April 2025 09:12:37 +0000 (0:00:00.247) 0:00:19.471 ******* 2025-04-09 09:12:37.460442 | orchestrator | skipping: [testbed-node-4] 2025-04-09 09:12:37.461633 | orchestrator | 2025-04-09 09:12:37.465371 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:37.466556 | orchestrator | Wednesday 09 April 2025 09:12:37 +0000 (0:00:00.238) 0:00:19.709 ******* 2025-04-09 09:12:37.893425 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1316f4e2-7b99-46a1-8513-1c51037dcfb5) 2025-04-09 09:12:37.894564 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1316f4e2-7b99-46a1-8513-1c51037dcfb5) 2025-04-09 09:12:37.896312 | orchestrator | 2025-04-09 09:12:37.897525 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:37.900816 | orchestrator | Wednesday 09 April 2025 09:12:37 +0000 (0:00:00.435) 0:00:20.144 ******* 2025-04-09 09:12:38.375769 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0a277efc-ca83-41bc-9a13-7ec21996cbcf) 2025-04-09 09:12:38.377416 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0a277efc-ca83-41bc-9a13-7ec21996cbcf) 2025-04-09 09:12:38.377819 | orchestrator | 2025-04-09 09:12:38.378689 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:38.379348 | orchestrator | Wednesday 09 April 2025 09:12:38 +0000 (0:00:00.483) 0:00:20.628 ******* 2025-04-09 09:12:38.827279 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_20a266e5-e4ad-4a38-8cc4-79e311575ecc) 2025-04-09 09:12:38.828419 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_20a266e5-e4ad-4a38-8cc4-79e311575ecc) 2025-04-09 09:12:38.830871 | orchestrator | 2025-04-09 09:12:38.834354 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 09:12:38.834809 | orchestrator | Wednesday 09 April 2025 09:12:38 +0000 (0:00:00.451) 0:00:21.079 ******* 2025-04-09 09:12:39.253840 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_c3189fe9-451f-4fc6-9bec-4c0706cd3177)
2025-04-09 09:12:39.258626 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c3189fe9-451f-4fc6-9bec-4c0706cd3177)
2025-04-09 09:12:39.259344 | orchestrator |
2025-04-09 09:12:39.260278 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 09:12:39.263670 | orchestrator | Wednesday 09 April 2025 09:12:39 +0000 (0:00:00.425) 0:00:21.504 *******
2025-04-09 09:12:39.862417 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-04-09 09:12:40.836234 | orchestrator |
2025-04-09 09:12:40.836317 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:40.836328 | orchestrator | Wednesday 09 April 2025 09:12:39 +0000 (0:00:00.604) 0:00:22.109 *******
2025-04-09 09:12:40.836346 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-04-09 09:12:40.837825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-04-09 09:12:40.838645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-04-09 09:12:40.842346 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-04-09 09:12:40.843083 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-04-09 09:12:40.844096 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-04-09 09:12:40.845348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-04-09 09:12:40.846177 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-04-09 09:12:40.847735 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-04-09 09:12:40.848424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-04-09 09:12:40.849136 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-04-09 09:12:40.850459 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-04-09 09:12:40.851057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-04-09 09:12:40.851621 | orchestrator |
2025-04-09 09:12:40.853280 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:40.853695 | orchestrator | Wednesday 09 April 2025 09:12:40 +0000 (0:00:00.977) 0:00:23.087 *******
2025-04-09 09:12:41.066634 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:41.066791 | orchestrator |
2025-04-09 09:12:41.067316 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:41.067819 | orchestrator | Wednesday 09 April 2025 09:12:41 +0000 (0:00:00.231) 0:00:23.319 *******
2025-04-09 09:12:41.319556 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:41.323232 | orchestrator |
2025-04-09 09:12:41.325024 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:41.325056 | orchestrator | Wednesday 09 April 2025 09:12:41 +0000 (0:00:00.252) 0:00:23.571 *******
2025-04-09 09:12:41.554763 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:41.559085 | orchestrator |
2025-04-09 09:12:41.559537 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:41.560304 | orchestrator | Wednesday 09 April 2025 09:12:41 +0000 (0:00:00.232) 0:00:23.804 *******
2025-04-09 09:12:41.781365 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:41.781543 | orchestrator |
2025-04-09 09:12:41.782074 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:41.782246 | orchestrator | Wednesday 09 April 2025 09:12:41 +0000 (0:00:00.225) 0:00:24.029 *******
2025-04-09 09:12:41.999349 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:41.999637 | orchestrator |
2025-04-09 09:12:42.000056 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:42.000455 | orchestrator | Wednesday 09 April 2025 09:12:41 +0000 (0:00:00.222) 0:00:24.252 *******
2025-04-09 09:12:42.247054 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:42.247394 | orchestrator |
2025-04-09 09:12:42.247429 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:42.247707 | orchestrator | Wednesday 09 April 2025 09:12:42 +0000 (0:00:00.244) 0:00:24.497 *******
2025-04-09 09:12:42.462262 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:42.463706 | orchestrator |
2025-04-09 09:12:42.465517 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:42.468592 | orchestrator | Wednesday 09 April 2025 09:12:42 +0000 (0:00:00.216) 0:00:24.713 *******
2025-04-09 09:12:42.694142 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:42.694225 | orchestrator |
2025-04-09 09:12:42.696167 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:42.697955 | orchestrator | Wednesday 09 April 2025 09:12:42 +0000 (0:00:00.230) 0:00:24.943 *******
2025-04-09 09:12:43.942068 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-04-09 09:12:43.943293 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-04-09 09:12:43.944375 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-04-09 09:12:43.948016 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-04-09 09:12:43.951187 | orchestrator |
2025-04-09 09:12:43.953303 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:43.955611 | orchestrator | Wednesday 09 April 2025 09:12:43 +0000 (0:00:01.249) 0:00:26.193 *******
2025-04-09 09:12:44.217702 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:44.218152 | orchestrator |
2025-04-09 09:12:44.219169 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:44.220577 | orchestrator | Wednesday 09 April 2025 09:12:44 +0000 (0:00:00.275) 0:00:26.468 *******
2025-04-09 09:12:44.440979 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:44.442623 | orchestrator |
2025-04-09 09:12:44.443257 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:44.445097 | orchestrator | Wednesday 09 April 2025 09:12:44 +0000 (0:00:00.224) 0:00:26.692 *******
2025-04-09 09:12:44.672737 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:44.676518 | orchestrator |
2025-04-09 09:12:44.678932 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:44.683751 | orchestrator | Wednesday 09 April 2025 09:12:44 +0000 (0:00:00.228) 0:00:26.921 *******
2025-04-09 09:12:44.925265 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:44.926013 | orchestrator |
2025-04-09 09:12:44.927596 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-04-09 09:12:44.932140 | orchestrator | Wednesday 09 April 2025 09:12:44 +0000 (0:00:00.255) 0:00:27.177 *******
2025-04-09 09:12:45.126286 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-04-09 09:12:45.126967 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
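[Editor's note] In the `Set UUIDs for OSD VGs/LVs` task above, each device arrives with `'value': None` and leaves the play with a stable `osd_lvm_uuid` whose version nibble is 5 (e.g. `2af2dad7-be7b-5062-...`), i.e. a name-based UUIDv5 rather than a random UUIDv4, so re-running the play against the same host and device should reproduce the same name. A minimal sketch of that idea in Python follows; the namespace and the name string are illustrative assumptions, not the exact inputs the playbook uses:

```python
import uuid


def osd_lvm_uuid(hostname: str, device: str) -> str:
    """Derive a stable, name-based (version 5) UUID for a host/device pair.

    NOTE: the namespace and name format here are assumptions for
    illustration; the playbook's actual inputs are not shown in this log.
    """
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}"))


# Deterministic: identical inputs always yield the identical UUID.
u1 = osd_lvm_uuid("testbed-node-4", "sdb")
u2 = osd_lvm_uuid("testbed-node-4", "sdb")
assert u1 == u2
assert uuid.UUID(u1).version == 5
```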
2025-04-09 09:12:45.128223 | orchestrator |
2025-04-09 09:12:45.129299 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-04-09 09:12:45.133003 | orchestrator | Wednesday 09 April 2025 09:12:45 +0000 (0:00:00.201) 0:00:27.378 *******
2025-04-09 09:12:45.282407 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:45.283241 | orchestrator |
2025-04-09 09:12:45.286963 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-04-09 09:12:45.287606 | orchestrator | Wednesday 09 April 2025 09:12:45 +0000 (0:00:00.153) 0:00:27.532 *******
2025-04-09 09:12:45.430525 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:45.431321 | orchestrator |
2025-04-09 09:12:45.432368 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-04-09 09:12:45.433780 | orchestrator | Wednesday 09 April 2025 09:12:45 +0000 (0:00:00.149) 0:00:27.682 *******
2025-04-09 09:12:45.599975 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:45.600604 | orchestrator |
2025-04-09 09:12:45.602866 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-04-09 09:12:45.604100 | orchestrator | Wednesday 09 April 2025 09:12:45 +0000 (0:00:00.166) 0:00:27.849 *******
2025-04-09 09:12:45.749923 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:12:45.754344 | orchestrator |
2025-04-09 09:12:45.941313 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-04-09 09:12:45.941417 | orchestrator | Wednesday 09 April 2025 09:12:45 +0000 (0:00:00.151) 0:00:28.000 *******
2025-04-09 09:12:45.941445 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2af2dad7-be7b-5062-ac12-4fd441a74994'}})
2025-04-09 09:12:45.943732 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '33569830-f4e2-59af-bd76-781c4d067c52'}})
2025-04-09 09:12:45.944775 | orchestrator |
2025-04-09 09:12:45.951425 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-04-09 09:12:45.958579 | orchestrator | Wednesday 09 April 2025 09:12:45 +0000 (0:00:00.188) 0:00:28.188 *******
2025-04-09 09:12:46.110134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2af2dad7-be7b-5062-ac12-4fd441a74994'}})
2025-04-09 09:12:46.110545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '33569830-f4e2-59af-bd76-781c4d067c52'}})
2025-04-09 09:12:46.111701 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:46.115284 | orchestrator |
2025-04-09 09:12:46.512602 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-04-09 09:12:46.512688 | orchestrator | Wednesday 09 April 2025 09:12:46 +0000 (0:00:00.171) 0:00:28.360 *******
2025-04-09 09:12:46.512718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2af2dad7-be7b-5062-ac12-4fd441a74994'}})
2025-04-09 09:12:46.513728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '33569830-f4e2-59af-bd76-781c4d067c52'}})
2025-04-09 09:12:46.514951 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:46.516741 | orchestrator |
2025-04-09 09:12:46.518137 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-04-09 09:12:46.521018 | orchestrator | Wednesday 09 April 2025 09:12:46 +0000 (0:00:00.401) 0:00:28.762 *******
2025-04-09 09:12:46.693782 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2af2dad7-be7b-5062-ac12-4fd441a74994'}})
2025-04-09 09:12:46.698961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '33569830-f4e2-59af-bd76-781c4d067c52'}})
2025-04-09 09:12:46.708416 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:46.708452 | orchestrator |
2025-04-09 09:12:46.709151 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-04-09 09:12:46.709380 | orchestrator | Wednesday 09 April 2025 09:12:46 +0000 (0:00:00.181) 0:00:28.943 *******
2025-04-09 09:12:46.846265 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:12:46.847429 | orchestrator |
2025-04-09 09:12:46.848709 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-04-09 09:12:46.850082 | orchestrator | Wednesday 09 April 2025 09:12:46 +0000 (0:00:00.152) 0:00:29.096 *******
2025-04-09 09:12:47.011401 | orchestrator | ok: [testbed-node-4]
2025-04-09 09:12:47.012625 | orchestrator |
2025-04-09 09:12:47.014368 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-04-09 09:12:47.015354 | orchestrator | Wednesday 09 April 2025 09:12:47 +0000 (0:00:00.164) 0:00:29.260 *******
2025-04-09 09:12:47.196664 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:47.197765 | orchestrator |
2025-04-09 09:12:47.197795 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-04-09 09:12:47.199639 | orchestrator | Wednesday 09 April 2025 09:12:47 +0000 (0:00:00.186) 0:00:29.447 *******
2025-04-09 09:12:47.349787 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:47.351252 | orchestrator |
2025-04-09 09:12:47.352452 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-04-09 09:12:47.353719 | orchestrator | Wednesday 09 April 2025 09:12:47 +0000 (0:00:00.154) 0:00:29.601 *******
2025-04-09 09:12:47.524525 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:47.528155 | orchestrator |
2025-04-09 09:12:47.531505 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-04-09 09:12:47.721389 | orchestrator | Wednesday 09 April 2025 09:12:47 +0000 (0:00:00.171) 0:00:29.773 *******
2025-04-09 09:12:47.721460 | orchestrator | ok: [testbed-node-4] => {
2025-04-09 09:12:47.723782 | orchestrator |     "ceph_osd_devices": {
2025-04-09 09:12:47.725054 | orchestrator |         "sdb": {
2025-04-09 09:12:47.729275 | orchestrator |             "osd_lvm_uuid": "2af2dad7-be7b-5062-ac12-4fd441a74994"
2025-04-09 09:12:47.730012 | orchestrator |         },
2025-04-09 09:12:47.730063 | orchestrator |         "sdc": {
2025-04-09 09:12:47.730083 | orchestrator |             "osd_lvm_uuid": "33569830-f4e2-59af-bd76-781c4d067c52"
2025-04-09 09:12:47.730761 | orchestrator |         }
2025-04-09 09:12:47.731286 | orchestrator |     }
2025-04-09 09:12:47.731917 | orchestrator | }
2025-04-09 09:12:47.732565 | orchestrator |
2025-04-09 09:12:47.733249 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-04-09 09:12:47.733948 | orchestrator | Wednesday 09 April 2025 09:12:47 +0000 (0:00:00.196) 0:00:29.969 *******
2025-04-09 09:12:47.889687 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:47.891265 | orchestrator |
2025-04-09 09:12:47.892052 | orchestrator | TASK [Print DB devices] ********************************************************
2025-04-09 09:12:47.893875 | orchestrator | Wednesday 09 April 2025 09:12:47 +0000 (0:00:00.171) 0:00:30.141 *******
2025-04-09 09:12:48.097195 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:48.097726 | orchestrator |
2025-04-09 09:12:48.099132 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-04-09 09:12:48.103472 | orchestrator | Wednesday 09 April 2025 09:12:48 +0000 (0:00:00.205) 0:00:30.346 *******
2025-04-09 09:12:48.250861 | orchestrator | skipping: [testbed-node-4]
2025-04-09 09:12:48.251162 | orchestrator |
2025-04-09 09:12:48.253016 | orchestrator | TASK [Print configuration data] ************************************************
2025-04-09 09:12:48.254604 | orchestrator | Wednesday 09 April 2025 09:12:48 +0000 (0:00:00.155) 0:00:30.502 *******
2025-04-09 09:12:48.817753 | orchestrator | changed: [testbed-node-4] => {
2025-04-09 09:12:48.821259 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-04-09 09:12:48.823718 | orchestrator |         "ceph_osd_devices": {
2025-04-09 09:12:48.825118 | orchestrator |             "sdb": {
2025-04-09 09:12:48.826136 | orchestrator |                 "osd_lvm_uuid": "2af2dad7-be7b-5062-ac12-4fd441a74994"
2025-04-09 09:12:48.826856 | orchestrator |             },
2025-04-09 09:12:48.827691 | orchestrator |             "sdc": {
2025-04-09 09:12:48.828176 | orchestrator |                 "osd_lvm_uuid": "33569830-f4e2-59af-bd76-781c4d067c52"
2025-04-09 09:12:48.829624 | orchestrator |             }
2025-04-09 09:12:48.830088 | orchestrator |         },
2025-04-09 09:12:48.831026 | orchestrator |         "lvm_volumes": [
2025-04-09 09:12:48.831128 | orchestrator |             {
2025-04-09 09:12:48.832168 | orchestrator |                 "data": "osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994",
2025-04-09 09:12:48.833840 | orchestrator |                 "data_vg": "ceph-2af2dad7-be7b-5062-ac12-4fd441a74994"
2025-04-09 09:12:48.834103 | orchestrator |             },
2025-04-09 09:12:48.834145 | orchestrator |             {
2025-04-09 09:12:48.836836 | orchestrator |                 "data": "osd-block-33569830-f4e2-59af-bd76-781c4d067c52",
2025-04-09 09:12:48.836980 | orchestrator |                 "data_vg": "ceph-33569830-f4e2-59af-bd76-781c4d067c52"
2025-04-09 09:12:48.837213 | orchestrator |             }
2025-04-09 09:12:48.837454 | orchestrator |         ]
2025-04-09 09:12:48.837877 | orchestrator |     }
2025-04-09 09:12:48.838108 | orchestrator | }
2025-04-09 09:12:48.838449 | orchestrator |
2025-04-09 09:12:48.838696 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-04-09 09:12:48.839086 | orchestrator | Wednesday 09 April 2025 09:12:48 +0000 (0:00:00.561) 0:00:31.064 *******
2025-04-09 09:12:50.376737 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-04-09 09:12:50.377318 | orchestrator |
2025-04-09 09:12:50.381545 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-04-09 09:12:50.382348 | orchestrator |
2025-04-09 09:12:50.383013 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-04-09 09:12:50.383932 | orchestrator | Wednesday 09 April 2025 09:12:50 +0000 (0:00:01.561) 0:00:32.626 *******
2025-04-09 09:12:50.670242 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-04-09 09:12:50.670409 | orchestrator |
2025-04-09 09:12:50.670901 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-04-09 09:12:50.671414 | orchestrator | Wednesday 09 April 2025 09:12:50 +0000 (0:00:00.296) 0:00:32.922 *******
2025-04-09 09:12:51.139067 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:12:51.140625 | orchestrator |
2025-04-09 09:12:51.141814 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 09:12:51.143255 | orchestrator | Wednesday 09 April 2025 09:12:51 +0000 (0:00:00.469) 0:00:33.391 *******
2025-04-09 09:12:51.611041 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-04-09 09:12:51.612795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-04-09 09:12:51.613970 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-04-09 09:12:51.615225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-04-09 09:12:51.616223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-04-09 09:12:51.616776 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
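[Editor's note] The `Print configuration data` output above confirms what the block-only branch of the play computes: each entry of `ceph_osd_devices` is mapped to a `lvm_volumes` item whose LV is `osd-block-<uuid>` and whose volume group is `ceph-<uuid>`. A minimal standalone sketch of that mapping in Python (not the playbook's actual Jinja2 templating):

```python
def build_lvm_volumes(ceph_osd_devices: dict) -> list:
    """Reproduce the block-only lvm_volumes mapping seen in the log:
    the per-device UUID names both the logical volume (osd-block-<uuid>)
    and its volume group (ceph-<uuid>)."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in ceph_osd_devices.values()
    ]


# Values taken from the testbed-node-4 debug output above.
devices = {
    "sdb": {"osd_lvm_uuid": "2af2dad7-be7b-5062-ac12-4fd441a74994"},
    "sdc": {"osd_lvm_uuid": "33569830-f4e2-59af-bd76-781c4d067c52"},
}
volumes = build_lvm_volumes(devices)
assert volumes[0]["data"] == "osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994"
assert volumes[1]["data_vg"] == "ceph-33569830-f4e2-59af-bd76-781c4d067c52"
```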
2025-04-09 09:12:51.617611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-04-09 09:12:51.617928 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-04-09 09:12:51.618376 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-04-09 09:12:51.619022 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-04-09 09:12:51.619664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-04-09 09:12:51.619908 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-04-09 09:12:51.620365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-04-09 09:12:51.620739 | orchestrator |
2025-04-09 09:12:51.621193 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 09:12:51.621606 | orchestrator | Wednesday 09 April 2025 09:12:51 +0000 (0:00:00.470) 0:00:33.861 *******
2025-04-09 09:12:51.857785 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:12:51.858744 | orchestrator |
2025-04-09 09:12:51.858785 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 09:12:52.097107 | orchestrator | Wednesday 09 April 2025 09:12:51 +0000 (0:00:00.246) 0:00:34.108 *******
2025-04-09 09:12:52.097201 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:12:52.097824 | orchestrator |
2025-04-09 09:12:52.099241 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 09:12:52.101882 | orchestrator | Wednesday 09 April 2025 09:12:52 +0000 (0:00:00.239) 0:00:34.348 *******
2025-04-09 09:12:52.310148 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:12:52.312308 | orchestrator |
2025-04-09 09:12:52.314439 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 09:12:52.550680 | orchestrator | Wednesday 09 April 2025 09:12:52 +0000 (0:00:00.212) 0:00:34.560 *******
2025-04-09 09:12:52.550763 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:12:52.550983 | orchestrator |
2025-04-09 09:12:52.553468 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 09:12:52.555221 | orchestrator | Wednesday 09 April 2025 09:12:52 +0000 (0:00:00.240) 0:00:34.801 *******
2025-04-09 09:12:52.786164 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:12:52.786997 | orchestrator |
2025-04-09 09:12:52.790248 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 09:12:53.022394 | orchestrator | Wednesday 09 April 2025 09:12:52 +0000 (0:00:00.234) 0:00:35.036 *******
2025-04-09 09:12:53.022523 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:12:53.023613 | orchestrator |
2025-04-09 09:12:53.024842 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 09:12:53.025369 | orchestrator | Wednesday 09 April 2025 09:12:53 +0000 (0:00:00.236) 0:00:35.273 *******
2025-04-09 09:12:53.272891 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:12:53.279709 | orchestrator |
2025-04-09 09:12:53.279793 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 09:12:53.280712 | orchestrator | Wednesday 09 April 2025 09:12:53 +0000 (0:00:00.247) 0:00:35.521 *******
2025-04-09 09:12:53.498472 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:12:53.499550 | orchestrator |
2025-04-09 09:12:53.501005 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 09:12:53.503760 | orchestrator | Wednesday 09 April 2025 09:12:53 +0000 (0:00:00.229) 0:00:35.751 *******
2025-04-09 09:12:54.431605 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3cfbe416-4a6d-4367-87e7-69d2ca3c8539)
2025-04-09 09:12:54.431743 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3cfbe416-4a6d-4367-87e7-69d2ca3c8539)
2025-04-09 09:12:54.433262 | orchestrator |
2025-04-09 09:12:54.436027 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 09:12:55.014862 | orchestrator | Wednesday 09 April 2025 09:12:54 +0000 (0:00:00.931) 0:00:36.682 *******
2025-04-09 09:12:55.014961 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c4741289-30df-4db6-9178-491638aa0447)
2025-04-09 09:12:55.016241 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c4741289-30df-4db6-9178-491638aa0447)
2025-04-09 09:12:55.016271 | orchestrator |
2025-04-09 09:12:55.017669 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 09:12:55.018728 | orchestrator | Wednesday 09 April 2025 09:12:55 +0000 (0:00:00.578) 0:00:37.261 *******
2025-04-09 09:12:55.593197 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_80ef2f8b-b45e-4bed-a63c-5dbd52e64749)
2025-04-09 09:12:55.600403 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_80ef2f8b-b45e-4bed-a63c-5dbd52e64749)
2025-04-09 09:12:55.601703 | orchestrator |
2025-04-09 09:12:55.612952 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 09:12:56.254808 | orchestrator | Wednesday 09 April 2025 09:12:55 +0000 (0:00:00.581) 0:00:37.842 *******
2025-04-09 09:12:56.254947 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d077077e-8074-4e28-961e-4d10ae0af6bd)
2025-04-09 09:12:56.256747 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d077077e-8074-4e28-961e-4d10ae0af6bd)
2025-04-09 09:12:56.261067 | orchestrator |
2025-04-09 09:12:56.261965 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 09:12:56.263060 | orchestrator | Wednesday 09 April 2025 09:12:56 +0000 (0:00:00.663) 0:00:38.505 *******
2025-04-09 09:12:56.620689 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-04-09 09:12:56.623336 | orchestrator |
2025-04-09 09:12:56.624383 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:56.625229 | orchestrator | Wednesday 09 April 2025 09:12:56 +0000 (0:00:00.360) 0:00:38.866 *******
2025-04-09 09:12:57.164736 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-04-09 09:12:57.165563 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-04-09 09:12:57.167501 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-04-09 09:12:57.169418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-04-09 09:12:57.170660 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-04-09 09:12:57.173130 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-04-09 09:12:57.174214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-04-09 09:12:57.175306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-04-09 09:12:57.176446 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-04-09 09:12:57.177714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-04-09 09:12:57.178567 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-04-09 09:12:57.179318 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-04-09 09:12:57.180407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-04-09 09:12:57.181167 | orchestrator |
2025-04-09 09:12:57.181873 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:57.182234 | orchestrator | Wednesday 09 April 2025 09:12:57 +0000 (0:00:00.549) 0:00:39.415 *******
2025-04-09 09:12:57.412738 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:12:57.414204 | orchestrator |
2025-04-09 09:12:57.415513 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:57.417098 | orchestrator | Wednesday 09 April 2025 09:12:57 +0000 (0:00:00.246) 0:00:39.662 *******
2025-04-09 09:12:57.622148 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:12:57.622532 | orchestrator |
2025-04-09 09:12:57.623131 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:57.624569 | orchestrator | Wednesday 09 April 2025 09:12:57 +0000 (0:00:00.212) 0:00:39.875 *******
2025-04-09 09:12:57.852280 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:12:57.853340 | orchestrator |
2025-04-09 09:12:57.856983 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:58.732906 | orchestrator | Wednesday 09 April 2025 09:12:57 +0000 (0:00:00.227) 0:00:40.102 *******
2025-04-09 09:12:58.733027 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:12:58.745313 | orchestrator |
2025-04-09 09:12:58.746430 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:58.747129 | orchestrator | Wednesday 09 April 2025 09:12:58 +0000 (0:00:00.879) 0:00:40.982 *******
2025-04-09 09:12:58.951365 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:12:58.952367 | orchestrator |
2025-04-09 09:12:58.952616 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:58.954451 | orchestrator | Wednesday 09 April 2025 09:12:58 +0000 (0:00:00.220) 0:00:41.202 *******
2025-04-09 09:12:59.206602 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:12:59.207590 | orchestrator |
2025-04-09 09:12:59.208697 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:59.210070 | orchestrator | Wednesday 09 April 2025 09:12:59 +0000 (0:00:00.255) 0:00:41.458 *******
2025-04-09 09:12:59.447263 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:12:59.452304 | orchestrator |
2025-04-09 09:12:59.697890 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:59.697982 | orchestrator | Wednesday 09 April 2025 09:12:59 +0000 (0:00:00.240) 0:00:41.699 *******
2025-04-09 09:12:59.698089 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:12:59.699192 | orchestrator |
2025-04-09 09:12:59.700305 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:12:59.700590 | orchestrator | Wednesday 09 April 2025 09:12:59 +0000 (0:00:00.250) 0:00:41.949 *******
2025-04-09 09:13:00.385118 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-04-09 09:13:00.385956 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-04-09 09:13:00.387600 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-04-09 09:13:00.390454 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-04-09 09:13:00.391320 | orchestrator |
2025-04-09 09:13:00.391348 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:13:00.391379 | orchestrator | Wednesday 09 April 2025 09:13:00 +0000 (0:00:00.684) 0:00:42.634 *******
2025-04-09 09:13:00.611688 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:13:00.614055 | orchestrator |
2025-04-09 09:13:00.615432 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:13:00.850289 | orchestrator | Wednesday 09 April 2025 09:13:00 +0000 (0:00:00.227) 0:00:42.861 *******
2025-04-09 09:13:00.850392 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:13:00.850449 | orchestrator |
2025-04-09 09:13:00.851622 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:13:00.852224 | orchestrator | Wednesday 09 April 2025 09:13:00 +0000 (0:00:00.238) 0:00:43.100 *******
2025-04-09 09:13:01.082818 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:13:01.084142 | orchestrator |
2025-04-09 09:13:01.084694 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 09:13:01.086358 | orchestrator | Wednesday 09 April 2025 09:13:01 +0000 (0:00:00.233) 0:00:43.333 *******
2025-04-09 09:13:01.282061 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:13:01.282206 | orchestrator |
2025-04-09 09:13:01.282232 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-04-09 09:13:01.283094 | orchestrator | Wednesday 09 April 2025 09:13:01 +0000 (0:00:00.198) 0:00:43.532 *******
2025-04-09 09:13:01.688606 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-04-09 09:13:01.689417 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-04-09 09:13:01.692115 | orchestrator |
2025-04-09 09:13:01.692761 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-04-09 09:13:01.692791 | orchestrator | Wednesday 09 April 2025 09:13:01 +0000 (0:00:00.406) 0:00:43.939 *******
2025-04-09 09:13:01.822716 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:13:01.824319 | orchestrator |
2025-04-09 09:13:01.826834 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-04-09 09:13:01.980854 | orchestrator | Wednesday 09 April 2025 09:13:01 +0000 (0:00:00.135) 0:00:44.074 *******
2025-04-09 09:13:01.980914 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:13:01.981665 | orchestrator |
2025-04-09 09:13:01.982675 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-04-09 09:13:01.983606 | orchestrator | Wednesday 09 April 2025 09:13:01 +0000 (0:00:00.158) 0:00:44.233 *******
2025-04-09 09:13:02.149625 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:13:02.150368 | orchestrator |
2025-04-09 09:13:02.151448 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-04-09 09:13:02.152313 | orchestrator | Wednesday 09 April 2025 09:13:02 +0000 (0:00:00.168) 0:00:44.402 *******
2025-04-09 09:13:02.310529 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:13:02.311653 | orchestrator |
2025-04-09 09:13:02.312738 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-04-09 09:13:02.313785 | orchestrator | Wednesday 09 April 2025 09:13:02 +0000 (0:00:00.160) 0:00:44.562 *******
2025-04-09 09:13:02.509824 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc0074f0-3710-5c5e-ae84-22c546993d85'}})
2025-04-09 09:13:02.511229 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07984c01-fdec-5bf7-a01d-ec4b418f7e1e'}})
2025-04-09 09:13:02.515227 | orchestrator |
2025-04-09 09:13:02.515511 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-04-09 09:13:02.515546 | orchestrator | Wednesday 09 April 2025 09:13:02 +0000 (0:00:00.199) 0:00:44.761 *******
2025-04-09 09:13:02.677074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc0074f0-3710-5c5e-ae84-22c546993d85'}})
2025-04-09 09:13:02.678592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07984c01-fdec-5bf7-a01d-ec4b418f7e1e'}})
2025-04-09 09:13:02.678656 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:13:02.682007 | orchestrator |
2025-04-09 09:13:02.682149 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-04-09 09:13:02.682173 | orchestrator | Wednesday 09 April 2025 09:13:02 +0000 (0:00:00.166) 0:00:44.928 *******
2025-04-09 09:13:02.893990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc0074f0-3710-5c5e-ae84-22c546993d85'}})
2025-04-09 09:13:02.895558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07984c01-fdec-5bf7-a01d-ec4b418f7e1e'}})
2025-04-09 09:13:02.898102 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:13:02.899158 | orchestrator |
2025-04-09 09:13:02.899189 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-04-09 09:13:02.899341 | orchestrator | Wednesday 09 April 2025 09:13:02 +0000 (0:00:00.216) 0:00:45.145 *******
2025-04-09 09:13:03.085856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc0074f0-3710-5c5e-ae84-22c546993d85'}})
2025-04-09 09:13:03.086621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07984c01-fdec-5bf7-a01d-ec4b418f7e1e'}})
2025-04-09 09:13:03.087171 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:13:03.087424 | orchestrator |
2025-04-09 09:13:03.087726 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-04-09 09:13:03.089234 | orchestrator | Wednesday 09 April 2025 09:13:03 +0000 (0:00:00.193) 0:00:45.338 *******
2025-04-09 09:13:03.237023 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:13:03.237143 | orchestrator |
2025-04-09 09:13:03.237477 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-04-09 09:13:03.238790 | orchestrator | Wednesday 09 April 2025 09:13:03 +0000 (0:00:00.151) 0:00:45.489 *******
2025-04-09 09:13:03.393153 | orchestrator | ok: [testbed-node-5]
2025-04-09 09:13:03.393826 | orchestrator |
2025-04-09 09:13:03.396114 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-04-09 09:13:03.398546 | orchestrator | Wednesday 09 April 2025 09:13:03 +0000 (0:00:00.155) 0:00:45.644 *******
2025-04-09 09:13:03.549703 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:13:03.550671 | orchestrator |
2025-04-09 09:13:03.552043 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-04-09 09:13:03.552380 | orchestrator | Wednesday 09 April 2025 09:13:03 +0000 (0:00:00.156) 0:00:45.801 *******
2025-04-09 09:13:03.925894 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:13:03.926475 | orchestrator |
2025-04-09 09:13:03.927928 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-04-09 09:13:03.930376 | orchestrator | Wednesday 09 April 2025 09:13:03 +0000 (0:00:00.376) 0:00:46.177 *******
2025-04-09 09:13:04.074237 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:13:04.074343 | orchestrator |
2025-04-09 09:13:04.076601 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-04-09 09:13:04.077269 | orchestrator | Wednesday 09 April 2025 09:13:04 +0000 (0:00:00.149) 0:00:46.326 *******
2025-04-09 09:13:04.239886 | orchestrator | ok: [testbed-node-5] => {
2025-04-09 09:13:04.241984 | orchestrator |     "ceph_osd_devices": {
2025-04-09 09:13:04.242933 | orchestrator |         "sdb": {
2025-04-09 09:13:04.244403 | orchestrator |             "osd_lvm_uuid": "dc0074f0-3710-5c5e-ae84-22c546993d85"
2025-04-09 09:13:04.246218 | orchestrator |         },
2025-04-09 09:13:04.246841 | orchestrator |         "sdc": {
2025-04-09 09:13:04.247654 | orchestrator |             "osd_lvm_uuid": "07984c01-fdec-5bf7-a01d-ec4b418f7e1e"
2025-04-09 09:13:04.248446 | orchestrator |         }
2025-04-09 09:13:04.249144 | orchestrator |     }
2025-04-09 09:13:04.249596 | orchestrator | }
2025-04-09 09:13:04.250236 | orchestrator |
2025-04-09 09:13:04.250987 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-04-09 09:13:04.251147 | orchestrator | Wednesday 09 April 2025 09:13:04 +0000 (0:00:00.165) 0:00:46.492 *******
2025-04-09 09:13:04.381322 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:13:04.381908 | orchestrator |
2025-04-09 09:13:04.382666 | orchestrator | TASK [Print DB devices] ********************************************************
2025-04-09 09:13:04.383716 | orchestrator | Wednesday 09 April 2025 09:13:04 +0000 (0:00:00.141) 0:00:46.633 *******
2025-04-09 09:13:04.534118 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:13:04.535061 | orchestrator |
2025-04-09 09:13:04.537543 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-04-09 09:13:04.538555 | orchestrator | Wednesday 09 April 2025 09:13:04 +0000 (0:00:00.152) 0:00:46.786 *******
2025-04-09 09:13:04.693266 | orchestrator | skipping: [testbed-node-5]
2025-04-09 09:13:04.694139 | orchestrator |
2025-04-09 09:13:04.695009 | orchestrator | TASK [Print configuration data] ************************************************
2025-04-09 09:13:04.696115 | orchestrator | Wednesday 09 April 2025 09:13:04 +0000 (0:00:00.157) 0:00:46.943 *******
2025-04-09 09:13:04.982325 | orchestrator | changed:
[testbed-node-5] => { 2025-04-09 09:13:04.983286 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-04-09 09:13:04.984120 | orchestrator |  "ceph_osd_devices": { 2025-04-09 09:13:04.984966 | orchestrator |  "sdb": { 2025-04-09 09:13:04.985686 | orchestrator |  "osd_lvm_uuid": "dc0074f0-3710-5c5e-ae84-22c546993d85" 2025-04-09 09:13:04.986718 | orchestrator |  }, 2025-04-09 09:13:04.987353 | orchestrator |  "sdc": { 2025-04-09 09:13:04.987858 | orchestrator |  "osd_lvm_uuid": "07984c01-fdec-5bf7-a01d-ec4b418f7e1e" 2025-04-09 09:13:04.988592 | orchestrator |  } 2025-04-09 09:13:04.989057 | orchestrator |  }, 2025-04-09 09:13:04.989589 | orchestrator |  "lvm_volumes": [ 2025-04-09 09:13:04.990108 | orchestrator |  { 2025-04-09 09:13:04.990560 | orchestrator |  "data": "osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85", 2025-04-09 09:13:04.991066 | orchestrator |  "data_vg": "ceph-dc0074f0-3710-5c5e-ae84-22c546993d85" 2025-04-09 09:13:04.991357 | orchestrator |  }, 2025-04-09 09:13:04.991714 | orchestrator |  { 2025-04-09 09:13:04.992153 | orchestrator |  "data": "osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e", 2025-04-09 09:13:04.992372 | orchestrator |  "data_vg": "ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e" 2025-04-09 09:13:04.992749 | orchestrator |  } 2025-04-09 09:13:04.993105 | orchestrator |  ] 2025-04-09 09:13:04.993395 | orchestrator |  } 2025-04-09 09:13:04.993777 | orchestrator | } 2025-04-09 09:13:04.994187 | orchestrator | 2025-04-09 09:13:04.994590 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-04-09 09:13:04.994989 | orchestrator | Wednesday 09 April 2025 09:13:04 +0000 (0:00:00.289) 0:00:47.232 ******* 2025-04-09 09:13:06.079242 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-04-09 09:13:06.082325 | orchestrator | 2025-04-09 09:13:06.082895 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 
09:13:06.084220 | orchestrator | 2025-04-09 09:13:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-09 09:13:06.084702 | orchestrator | 2025-04-09 09:13:06 | INFO  | Please wait and do not abort execution. 2025-04-09 09:13:06.084738 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-04-09 09:13:06.086156 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-04-09 09:13:06.087590 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-04-09 09:13:06.089180 | orchestrator | 2025-04-09 09:13:06.090076 | orchestrator | 2025-04-09 09:13:06.090814 | orchestrator | 2025-04-09 09:13:06.091345 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-09 09:13:06.092347 | orchestrator | Wednesday 09 April 2025 09:13:06 +0000 (0:00:01.096) 0:00:48.329 ******* 2025-04-09 09:13:06.093093 | orchestrator | =============================================================================== 2025-04-09 09:13:06.093587 | orchestrator | Write configuration file ------------------------------------------------ 4.96s 2025-04-09 09:13:06.093980 | orchestrator | Add known partitions to the list of available block devices ------------- 2.00s 2025-04-09 09:13:06.094713 | orchestrator | Add known links to the list of available block devices ------------------ 1.68s 2025-04-09 09:13:06.095248 | orchestrator | Add known partitions to the list of available block devices ------------- 1.25s 2025-04-09 09:13:06.095724 | orchestrator | Print configuration data ------------------------------------------------ 1.14s 2025-04-09 09:13:06.096356 | orchestrator | Get initial list of available block devices ----------------------------- 0.98s 2025-04-09 09:13:06.096947 | orchestrator | Add known links to the list of available block 
devices ------------------ 0.97s 2025-04-09 09:13:06.097447 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s 2025-04-09 09:13:06.098220 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s 2025-04-09 09:13:06.098428 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.87s 2025-04-09 09:13:06.099032 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.82s 2025-04-09 09:13:06.099558 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2025-04-09 09:13:06.099999 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.77s 2025-04-09 09:13:06.100580 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2025-04-09 09:13:06.101554 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2025-04-09 09:13:06.101582 | orchestrator | Set WAL devices config data --------------------------------------------- 0.68s 2025-04-09 09:13:06.102128 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2025-04-09 09:13:06.102445 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s 2025-04-09 09:13:06.103198 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2025-04-09 09:13:06.103310 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.59s 2025-04-09 09:13:18.637781 | orchestrator | 2025-04-09 09:13:18 | INFO  | Task cd1278c0-1c9e-42d1-85db-df04abcc426b is running in background. Output coming soon. 2025-04-09 10:13:21.350950 | orchestrator | 2025-04-09 10:13:21 | INFO  | Task e11fd04e-7b6c-49f9-baef-7ee0a54170f9 (ceph-create-lvm-devices) was prepared for execution. 
2025-04-09 10:13:25.067282 | orchestrator | 2025-04-09 10:13:21 | INFO  | It takes a moment until task e11fd04e-7b6c-49f9-baef-7ee0a54170f9 (ceph-create-lvm-devices) has been started and output is visible here. 2025-04-09 10:13:25.067416 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.13 2025-04-09 10:13:25.571400 | orchestrator | 2025-04-09 10:13:25.571557 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-04-09 10:13:25.573091 | orchestrator | 2025-04-09 10:13:25.574246 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-09 10:13:25.575287 | orchestrator | Wednesday 09 April 2025 10:13:25 +0000 (0:00:00.432) 0:00:00.432 ******* 2025-04-09 10:13:25.817151 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-09 10:13:25.818265 | orchestrator | 2025-04-09 10:13:25.820890 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-09 10:13:25.821876 | orchestrator | Wednesday 09 April 2025 10:13:25 +0000 (0:00:00.246) 0:00:00.678 ******* 2025-04-09 10:13:26.075702 | orchestrator | ok: [testbed-node-3] 2025-04-09 10:13:26.075849 | orchestrator | 2025-04-09 10:13:26.077044 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 10:13:26.079264 | orchestrator | Wednesday 09 April 2025 10:13:26 +0000 (0:00:00.259) 0:00:00.938 ******* 2025-04-09 10:13:26.809171 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-04-09 10:13:26.809388 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-04-09 10:13:26.810113 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-04-09 10:13:26.811186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-3 => (item=loop3) 2025-04-09 10:13:26.815140 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-04-09 10:13:26.815806 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-04-09 10:13:26.815837 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-04-09 10:13:26.817057 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-04-09 10:13:26.818120 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-04-09 10:13:26.818831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-04-09 10:13:26.820831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-04-09 10:13:26.821386 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-04-09 10:13:26.822595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-04-09 10:13:26.823000 | orchestrator | 2025-04-09 10:13:26.824003 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 10:13:26.824673 | orchestrator | Wednesday 09 April 2025 10:13:26 +0000 (0:00:00.734) 0:00:01.672 ******* 2025-04-09 10:13:26.999842 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:27.001096 | orchestrator | 2025-04-09 10:13:27.001738 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 10:13:27.002306 | orchestrator | Wednesday 09 April 2025 10:13:26 +0000 (0:00:00.190) 0:00:01.863 ******* 2025-04-09 10:13:27.198414 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:27.199138 | orchestrator | 2025-04-09 10:13:27.201660 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2025-04-09 10:13:27.395741 | orchestrator | Wednesday 09 April 2025 10:13:27 +0000 (0:00:00.196) 0:00:02.060 ******* 2025-04-09 10:13:27.395899 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:27.395997 | orchestrator | 2025-04-09 10:13:27.396891 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 10:13:27.398132 | orchestrator | Wednesday 09 April 2025 10:13:27 +0000 (0:00:00.198) 0:00:02.259 ******* 2025-04-09 10:13:27.610444 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:27.611143 | orchestrator | 2025-04-09 10:13:27.611181 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 10:13:27.612252 | orchestrator | Wednesday 09 April 2025 10:13:27 +0000 (0:00:00.214) 0:00:02.473 ******* 2025-04-09 10:13:27.809899 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:27.810067 | orchestrator | 2025-04-09 10:13:27.810440 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 10:13:27.811110 | orchestrator | Wednesday 09 April 2025 10:13:27 +0000 (0:00:00.199) 0:00:02.672 ******* 2025-04-09 10:13:27.996177 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:27.996332 | orchestrator | 2025-04-09 10:13:27.997914 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 10:13:27.998962 | orchestrator | Wednesday 09 April 2025 10:13:27 +0000 (0:00:00.187) 0:00:02.859 ******* 2025-04-09 10:13:28.196612 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:28.196926 | orchestrator | 2025-04-09 10:13:28.198146 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 10:13:28.199051 | orchestrator | Wednesday 09 April 2025 10:13:28 +0000 (0:00:00.199) 0:00:03.058 ******* 2025-04-09 10:13:28.398698 | orchestrator | skipping: 
[testbed-node-3] 2025-04-09 10:13:28.398859 | orchestrator | 2025-04-09 10:13:28.400113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 10:13:28.400783 | orchestrator | Wednesday 09 April 2025 10:13:28 +0000 (0:00:00.203) 0:00:03.262 ******* 2025-04-09 10:13:29.239577 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b820d287-9f63-4dcd-a7bf-6ad94049faf1) 2025-04-09 10:13:29.240050 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b820d287-9f63-4dcd-a7bf-6ad94049faf1) 2025-04-09 10:13:29.240081 | orchestrator | 2025-04-09 10:13:29.240545 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 10:13:29.241053 | orchestrator | Wednesday 09 April 2025 10:13:29 +0000 (0:00:00.839) 0:00:04.101 ******* 2025-04-09 10:13:29.667371 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b3569f59-7c0a-49c9-8d23-e5efe9e8038b) 2025-04-09 10:13:29.668537 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b3569f59-7c0a-49c9-8d23-e5efe9e8038b) 2025-04-09 10:13:29.668906 | orchestrator | 2025-04-09 10:13:29.671989 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 10:13:29.673183 | orchestrator | Wednesday 09 April 2025 10:13:29 +0000 (0:00:00.429) 0:00:04.530 ******* 2025-04-09 10:13:30.125802 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c8b54f09-402c-41ce-aa47-11dce3c4404f) 2025-04-09 10:13:30.126055 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c8b54f09-402c-41ce-aa47-11dce3c4404f) 2025-04-09 10:13:30.126919 | orchestrator | 2025-04-09 10:13:30.127819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 10:13:30.129016 | orchestrator | Wednesday 09 April 2025 10:13:30 +0000 (0:00:00.457) 0:00:04.988 ******* 2025-04-09 
10:13:30.576982 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4c16418f-b1ea-409a-91f3-2a744e80e58e) 2025-04-09 10:13:30.577146 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4c16418f-b1ea-409a-91f3-2a744e80e58e) 2025-04-09 10:13:30.580501 | orchestrator | 2025-04-09 10:13:30.963346 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-09 10:13:30.963468 | orchestrator | Wednesday 09 April 2025 10:13:30 +0000 (0:00:00.450) 0:00:05.439 ******* 2025-04-09 10:13:30.963520 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-09 10:13:30.963977 | orchestrator | 2025-04-09 10:13:30.964622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:13:30.965505 | orchestrator | Wednesday 09 April 2025 10:13:30 +0000 (0:00:00.387) 0:00:05.826 ******* 2025-04-09 10:13:31.445456 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-04-09 10:13:31.446255 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-04-09 10:13:31.448254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-04-09 10:13:31.451857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-04-09 10:13:31.453010 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-04-09 10:13:31.454122 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-04-09 10:13:31.454881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-04-09 10:13:31.455648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-04-09 
10:13:31.459448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-04-09 10:13:31.460097 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-04-09 10:13:31.460945 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-04-09 10:13:31.461614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-04-09 10:13:31.463942 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-04-09 10:13:31.464837 | orchestrator | 2025-04-09 10:13:31.465568 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:13:31.466524 | orchestrator | Wednesday 09 April 2025 10:13:31 +0000 (0:00:00.480) 0:00:06.307 ******* 2025-04-09 10:13:31.658399 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:31.660170 | orchestrator | 2025-04-09 10:13:31.660253 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:13:31.872332 | orchestrator | Wednesday 09 April 2025 10:13:31 +0000 (0:00:00.211) 0:00:06.518 ******* 2025-04-09 10:13:31.872414 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:31.873131 | orchestrator | 2025-04-09 10:13:31.875622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:13:32.082113 | orchestrator | Wednesday 09 April 2025 10:13:31 +0000 (0:00:00.215) 0:00:06.734 ******* 2025-04-09 10:13:32.082269 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:32.082691 | orchestrator | 2025-04-09 10:13:32.083604 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:13:32.086725 | orchestrator | Wednesday 09 April 2025 10:13:32 +0000 (0:00:00.210) 0:00:06.945 ******* 
2025-04-09 10:13:32.288157 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:32.289499 | orchestrator | 2025-04-09 10:13:32.290135 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:13:32.290626 | orchestrator | Wednesday 09 April 2025 10:13:32 +0000 (0:00:00.205) 0:00:07.151 ******* 2025-04-09 10:13:32.867455 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:32.867939 | orchestrator | 2025-04-09 10:13:32.870060 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:13:32.871612 | orchestrator | Wednesday 09 April 2025 10:13:32 +0000 (0:00:00.577) 0:00:07.729 ******* 2025-04-09 10:13:33.080751 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:33.082329 | orchestrator | 2025-04-09 10:13:33.083379 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:13:33.084773 | orchestrator | Wednesday 09 April 2025 10:13:33 +0000 (0:00:00.213) 0:00:07.942 ******* 2025-04-09 10:13:33.289958 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:33.291433 | orchestrator | 2025-04-09 10:13:33.292411 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:13:33.292743 | orchestrator | Wednesday 09 April 2025 10:13:33 +0000 (0:00:00.211) 0:00:08.153 ******* 2025-04-09 10:13:33.491978 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:33.492817 | orchestrator | 2025-04-09 10:13:33.494178 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:13:33.495526 | orchestrator | Wednesday 09 April 2025 10:13:33 +0000 (0:00:00.200) 0:00:08.353 ******* 2025-04-09 10:13:34.136651 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-04-09 10:13:34.137321 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-04-09 10:13:34.138823 | orchestrator | ok: 
[testbed-node-3] => (item=sda15) 2025-04-09 10:13:34.139417 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-04-09 10:13:34.140164 | orchestrator | 2025-04-09 10:13:34.140778 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:13:34.141613 | orchestrator | Wednesday 09 April 2025 10:13:34 +0000 (0:00:00.645) 0:00:08.999 ******* 2025-04-09 10:13:34.358811 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:34.362073 | orchestrator | 2025-04-09 10:13:34.362656 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:13:34.362685 | orchestrator | Wednesday 09 April 2025 10:13:34 +0000 (0:00:00.220) 0:00:09.220 ******* 2025-04-09 10:13:34.561036 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:34.561151 | orchestrator | 2025-04-09 10:13:34.562457 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:13:34.565126 | orchestrator | Wednesday 09 April 2025 10:13:34 +0000 (0:00:00.203) 0:00:09.423 ******* 2025-04-09 10:13:34.781114 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:34.782237 | orchestrator | 2025-04-09 10:13:34.782787 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:13:34.783929 | orchestrator | Wednesday 09 April 2025 10:13:34 +0000 (0:00:00.220) 0:00:09.644 ******* 2025-04-09 10:13:34.996688 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:35.001578 | orchestrator | 2025-04-09 10:13:35.001715 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-04-09 10:13:35.006913 | orchestrator | Wednesday 09 April 2025 10:13:34 +0000 (0:00:00.208) 0:00:09.853 ******* 2025-04-09 10:13:35.136765 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:35.138345 | orchestrator | 2025-04-09 10:13:35.138969 | orchestrator | TASK [Create 
dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-04-09 10:13:35.139810 | orchestrator | Wednesday 09 April 2025 10:13:35 +0000 (0:00:00.146) 0:00:10.000 ******* 2025-04-09 10:13:35.564938 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0f870d8c-c6a0-5b48-8905-7c7f5ac74310'}}) 2025-04-09 10:13:35.565040 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fb3b6432-c1e2-58b0-8349-44fe229d54e8'}}) 2025-04-09 10:13:35.566262 | orchestrator | 2025-04-09 10:13:35.568997 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-04-09 10:13:35.569146 | orchestrator | Wednesday 09 April 2025 10:13:35 +0000 (0:00:00.426) 0:00:10.426 ******* 2025-04-09 10:13:37.885636 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'}) 2025-04-09 10:13:37.886078 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'}) 2025-04-09 10:13:37.887856 | orchestrator | 2025-04-09 10:13:37.888606 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-04-09 10:13:37.891558 | orchestrator | Wednesday 09 April 2025 10:13:37 +0000 (0:00:02.320) 0:00:12.747 ******* 2025-04-09 10:13:38.059958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})  2025-04-09 10:13:38.061920 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})  2025-04-09 10:13:38.062436 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:38.065394 | orchestrator | 2025-04-09 10:13:38.066275 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-04-09 10:13:38.066349 | orchestrator | Wednesday 09 April 2025 10:13:38 +0000 (0:00:00.175) 0:00:12.923 ******* 2025-04-09 10:13:39.628285 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'}) 2025-04-09 10:13:39.630255 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'}) 2025-04-09 10:13:39.630519 | orchestrator | 2025-04-09 10:13:39.786111 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-04-09 10:13:39.786139 | orchestrator | Wednesday 09 April 2025 10:13:39 +0000 (0:00:01.564) 0:00:14.487 ******* 2025-04-09 10:13:39.786159 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})  2025-04-09 10:13:39.787382 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})  2025-04-09 10:13:39.787916 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:39.788805 | orchestrator | 2025-04-09 10:13:39.789542 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-04-09 10:13:39.789933 | orchestrator | Wednesday 09 April 2025 10:13:39 +0000 (0:00:00.162) 0:00:14.649 ******* 2025-04-09 10:13:39.942089 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:13:39.943200 | orchestrator | 2025-04-09 10:13:39.944531 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-04-09 10:13:39.945097 | orchestrator | Wednesday 09 April 2025 10:13:39 +0000 (0:00:00.155) 0:00:14.805 ******* 
2025-04-09 10:13:40.128652 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})
2025-04-09 10:13:40.129566 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})
2025-04-09 10:13:40.131020 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:40.132075 | orchestrator |
2025-04-09 10:13:40.133744 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-04-09 10:13:40.134880 | orchestrator | Wednesday 09 April 2025 10:13:40 +0000 (0:00:00.186) 0:00:14.991 *******
2025-04-09 10:13:40.269990 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:40.271454 | orchestrator |
2025-04-09 10:13:40.272461 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-04-09 10:13:40.273390 | orchestrator | Wednesday 09 April 2025 10:13:40 +0000 (0:00:00.141) 0:00:15.133 *******
2025-04-09 10:13:40.446837 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})
2025-04-09 10:13:40.447826 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})
2025-04-09 10:13:40.450897 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:40.451703 | orchestrator |
2025-04-09 10:13:40.451728 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-04-09 10:13:40.451750 | orchestrator | Wednesday 09 April 2025 10:13:40 +0000 (0:00:00.176) 0:00:15.309 *******
2025-04-09 10:13:40.776037 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:40.776692 | orchestrator |
2025-04-09 10:13:40.777787 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-04-09 10:13:40.778611 | orchestrator | Wednesday 09 April 2025 10:13:40 +0000 (0:00:00.330) 0:00:15.640 *******
2025-04-09 10:13:40.950846 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})
2025-04-09 10:13:40.951173 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})
2025-04-09 10:13:40.953935 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:40.955039 | orchestrator |
2025-04-09 10:13:40.955067 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-04-09 10:13:40.955544 | orchestrator | Wednesday 09 April 2025 10:13:40 +0000 (0:00:00.172) 0:00:15.812 *******
2025-04-09 10:13:41.095822 | orchestrator | ok: [testbed-node-3]
2025-04-09 10:13:41.096337 | orchestrator |
2025-04-09 10:13:41.097408 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-04-09 10:13:41.098111 | orchestrator | Wednesday 09 April 2025 10:13:41 +0000 (0:00:00.146) 0:00:15.959 *******
2025-04-09 10:13:41.264753 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})
2025-04-09 10:13:41.265231 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})
2025-04-09 10:13:41.266530 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:41.268805 | orchestrator |
2025-04-09 10:13:41.269406 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-04-09 10:13:41.269435 | orchestrator | Wednesday 09 April 2025 10:13:41 +0000 (0:00:00.168) 0:00:16.127 *******
2025-04-09 10:13:41.442289 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})
2025-04-09 10:13:41.443297 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})
2025-04-09 10:13:41.447225 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:41.447931 | orchestrator |
2025-04-09 10:13:41.447960 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-04-09 10:13:41.450127 | orchestrator | Wednesday 09 April 2025 10:13:41 +0000 (0:00:00.177) 0:00:16.305 *******
2025-04-09 10:13:41.632924 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})
2025-04-09 10:13:41.633709 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})
2025-04-09 10:13:41.633740 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:41.633763 | orchestrator |
2025-04-09 10:13:41.633859 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-04-09 10:13:41.634332 | orchestrator | Wednesday 09 April 2025 10:13:41 +0000 (0:00:00.149) 0:00:16.495 *******
2025-04-09 10:13:41.782817 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:41.783070 | orchestrator |
2025-04-09 10:13:41.783301 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-04-09 10:13:41.783753 | orchestrator | Wednesday 09 April 2025 10:13:41 +0000 (0:00:00.149) 0:00:16.645 *******
2025-04-09 10:13:41.918829 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:41.919639 | orchestrator |
2025-04-09 10:13:41.920725 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-04-09 10:13:41.921897 | orchestrator | Wednesday 09 April 2025 10:13:41 +0000 (0:00:00.136) 0:00:16.782 *******
2025-04-09 10:13:42.067843 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:42.072113 | orchestrator |
2025-04-09 10:13:42.238553 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-04-09 10:13:42.238636 | orchestrator | Wednesday 09 April 2025 10:13:42 +0000 (0:00:00.147) 0:00:16.930 *******
2025-04-09 10:13:42.238662 | orchestrator | ok: [testbed-node-3] => {
2025-04-09 10:13:42.239234 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-04-09 10:13:42.240690 | orchestrator | }
2025-04-09 10:13:42.242544 | orchestrator |
2025-04-09 10:13:42.243315 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-04-09 10:13:42.244255 | orchestrator | Wednesday 09 April 2025 10:13:42 +0000 (0:00:00.170) 0:00:17.101 *******
2025-04-09 10:13:42.396079 | orchestrator | ok: [testbed-node-3] => {
2025-04-09 10:13:42.397192 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-04-09 10:13:42.398522 | orchestrator | }
2025-04-09 10:13:42.399192 | orchestrator |
2025-04-09 10:13:42.400377 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-04-09 10:13:42.401293 | orchestrator | Wednesday 09 April 2025 10:13:42 +0000 (0:00:00.158) 0:00:17.259 *******
2025-04-09 10:13:42.544731 | orchestrator | ok: [testbed-node-3] => {
2025-04-09 10:13:42.545452 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-04-09 10:13:42.545913 | orchestrator | }
2025-04-09 10:13:42.547407 | orchestrator |
2025-04-09 10:13:42.549178 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-04-09 10:13:43.463106 | orchestrator | Wednesday 09 April 2025 10:13:42 +0000 (0:00:00.148) 0:00:17.407 *******
2025-04-09 10:13:43.463285 | orchestrator | ok: [testbed-node-3]
2025-04-09 10:13:43.463845 | orchestrator |
2025-04-09 10:13:43.463878 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-04-09 10:13:43.464415 | orchestrator | Wednesday 09 April 2025 10:13:43 +0000 (0:00:00.917) 0:00:18.325 *******
2025-04-09 10:13:44.014066 | orchestrator | ok: [testbed-node-3]
2025-04-09 10:13:44.014651 | orchestrator |
2025-04-09 10:13:44.014687 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-04-09 10:13:44.015393 | orchestrator | Wednesday 09 April 2025 10:13:44 +0000 (0:00:00.550) 0:00:18.876 *******
2025-04-09 10:13:44.559923 | orchestrator | ok: [testbed-node-3]
2025-04-09 10:13:44.560461 | orchestrator |
2025-04-09 10:13:44.561550 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-04-09 10:13:44.564491 | orchestrator | Wednesday 09 April 2025 10:13:44 +0000 (0:00:00.546) 0:00:19.422 *******
2025-04-09 10:13:44.714404 | orchestrator | ok: [testbed-node-3]
2025-04-09 10:13:44.714813 | orchestrator |
2025-04-09 10:13:44.714843 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-04-09 10:13:44.715894 | orchestrator | Wednesday 09 April 2025 10:13:44 +0000 (0:00:00.152) 0:00:19.575 *******
2025-04-09 10:13:44.828905 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:44.830341 | orchestrator |
2025-04-09 10:13:44.833069 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-04-09 10:13:44.833334 | orchestrator | Wednesday 09 April 2025 10:13:44 +0000 (0:00:00.117) 0:00:19.692 *******
2025-04-09 10:13:44.961324 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:44.962336 | orchestrator |
2025-04-09 10:13:44.965814 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-04-09 10:13:45.117693 | orchestrator | Wednesday 09 April 2025 10:13:44 +0000 (0:00:00.132) 0:00:19.824 *******
2025-04-09 10:13:45.117789 | orchestrator | ok: [testbed-node-3] => {
2025-04-09 10:13:45.118576 | orchestrator |  "vgs_report": {
2025-04-09 10:13:45.120854 | orchestrator |  "vg": []
2025-04-09 10:13:45.123776 | orchestrator |  }
2025-04-09 10:13:45.124468 | orchestrator | }
2025-04-09 10:13:45.124493 | orchestrator |
2025-04-09 10:13:45.124513 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-04-09 10:13:45.125153 | orchestrator | Wednesday 09 April 2025 10:13:45 +0000 (0:00:00.155) 0:00:19.979 *******
2025-04-09 10:13:45.267679 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:45.268196 | orchestrator |
2025-04-09 10:13:45.269234 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-04-09 10:13:45.270693 | orchestrator | Wednesday 09 April 2025 10:13:45 +0000 (0:00:00.151) 0:00:20.131 *******
2025-04-09 10:13:45.415234 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:45.415560 | orchestrator |
2025-04-09 10:13:45.415592 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-04-09 10:13:45.416689 | orchestrator | Wednesday 09 April 2025 10:13:45 +0000 (0:00:00.146) 0:00:20.277 *******
2025-04-09 10:13:45.564914 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:45.566099 | orchestrator |
2025-04-09 10:13:45.566914 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-04-09 10:13:45.567808 | orchestrator | Wednesday 09 April 2025 10:13:45 +0000 (0:00:00.150) 0:00:20.427 *******
2025-04-09 10:13:45.715077 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:45.715938 | orchestrator |
2025-04-09 10:13:45.719030 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-04-09 10:13:46.049087 | orchestrator | Wednesday 09 April 2025 10:13:45 +0000 (0:00:00.150) 0:00:20.577 *******
2025-04-09 10:13:46.049183 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:46.051827 | orchestrator |
2025-04-09 10:13:46.052517 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-04-09 10:13:46.052538 | orchestrator | Wednesday 09 April 2025 10:13:46 +0000 (0:00:00.333) 0:00:20.910 *******
2025-04-09 10:13:46.193743 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:46.194150 | orchestrator |
2025-04-09 10:13:46.195032 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-04-09 10:13:46.196642 | orchestrator | Wednesday 09 April 2025 10:13:46 +0000 (0:00:00.145) 0:00:21.055 *******
2025-04-09 10:13:46.331972 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:46.332644 | orchestrator |
2025-04-09 10:13:46.333543 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-04-09 10:13:46.334079 | orchestrator | Wednesday 09 April 2025 10:13:46 +0000 (0:00:00.138) 0:00:21.194 *******
2025-04-09 10:13:46.471451 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:46.472445 | orchestrator |
2025-04-09 10:13:46.473606 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-04-09 10:13:46.474605 | orchestrator | Wednesday 09 April 2025 10:13:46 +0000 (0:00:00.140) 0:00:21.335 *******
2025-04-09 10:13:46.621186 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:46.621845 | orchestrator |
2025-04-09 10:13:46.622417 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-04-09 10:13:46.623284 | orchestrator | Wednesday 09 April 2025 10:13:46 +0000 (0:00:00.149) 0:00:21.484 *******
2025-04-09 10:13:46.769043 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:46.769349 | orchestrator |
2025-04-09 10:13:46.770714 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-04-09 10:13:46.771253 | orchestrator | Wednesday 09 April 2025 10:13:46 +0000 (0:00:00.147) 0:00:21.632 *******
2025-04-09 10:13:46.914088 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:46.915364 | orchestrator |
2025-04-09 10:13:46.916311 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-04-09 10:13:46.916985 | orchestrator | Wednesday 09 April 2025 10:13:46 +0000 (0:00:00.145) 0:00:21.777 *******
2025-04-09 10:13:47.064500 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:47.065249 | orchestrator |
2025-04-09 10:13:47.065285 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-04-09 10:13:47.066082 | orchestrator | Wednesday 09 April 2025 10:13:47 +0000 (0:00:00.149) 0:00:21.927 *******
2025-04-09 10:13:47.213313 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:47.214321 | orchestrator |
2025-04-09 10:13:47.214785 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-04-09 10:13:47.215594 | orchestrator | Wednesday 09 April 2025 10:13:47 +0000 (0:00:00.149) 0:00:22.077 *******
2025-04-09 10:13:47.367049 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:47.367602 | orchestrator |
2025-04-09 10:13:47.368409 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-04-09 10:13:47.369046 | orchestrator | Wednesday 09 April 2025 10:13:47 +0000 (0:00:00.151) 0:00:22.228 *******
2025-04-09 10:13:47.537503 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})
2025-04-09 10:13:47.537626 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})
2025-04-09 10:13:47.538183 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:47.539855 | orchestrator |
2025-04-09 10:13:47.540319 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-04-09 10:13:47.540712 | orchestrator | Wednesday 09 April 2025 10:13:47 +0000 (0:00:00.172) 0:00:22.400 *******
2025-04-09 10:13:47.703990 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})
2025-04-09 10:13:47.704325 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})
2025-04-09 10:13:47.704998 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:47.705884 | orchestrator |
2025-04-09 10:13:47.706229 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-04-09 10:13:47.706736 | orchestrator | Wednesday 09 April 2025 10:13:47 +0000 (0:00:00.166) 0:00:22.567 *******
2025-04-09 10:13:48.084181 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})
2025-04-09 10:13:48.086789 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})
2025-04-09 10:13:48.088198 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:48.088629 | orchestrator |
2025-04-09 10:13:48.089381 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-04-09 10:13:48.090119 | orchestrator | Wednesday 09 April 2025 10:13:48 +0000 (0:00:00.380) 0:00:22.947 *******
2025-04-09 10:13:48.271350 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})
2025-04-09 10:13:48.271869 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})
2025-04-09 10:13:48.272851 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:48.273563 | orchestrator |
2025-04-09 10:13:48.276818 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-04-09 10:13:48.476086 | orchestrator | Wednesday 09 April 2025 10:13:48 +0000 (0:00:00.186) 0:00:23.133 *******
2025-04-09 10:13:48.476139 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})
2025-04-09 10:13:48.476825 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})
2025-04-09 10:13:48.477353 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:48.478654 | orchestrator |
2025-04-09 10:13:48.479603 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-04-09 10:13:48.483098 | orchestrator | Wednesday 09 April 2025 10:13:48 +0000 (0:00:00.204) 0:00:23.338 *******
2025-04-09 10:13:48.657354 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})
2025-04-09 10:13:48.657563 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})
2025-04-09 10:13:48.658717 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:48.659740 | orchestrator |
2025-04-09 10:13:48.661018 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-04-09 10:13:48.661930 | orchestrator | Wednesday 09 April 2025 10:13:48 +0000 (0:00:00.180) 0:00:23.519 *******
2025-04-09 10:13:48.843425 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})
2025-04-09 10:13:48.844333 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})
2025-04-09 10:13:48.845545 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:48.846646 | orchestrator |
2025-04-09 10:13:48.848333 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-04-09 10:13:48.851458 | orchestrator | Wednesday 09 April 2025 10:13:48 +0000 (0:00:00.185) 0:00:23.705 *******
2025-04-09 10:13:49.055165 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})
2025-04-09 10:13:49.055913 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})
2025-04-09 10:13:49.056313 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:49.056346 | orchestrator |
2025-04-09 10:13:49.056666 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-04-09 10:13:49.057397 | orchestrator | Wednesday 09 April 2025 10:13:49 +0000 (0:00:00.212) 0:00:23.918 *******
2025-04-09 10:13:49.603536 | orchestrator | ok: [testbed-node-3]
2025-04-09 10:13:49.604287 | orchestrator |
2025-04-09 10:13:49.605002 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-04-09 10:13:49.605984 | orchestrator | Wednesday 09 April 2025 10:13:49 +0000 (0:00:00.545) 0:00:24.464 *******
2025-04-09 10:13:50.208615 | orchestrator | ok: [testbed-node-3]
2025-04-09 10:13:50.209687 | orchestrator |
2025-04-09 10:13:50.210363 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-04-09 10:13:50.211184 | orchestrator | Wednesday 09 April 2025 10:13:50 +0000 (0:00:00.604) 0:00:25.069 *******
2025-04-09 10:13:50.380858 | orchestrator | ok: [testbed-node-3]
2025-04-09 10:13:50.382134 | orchestrator |
2025-04-09 10:13:50.383253 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-04-09 10:13:50.384224 | orchestrator | Wednesday 09 April 2025 10:13:50 +0000 (0:00:00.174) 0:00:25.243 *******
2025-04-09 10:13:50.577102 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'vg_name': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})
2025-04-09 10:13:50.577304 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'vg_name': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})
2025-04-09 10:13:50.579594 | orchestrator |
2025-04-09 10:13:50.579660 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-04-09 10:13:50.580858 | orchestrator | Wednesday 09 April 2025 10:13:50 +0000 (0:00:00.194) 0:00:25.438 *******
2025-04-09 10:13:50.982988 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})
2025-04-09 10:13:50.984232 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})
2025-04-09 10:13:50.985746 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:50.988418 | orchestrator |
2025-04-09 10:13:51.219253 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-04-09 10:13:51.219355 | orchestrator | Wednesday 09 April 2025 10:13:50 +0000 (0:00:00.408) 0:00:25.846 *******
2025-04-09 10:13:51.219388 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})
2025-04-09 10:13:51.223055 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})
2025-04-09 10:13:51.223710 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:51.223735 | orchestrator |
2025-04-09 10:13:51.223750 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-04-09 10:13:51.223769 | orchestrator | Wednesday 09 April 2025 10:13:51 +0000 (0:00:00.232) 0:00:26.079 *******
2025-04-09 10:13:51.415753 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310', 'data_vg': 'ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310'})
2025-04-09 10:13:51.416815 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8', 'data_vg': 'ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8'})
2025-04-09 10:13:51.419970 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:13:51.420797 | orchestrator |
2025-04-09 10:13:51.420822 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-04-09 10:13:51.421796 | orchestrator | Wednesday 09 April 2025 10:13:51 +0000 (0:00:00.199) 0:00:26.278 *******
2025-04-09 10:13:52.130318 | orchestrator | ok: [testbed-node-3] => {
2025-04-09 10:13:52.133504 | orchestrator |  "lvm_report": {
2025-04-09 10:13:52.133622 | orchestrator |  "lv": [
2025-04-09 10:13:52.134801 | orchestrator |  {
2025-04-09 10:13:52.135767 | orchestrator |  "lv_name": "osd-block-0f870d8c-c6a0-5b48-8905-7c7f5ac74310",
2025-04-09 10:13:52.136748 | orchestrator |  "vg_name": "ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310"
2025-04-09 10:13:52.137987 | orchestrator |  },
2025-04-09 10:13:52.138911 | orchestrator |  {
2025-04-09 10:13:52.139880 | orchestrator |  "lv_name": "osd-block-fb3b6432-c1e2-58b0-8349-44fe229d54e8",
2025-04-09 10:13:52.140442 | orchestrator |  "vg_name": "ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8"
2025-04-09 10:13:52.141162 | orchestrator |  }
2025-04-09 10:13:52.141640 | orchestrator |  ],
2025-04-09 10:13:52.143046 | orchestrator |  "pv": [
2025-04-09 10:13:52.143154 | orchestrator |  {
2025-04-09 10:13:52.143711 | orchestrator |  "pv_name": "/dev/sdb",
2025-04-09 10:13:52.144513 | orchestrator |  "vg_name": "ceph-0f870d8c-c6a0-5b48-8905-7c7f5ac74310"
2025-04-09 10:13:52.144984 | orchestrator |  },
2025-04-09 10:13:52.145444 | orchestrator |  {
2025-04-09 10:13:52.145857 | orchestrator |  "pv_name": "/dev/sdc",
2025-04-09 10:13:52.146255 | orchestrator |  "vg_name": "ceph-fb3b6432-c1e2-58b0-8349-44fe229d54e8"
2025-04-09 10:13:52.146864 | orchestrator |  }
2025-04-09 10:13:52.148026 | orchestrator |  ]
2025-04-09 10:13:52.148686 | orchestrator |  }
2025-04-09 10:13:52.149183 | orchestrator | }
2025-04-09 10:13:52.149895 | orchestrator |
2025-04-09 10:13:52.150654 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-04-09 10:13:52.151453 | orchestrator |
2025-04-09 10:13:52.152300 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-04-09 10:13:52.152940 | orchestrator | Wednesday 09 April 2025 10:13:52 +0000 (0:00:00.713) 0:00:26.992 *******
2025-04-09 10:13:52.727747 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-04-09 10:13:52.729065 | orchestrator |
2025-04-09 10:13:52.730155 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-04-09 10:13:52.732439 | orchestrator | Wednesday 09 April 2025 10:13:52 +0000 (0:00:00.598) 0:00:27.591 *******
2025-04-09 10:13:52.981696 | orchestrator | ok: [testbed-node-4]
2025-04-09 10:13:52.982543 | orchestrator |
2025-04-09 10:13:52.984641 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:13:52.984702 | orchestrator | Wednesday 09 April 2025 10:13:52 +0000 (0:00:00.252) 0:00:27.843 *******
2025-04-09 10:13:53.474528 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-04-09 10:13:53.476915 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-04-09 10:13:53.477623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-04-09 10:13:53.477672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-04-09 10:13:53.478381 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-04-09 10:13:53.480096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-04-09 10:13:53.481048 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-04-09 10:13:53.482284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-04-09 10:13:53.483954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-04-09 10:13:53.485347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-04-09 10:13:53.486155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-04-09 10:13:53.486519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-04-09 10:13:53.487610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-04-09 10:13:53.488598 | orchestrator |
2025-04-09 10:13:53.490274 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:13:53.490512 | orchestrator | Wednesday 09 April 2025 10:13:53 +0000 (0:00:00.492) 0:00:28.335 *******
2025-04-09 10:13:53.666308 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:13:53.667399 | orchestrator |
2025-04-09 10:13:53.668526 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:13:53.670107 | orchestrator | Wednesday 09 April 2025 10:13:53 +0000 (0:00:00.193) 0:00:28.528 *******
2025-04-09 10:13:53.867514 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:13:53.870375 | orchestrator |
2025-04-09 10:13:53.870411 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:13:54.057614 | orchestrator | Wednesday 09 April 2025 10:13:53 +0000 (0:00:00.200) 0:00:28.729 *******
2025-04-09 10:13:54.057685 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:13:54.058000 | orchestrator |
2025-04-09 10:13:54.058071 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:13:54.060595 | orchestrator | Wednesday 09 April 2025 10:13:54 +0000 (0:00:00.190) 0:00:28.919 *******
2025-04-09 10:13:54.259606 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:13:54.260332 | orchestrator |
2025-04-09 10:13:54.262946 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:13:54.263302 | orchestrator | Wednesday 09 April 2025 10:13:54 +0000 (0:00:00.201) 0:00:29.121 *******
2025-04-09 10:13:54.481877 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:13:54.482293 | orchestrator |
2025-04-09 10:13:54.483494 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:13:54.486386 | orchestrator | Wednesday 09 April 2025 10:13:54 +0000 (0:00:00.222) 0:00:29.344 *******
2025-04-09 10:13:54.667344 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:13:54.667946 | orchestrator |
2025-04-09 10:13:54.668746 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:13:54.669376 | orchestrator | Wednesday 09 April 2025 10:13:54 +0000 (0:00:00.186) 0:00:29.530 *******
2025-04-09 10:13:54.882763 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:13:54.883250 | orchestrator |
2025-04-09 10:13:54.884601 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:13:54.887700 | orchestrator | Wednesday 09 April 2025 10:13:54 +0000 (0:00:00.215) 0:00:29.746 *******
2025-04-09 10:13:55.390538 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:13:55.391589 | orchestrator |
2025-04-09 10:13:55.901407 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:13:55.901529 | orchestrator | Wednesday 09 April 2025 10:13:55 +0000 (0:00:00.503) 0:00:30.249 *******
2025-04-09 10:13:55.901567 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1316f4e2-7b99-46a1-8513-1c51037dcfb5)
2025-04-09 10:13:55.903544 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1316f4e2-7b99-46a1-8513-1c51037dcfb5)
2025-04-09 10:13:55.905275 | orchestrator |
2025-04-09 10:13:55.906305 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:13:55.907656 | orchestrator | Wednesday 09 April 2025 10:13:55 +0000 (0:00:00.513) 0:00:30.763 *******
2025-04-09 10:13:56.370694 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0a277efc-ca83-41bc-9a13-7ec21996cbcf)
2025-04-09 10:13:56.371725 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0a277efc-ca83-41bc-9a13-7ec21996cbcf)
2025-04-09 10:13:56.372362 | orchestrator |
2025-04-09 10:13:56.372754 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:13:56.373634 | orchestrator | Wednesday 09 April 2025 10:13:56 +0000 (0:00:00.466) 0:00:31.230 *******
2025-04-09 10:13:56.830262 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_20a266e5-e4ad-4a38-8cc4-79e311575ecc)
2025-04-09 10:13:56.830907 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_20a266e5-e4ad-4a38-8cc4-79e311575ecc)
2025-04-09 10:13:56.832381 | orchestrator |
2025-04-09 10:13:56.832972 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:13:57.285307 | orchestrator | Wednesday 09 April 2025 10:13:56 +0000 (0:00:00.462) 0:00:31.692 *******
2025-04-09 10:13:57.285469 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c3189fe9-451f-4fc6-9bec-4c0706cd3177)
2025-04-09 10:13:57.287423 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c3189fe9-451f-4fc6-9bec-4c0706cd3177)
2025-04-09 10:13:57.287729 | orchestrator |
2025-04-09 10:13:57.287770 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:13:57.288825 | orchestrator | Wednesday 09 April 2025 10:13:57 +0000 (0:00:00.452) 0:00:32.145 *******
2025-04-09 10:13:57.646295 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-04-09 10:13:57.647020 | orchestrator |
2025-04-09 10:13:57.647578 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:13:57.648410 | orchestrator | Wednesday 09 April 2025 10:13:57 +0000 (0:00:00.363) 0:00:32.509 *******
2025-04-09 10:13:58.195108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-04-09 10:13:58.195513 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-04-09 10:13:58.196876 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-04-09 10:13:58.198529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-04-09 10:13:58.200852 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-04-09 10:13:58.201987 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-04-09 10:13:58.202895 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-04-09 10:13:58.202938 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-04-09 10:13:58.203754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-04-09 10:13:58.204459 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-04-09 10:13:58.205148 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-04-09 10:13:58.205581 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-04-09 10:13:58.206295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-04-09 10:13:58.206773 | orchestrator |
2025-04-09 10:13:58.207293 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:13:58.208075 | orchestrator | Wednesday 09 April 2025 10:13:58 +0000 (0:00:00.547) 0:00:33.057 *******
2025-04-09 10:13:58.396330 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:13:58.396819 | orchestrator |
2025-04-09 10:13:58.396862 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:13:58.397701 | orchestrator | Wednesday 09 April 2025 10:13:58 +0000 (0:00:00.197) 0:00:33.254 *******
2025-04-09 10:13:58.602261 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:13:58.602836 | orchestrator |
2025-04-09 10:13:58.603885 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:13:58.604761 | orchestrator | Wednesday 09 April 2025 10:13:58 +0000 (0:00:00.207) 0:00:33.461 *******
2025-04-09 10:13:59.127536 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:13:59.127680 | orchestrator |
2025-04-09 10:13:59.128687 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:13:59.129475 | orchestrator | Wednesday 09 April 2025 10:13:59 +0000 (0:00:00.526) 0:00:33.988 *******
2025-04-09 10:13:59.356524 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:13:59.358365 | orchestrator |
2025-04-09 10:13:59.358424 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:13:59.359007 | orchestrator | Wednesday 09 April 2025 10:13:59 +0000 (0:00:00.229) 0:00:34.217 *******
2025-04-09 10:13:59.577410 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:13:59.793130 | orchestrator |
2025-04-09 10:13:59.793181 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:13:59.793192 | orchestrator | Wednesday 09 April 2025 10:13:59 +0000 (0:00:00.219) 0:00:34.436 *******
2025-04-09 10:13:59.793239 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:13:59.793698 | orchestrator |
2025-04-09
10:13:59.794692 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:13:59.795638 | orchestrator | Wednesday 09 April 2025 10:13:59 +0000 (0:00:00.219) 0:00:34.656 ******* 2025-04-09 10:14:00.019483 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:00.020179 | orchestrator | 2025-04-09 10:14:00.021239 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:14:00.022594 | orchestrator | Wednesday 09 April 2025 10:14:00 +0000 (0:00:00.226) 0:00:34.882 ******* 2025-04-09 10:14:00.230702 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:00.233863 | orchestrator | 2025-04-09 10:14:00.233896 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:14:00.897286 | orchestrator | Wednesday 09 April 2025 10:14:00 +0000 (0:00:00.209) 0:00:35.092 ******* 2025-04-09 10:14:00.897403 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-04-09 10:14:00.897773 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-04-09 10:14:00.897860 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-04-09 10:14:00.899231 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-04-09 10:14:00.899976 | orchestrator | 2025-04-09 10:14:00.900981 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:14:00.901337 | orchestrator | Wednesday 09 April 2025 10:14:00 +0000 (0:00:00.666) 0:00:35.758 ******* 2025-04-09 10:14:01.115806 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:01.116660 | orchestrator | 2025-04-09 10:14:01.118077 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:14:01.118772 | orchestrator | Wednesday 09 April 2025 10:14:01 +0000 (0:00:00.220) 0:00:35.979 ******* 2025-04-09 10:14:01.303452 | orchestrator | skipping: [testbed-node-4] 2025-04-09 
10:14:01.305252 | orchestrator | 2025-04-09 10:14:01.305940 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:14:01.307685 | orchestrator | Wednesday 09 April 2025 10:14:01 +0000 (0:00:00.187) 0:00:36.167 ******* 2025-04-09 10:14:01.526783 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:01.528746 | orchestrator | 2025-04-09 10:14:01.530151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-09 10:14:01.530883 | orchestrator | Wednesday 09 April 2025 10:14:01 +0000 (0:00:00.222) 0:00:36.389 ******* 2025-04-09 10:14:02.200343 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:02.201434 | orchestrator | 2025-04-09 10:14:02.202534 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-04-09 10:14:02.203372 | orchestrator | Wednesday 09 April 2025 10:14:02 +0000 (0:00:00.673) 0:00:37.063 ******* 2025-04-09 10:14:02.339362 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:02.340889 | orchestrator | 2025-04-09 10:14:02.343827 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-04-09 10:14:02.582390 | orchestrator | Wednesday 09 April 2025 10:14:02 +0000 (0:00:00.139) 0:00:37.202 ******* 2025-04-09 10:14:02.582451 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2af2dad7-be7b-5062-ac12-4fd441a74994'}}) 2025-04-09 10:14:02.583729 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '33569830-f4e2-59af-bd76-781c4d067c52'}}) 2025-04-09 10:14:02.583758 | orchestrator | 2025-04-09 10:14:02.583781 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-04-09 10:14:02.585035 | orchestrator | Wednesday 09 April 2025 10:14:02 +0000 (0:00:00.241) 0:00:37.444 ******* 2025-04-09 10:14:04.765494 | orchestrator | changed: 
[testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'}) 2025-04-09 10:14:04.766839 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'}) 2025-04-09 10:14:04.769056 | orchestrator | 2025-04-09 10:14:04.770331 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-04-09 10:14:04.771287 | orchestrator | Wednesday 09 April 2025 10:14:04 +0000 (0:00:02.183) 0:00:39.627 ******* 2025-04-09 10:14:04.947009 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})  2025-04-09 10:14:04.947743 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})  2025-04-09 10:14:04.948509 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:04.949628 | orchestrator | 2025-04-09 10:14:04.950892 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-04-09 10:14:06.335785 | orchestrator | Wednesday 09 April 2025 10:14:04 +0000 (0:00:00.182) 0:00:39.809 ******* 2025-04-09 10:14:06.335911 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'}) 2025-04-09 10:14:06.337415 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'}) 2025-04-09 10:14:06.338611 | orchestrator | 2025-04-09 10:14:06.339792 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-04-09 10:14:06.341315 | orchestrator | Wednesday 09 April 
2025 10:14:06 +0000 (0:00:01.387) 0:00:41.197 ******* 2025-04-09 10:14:06.507652 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})  2025-04-09 10:14:06.508866 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})  2025-04-09 10:14:06.510340 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:06.511388 | orchestrator | 2025-04-09 10:14:06.512472 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-04-09 10:14:06.513326 | orchestrator | Wednesday 09 April 2025 10:14:06 +0000 (0:00:00.173) 0:00:41.370 ******* 2025-04-09 10:14:06.659395 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:06.659677 | orchestrator | 2025-04-09 10:14:06.660337 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-04-09 10:14:06.660865 | orchestrator | Wednesday 09 April 2025 10:14:06 +0000 (0:00:00.152) 0:00:41.523 ******* 2025-04-09 10:14:06.853467 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})  2025-04-09 10:14:06.853949 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})  2025-04-09 10:14:06.854106 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:06.854934 | orchestrator | 2025-04-09 10:14:06.858551 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-04-09 10:14:06.859885 | orchestrator | Wednesday 09 April 2025 10:14:06 +0000 (0:00:00.192) 0:00:41.715 ******* 2025-04-09 10:14:07.205524 | orchestrator | skipping: [testbed-node-4] 2025-04-09 
10:14:07.206906 | orchestrator | 2025-04-09 10:14:07.208101 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-04-09 10:14:07.208703 | orchestrator | Wednesday 09 April 2025 10:14:07 +0000 (0:00:00.353) 0:00:42.069 ******* 2025-04-09 10:14:07.418351 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})  2025-04-09 10:14:07.418494 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})  2025-04-09 10:14:07.418711 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:07.418961 | orchestrator | 2025-04-09 10:14:07.419173 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-04-09 10:14:07.419423 | orchestrator | Wednesday 09 April 2025 10:14:07 +0000 (0:00:00.213) 0:00:42.282 ******* 2025-04-09 10:14:07.568891 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:07.569403 | orchestrator | 2025-04-09 10:14:07.571346 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-04-09 10:14:07.793080 | orchestrator | Wednesday 09 April 2025 10:14:07 +0000 (0:00:00.149) 0:00:42.432 ******* 2025-04-09 10:14:07.793169 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})  2025-04-09 10:14:07.794623 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})  2025-04-09 10:14:07.796329 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:07.797594 | orchestrator | 2025-04-09 10:14:07.798601 | orchestrator | TASK [Prepare variables for OSD count check] 
*********************************** 2025-04-09 10:14:07.799328 | orchestrator | Wednesday 09 April 2025 10:14:07 +0000 (0:00:00.223) 0:00:42.655 ******* 2025-04-09 10:14:07.952082 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:14:07.953124 | orchestrator | 2025-04-09 10:14:07.955299 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-04-09 10:14:08.130944 | orchestrator | Wednesday 09 April 2025 10:14:07 +0000 (0:00:00.157) 0:00:42.813 ******* 2025-04-09 10:14:08.131020 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})  2025-04-09 10:14:08.131136 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})  2025-04-09 10:14:08.132330 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:08.132697 | orchestrator | 2025-04-09 10:14:08.133490 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-04-09 10:14:08.134532 | orchestrator | Wednesday 09 April 2025 10:14:08 +0000 (0:00:00.179) 0:00:42.993 ******* 2025-04-09 10:14:08.322456 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})  2025-04-09 10:14:08.323131 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})  2025-04-09 10:14:08.323666 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:08.324500 | orchestrator | 2025-04-09 10:14:08.325607 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-04-09 10:14:08.326102 | orchestrator | Wednesday 09 April 2025 10:14:08 +0000 (0:00:00.192) 0:00:43.185 
******* 2025-04-09 10:14:08.491173 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})  2025-04-09 10:14:08.492003 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})  2025-04-09 10:14:08.493467 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:08.493890 | orchestrator | 2025-04-09 10:14:08.495313 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-04-09 10:14:08.495733 | orchestrator | Wednesday 09 April 2025 10:14:08 +0000 (0:00:00.168) 0:00:43.354 ******* 2025-04-09 10:14:08.633127 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:08.634073 | orchestrator | 2025-04-09 10:14:08.634438 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-04-09 10:14:08.635262 | orchestrator | Wednesday 09 April 2025 10:14:08 +0000 (0:00:00.141) 0:00:43.496 ******* 2025-04-09 10:14:08.784675 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:08.785753 | orchestrator | 2025-04-09 10:14:08.786831 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-04-09 10:14:08.787935 | orchestrator | Wednesday 09 April 2025 10:14:08 +0000 (0:00:00.150) 0:00:43.646 ******* 2025-04-09 10:14:08.973142 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:08.974109 | orchestrator | 2025-04-09 10:14:08.975287 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-04-09 10:14:08.976266 | orchestrator | Wednesday 09 April 2025 10:14:08 +0000 (0:00:00.187) 0:00:43.834 ******* 2025-04-09 10:14:09.142799 | orchestrator | ok: [testbed-node-4] => { 2025-04-09 10:14:09.509688 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-04-09 
10:14:09.509777 | orchestrator | } 2025-04-09 10:14:09.509794 | orchestrator | 2025-04-09 10:14:09.509809 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-04-09 10:14:09.509823 | orchestrator | Wednesday 09 April 2025 10:14:09 +0000 (0:00:00.168) 0:00:44.003 ******* 2025-04-09 10:14:09.509850 | orchestrator | ok: [testbed-node-4] => { 2025-04-09 10:14:09.510692 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-04-09 10:14:09.513191 | orchestrator | } 2025-04-09 10:14:09.514147 | orchestrator | 2025-04-09 10:14:09.514180 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-04-09 10:14:09.514929 | orchestrator | Wednesday 09 April 2025 10:14:09 +0000 (0:00:00.368) 0:00:44.371 ******* 2025-04-09 10:14:09.656831 | orchestrator | ok: [testbed-node-4] => { 2025-04-09 10:14:09.657503 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-04-09 10:14:09.658088 | orchestrator | } 2025-04-09 10:14:09.658559 | orchestrator | 2025-04-09 10:14:09.658934 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-04-09 10:14:09.659311 | orchestrator | Wednesday 09 April 2025 10:14:09 +0000 (0:00:00.148) 0:00:44.520 ******* 2025-04-09 10:14:10.211391 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:14:10.211557 | orchestrator | 2025-04-09 10:14:10.211945 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-04-09 10:14:10.212393 | orchestrator | Wednesday 09 April 2025 10:14:10 +0000 (0:00:00.552) 0:00:45.072 ******* 2025-04-09 10:14:10.710398 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:14:10.710562 | orchestrator | 2025-04-09 10:14:10.710925 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-04-09 10:14:10.711103 | orchestrator | Wednesday 09 April 2025 10:14:10 +0000 (0:00:00.498) 0:00:45.571 ******* 2025-04-09 
10:14:11.246999 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:14:11.247645 | orchestrator | 2025-04-09 10:14:11.247913 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-04-09 10:14:11.248740 | orchestrator | Wednesday 09 April 2025 10:14:11 +0000 (0:00:00.538) 0:00:46.109 ******* 2025-04-09 10:14:11.406638 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:14:11.407167 | orchestrator | 2025-04-09 10:14:11.407973 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-04-09 10:14:11.408352 | orchestrator | Wednesday 09 April 2025 10:14:11 +0000 (0:00:00.161) 0:00:46.270 ******* 2025-04-09 10:14:11.522757 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:11.523425 | orchestrator | 2025-04-09 10:14:11.524632 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-04-09 10:14:11.525679 | orchestrator | Wednesday 09 April 2025 10:14:11 +0000 (0:00:00.115) 0:00:46.386 ******* 2025-04-09 10:14:11.646952 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:11.649537 | orchestrator | 2025-04-09 10:14:11.651755 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-04-09 10:14:11.651795 | orchestrator | Wednesday 09 April 2025 10:14:11 +0000 (0:00:00.121) 0:00:46.508 ******* 2025-04-09 10:14:11.795388 | orchestrator | ok: [testbed-node-4] => { 2025-04-09 10:14:11.796515 | orchestrator |  "vgs_report": { 2025-04-09 10:14:11.797197 | orchestrator |  "vg": [] 2025-04-09 10:14:11.798163 | orchestrator |  } 2025-04-09 10:14:11.799328 | orchestrator | } 2025-04-09 10:14:11.802758 | orchestrator | 2025-04-09 10:14:11.803080 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-04-09 10:14:11.803704 | orchestrator | Wednesday 09 April 2025 10:14:11 +0000 (0:00:00.150) 0:00:46.658 ******* 2025-04-09 10:14:11.969925 | 
orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:11.975693 | orchestrator | 2025-04-09 10:14:12.130895 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-04-09 10:14:12.130937 | orchestrator | Wednesday 09 April 2025 10:14:11 +0000 (0:00:00.172) 0:00:46.831 ******* 2025-04-09 10:14:12.130959 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:12.131312 | orchestrator | 2025-04-09 10:14:12.131883 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-04-09 10:14:12.132705 | orchestrator | Wednesday 09 April 2025 10:14:12 +0000 (0:00:00.163) 0:00:46.995 ******* 2025-04-09 10:14:12.493160 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:12.494147 | orchestrator | 2025-04-09 10:14:12.495928 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-04-09 10:14:12.496160 | orchestrator | Wednesday 09 April 2025 10:14:12 +0000 (0:00:00.361) 0:00:47.357 ******* 2025-04-09 10:14:12.671789 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:12.672820 | orchestrator | 2025-04-09 10:14:12.673429 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-04-09 10:14:12.674422 | orchestrator | Wednesday 09 April 2025 10:14:12 +0000 (0:00:00.177) 0:00:47.534 ******* 2025-04-09 10:14:12.811947 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:12.813055 | orchestrator | 2025-04-09 10:14:12.813666 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-04-09 10:14:12.814947 | orchestrator | Wednesday 09 April 2025 10:14:12 +0000 (0:00:00.140) 0:00:47.675 ******* 2025-04-09 10:14:12.963344 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:12.964292 | orchestrator | 2025-04-09 10:14:12.964328 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 
2025-04-09 10:14:12.964349 | orchestrator | Wednesday 09 April 2025 10:14:12 +0000 (0:00:00.151) 0:00:47.826 ******* 2025-04-09 10:14:13.129496 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:13.129642 | orchestrator | 2025-04-09 10:14:13.129672 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-04-09 10:14:13.129778 | orchestrator | Wednesday 09 April 2025 10:14:13 +0000 (0:00:00.166) 0:00:47.993 ******* 2025-04-09 10:14:13.267006 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:13.267127 | orchestrator | 2025-04-09 10:14:13.267152 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-04-09 10:14:13.267401 | orchestrator | Wednesday 09 April 2025 10:14:13 +0000 (0:00:00.138) 0:00:48.131 ******* 2025-04-09 10:14:13.423055 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:13.423276 | orchestrator | 2025-04-09 10:14:13.423301 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-04-09 10:14:13.423322 | orchestrator | Wednesday 09 April 2025 10:14:13 +0000 (0:00:00.155) 0:00:48.287 ******* 2025-04-09 10:14:13.561457 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:13.563075 | orchestrator | 2025-04-09 10:14:13.563187 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-04-09 10:14:13.565324 | orchestrator | Wednesday 09 April 2025 10:14:13 +0000 (0:00:00.137) 0:00:48.424 ******* 2025-04-09 10:14:13.690805 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:13.691733 | orchestrator | 2025-04-09 10:14:13.691771 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-04-09 10:14:13.693006 | orchestrator | Wednesday 09 April 2025 10:14:13 +0000 (0:00:00.128) 0:00:48.553 ******* 2025-04-09 10:14:13.832578 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:13.832819 
| orchestrator | 2025-04-09 10:14:13.833059 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-04-09 10:14:13.833483 | orchestrator | Wednesday 09 April 2025 10:14:13 +0000 (0:00:00.143) 0:00:48.696 ******* 2025-04-09 10:14:13.983914 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:13.984050 | orchestrator | 2025-04-09 10:14:13.984076 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-04-09 10:14:13.985123 | orchestrator | Wednesday 09 April 2025 10:14:13 +0000 (0:00:00.151) 0:00:48.848 ******* 2025-04-09 10:14:14.142616 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:14.142896 | orchestrator | 2025-04-09 10:14:14.143104 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-04-09 10:14:14.144187 | orchestrator | Wednesday 09 April 2025 10:14:14 +0000 (0:00:00.158) 0:00:49.006 ******* 2025-04-09 10:14:14.526080 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})  2025-04-09 10:14:14.527327 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})  2025-04-09 10:14:14.527364 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:14.528088 | orchestrator | 2025-04-09 10:14:14.528560 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-04-09 10:14:14.528844 | orchestrator | Wednesday 09 April 2025 10:14:14 +0000 (0:00:00.383) 0:00:49.390 ******* 2025-04-09 10:14:14.715455 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})  2025-04-09 10:14:14.715876 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})  2025-04-09 10:14:14.716718 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:14.717890 | orchestrator | 2025-04-09 10:14:14.718644 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-04-09 10:14:14.719634 | orchestrator | Wednesday 09 April 2025 10:14:14 +0000 (0:00:00.188) 0:00:49.578 ******* 2025-04-09 10:14:14.910796 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})  2025-04-09 10:14:14.911616 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})  2025-04-09 10:14:14.912661 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:14.914957 | orchestrator | 2025-04-09 10:14:14.915889 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-04-09 10:14:14.917270 | orchestrator | Wednesday 09 April 2025 10:14:14 +0000 (0:00:00.194) 0:00:49.773 ******* 2025-04-09 10:14:15.083940 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})  2025-04-09 10:14:15.084957 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})  2025-04-09 10:14:15.094126 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:15.094802 | orchestrator | 2025-04-09 10:14:15.096135 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-04-09 10:14:15.096578 | orchestrator | Wednesday 09 April 2025 10:14:15 +0000 (0:00:00.172) 0:00:49.945 ******* 2025-04-09 
10:14:15.258716 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})  2025-04-09 10:14:15.259202 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})  2025-04-09 10:14:15.259937 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:15.260482 | orchestrator | 2025-04-09 10:14:15.261121 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-04-09 10:14:15.261542 | orchestrator | Wednesday 09 April 2025 10:14:15 +0000 (0:00:00.177) 0:00:50.122 ******* 2025-04-09 10:14:15.444957 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})  2025-04-09 10:14:15.446145 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})  2025-04-09 10:14:15.447013 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:15.448298 | orchestrator | 2025-04-09 10:14:15.449333 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-04-09 10:14:15.450247 | orchestrator | Wednesday 09 April 2025 10:14:15 +0000 (0:00:00.185) 0:00:50.308 ******* 2025-04-09 10:14:15.645401 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})  2025-04-09 10:14:15.646192 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})  2025-04-09 10:14:15.647572 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:14:15.648837 | orchestrator | 
2025-04-09 10:14:15.650483 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-04-09 10:14:15.650723 | orchestrator | Wednesday 09 April 2025 10:14:15 +0000 (0:00:00.200) 0:00:50.508 *******
2025-04-09 10:14:15.810350 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})
2025-04-09 10:14:15.811237 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})
2025-04-09 10:14:15.811702 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:14:15.812027 | orchestrator |
2025-04-09 10:14:15.812731 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-04-09 10:14:15.813188 | orchestrator | Wednesday 09 April 2025 10:14:15 +0000 (0:00:00.166) 0:00:50.674 *******
2025-04-09 10:14:16.347690 | orchestrator | ok: [testbed-node-4]
2025-04-09 10:14:16.348189 | orchestrator |
2025-04-09 10:14:16.348273 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-04-09 10:14:16.348380 | orchestrator | Wednesday 09 April 2025 10:14:16 +0000 (0:00:00.535) 0:00:51.210 *******
2025-04-09 10:14:16.873415 | orchestrator | ok: [testbed-node-4]
2025-04-09 10:14:16.873592 | orchestrator |
2025-04-09 10:14:16.874152 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-04-09 10:14:16.874744 | orchestrator | Wednesday 09 April 2025 10:14:16 +0000 (0:00:00.527) 0:00:51.737 *******
2025-04-09 10:14:17.040560 | orchestrator | ok: [testbed-node-4]
2025-04-09 10:14:17.041399 | orchestrator |
2025-04-09 10:14:17.041712 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-04-09 10:14:17.042142 | orchestrator | Wednesday 09 April 2025 10:14:17 +0000 (0:00:00.167) 0:00:51.904 *******
2025-04-09 10:14:17.469872 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'vg_name': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})
2025-04-09 10:14:17.470556 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'vg_name': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})
2025-04-09 10:14:17.471018 | orchestrator |
2025-04-09 10:14:17.471843 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-04-09 10:14:17.472168 | orchestrator | Wednesday 09 April 2025 10:14:17 +0000 (0:00:00.428) 0:00:52.333 *******
2025-04-09 10:14:17.648905 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})
2025-04-09 10:14:17.649783 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})
2025-04-09 10:14:17.650294 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:14:17.650921 | orchestrator |
2025-04-09 10:14:17.651734 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-04-09 10:14:17.651942 | orchestrator | Wednesday 09 April 2025 10:14:17 +0000 (0:00:00.177) 0:00:52.511 *******
2025-04-09 10:14:17.822180 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})
2025-04-09 10:14:17.823057 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})
2025-04-09 10:14:17.823955 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:14:17.824484 | orchestrator |
2025-04-09 10:14:17.825492 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-04-09 10:14:17.825990 | orchestrator | Wednesday 09 April 2025 10:14:17 +0000 (0:00:00.173) 0:00:52.684 *******
2025-04-09 10:14:17.998523 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994', 'data_vg': 'ceph-2af2dad7-be7b-5062-ac12-4fd441a74994'})
2025-04-09 10:14:17.999295 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-33569830-f4e2-59af-bd76-781c4d067c52', 'data_vg': 'ceph-33569830-f4e2-59af-bd76-781c4d067c52'})
2025-04-09 10:14:17.999804 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:14:18.000052 | orchestrator |
2025-04-09 10:14:18.001172 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-04-09 10:14:18.943964 | orchestrator | Wednesday 09 April 2025 10:14:17 +0000 (0:00:00.178) 0:00:52.862 *******
2025-04-09 10:14:18.944195 | orchestrator | ok: [testbed-node-4] => {
2025-04-09 10:14:18.944321 | orchestrator |     "lvm_report": {
2025-04-09 10:14:18.945790 | orchestrator |         "lv": [
2025-04-09 10:14:18.948329 | orchestrator |             {
2025-04-09 10:14:18.949072 | orchestrator |                 "lv_name": "osd-block-2af2dad7-be7b-5062-ac12-4fd441a74994",
2025-04-09 10:14:18.950256 | orchestrator |                 "vg_name": "ceph-2af2dad7-be7b-5062-ac12-4fd441a74994"
2025-04-09 10:14:18.951891 | orchestrator |             },
2025-04-09 10:14:18.952688 | orchestrator |             {
2025-04-09 10:14:18.953033 | orchestrator |                 "lv_name": "osd-block-33569830-f4e2-59af-bd76-781c4d067c52",
2025-04-09 10:14:18.953705 | orchestrator |                 "vg_name": "ceph-33569830-f4e2-59af-bd76-781c4d067c52"
2025-04-09 10:14:18.953991 | orchestrator |             }
2025-04-09 10:14:18.955382 | orchestrator |         ],
2025-04-09 10:14:18.955758 | orchestrator |         "pv": [
2025-04-09 10:14:18.956568 | orchestrator |             {
2025-04-09 10:14:18.957174 | orchestrator |                 "pv_name": "/dev/sdb",
2025-04-09 10:14:18.958146 | orchestrator |                 "vg_name": "ceph-2af2dad7-be7b-5062-ac12-4fd441a74994"
2025-04-09 10:14:18.958506 | orchestrator |             },
2025-04-09 10:14:18.959139 | orchestrator |             {
2025-04-09 10:14:18.960249 | orchestrator |                 "pv_name": "/dev/sdc",
2025-04-09 10:14:18.960745 | orchestrator |                 "vg_name": "ceph-33569830-f4e2-59af-bd76-781c4d067c52"
2025-04-09 10:14:18.961084 | orchestrator |             }
2025-04-09 10:14:18.961258 | orchestrator |         ]
2025-04-09 10:14:18.961850 | orchestrator |     }
2025-04-09 10:14:18.962258 | orchestrator | }
2025-04-09 10:14:18.962741 | orchestrator |
2025-04-09 10:14:18.963167 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-04-09 10:14:18.963869 | orchestrator |
2025-04-09 10:14:18.964076 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-04-09 10:14:18.964679 | orchestrator | Wednesday 09 April 2025 10:14:18 +0000 (0:00:00.942) 0:00:53.805 *******
2025-04-09 10:14:19.212357 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-04-09 10:14:19.215905 | orchestrator |
2025-04-09 10:14:19.483352 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-04-09 10:14:19.483446 | orchestrator | Wednesday 09 April 2025 10:14:19 +0000 (0:00:00.268) 0:00:54.073 *******
2025-04-09 10:14:19.483472 | orchestrator | ok: [testbed-node-5]
2025-04-09 10:14:19.484289 | orchestrator |
2025-04-09 10:14:19.484953 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:14:19.487278 | orchestrator | Wednesday 09 April 2025 10:14:19 +0000 (0:00:00.271) 0:00:54.345 *******
2025-04-09 10:14:19.961366 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-04-09 10:14:19.961971 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-04-09 10:14:19.963351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-04-09 10:14:19.964403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-04-09 10:14:19.965698 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-04-09 10:14:19.966761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-04-09 10:14:19.967938 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-04-09 10:14:19.968681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-04-09 10:14:19.969693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-04-09 10:14:19.970070 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-04-09 10:14:19.970470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-04-09 10:14:19.971106 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-04-09 10:14:19.971332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-04-09 10:14:19.972038 | orchestrator |
2025-04-09 10:14:19.972353 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:14:19.973306 | orchestrator | Wednesday 09 April 2025 10:14:19 +0000 (0:00:00.476) 0:00:54.822 *******
2025-04-09 10:14:20.181092 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:20.181295 | orchestrator |
2025-04-09 10:14:20.181682 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:14:20.181969 | orchestrator | Wednesday 09 April 2025 10:14:20 +0000 (0:00:00.220) 0:00:55.042 *******
2025-04-09 10:14:20.399526 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:20.400774 | orchestrator |
2025-04-09 10:14:20.402640 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:14:20.403167 | orchestrator | Wednesday 09 April 2025 10:14:20 +0000 (0:00:00.219) 0:00:55.262 *******
2025-04-09 10:14:20.620064 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:20.620760 | orchestrator |
2025-04-09 10:14:20.622245 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:14:20.622740 | orchestrator | Wednesday 09 April 2025 10:14:20 +0000 (0:00:00.221) 0:00:55.483 *******
2025-04-09 10:14:20.845538 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:20.845881 | orchestrator |
2025-04-09 10:14:20.846250 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:14:20.846497 | orchestrator | Wednesday 09 April 2025 10:14:20 +0000 (0:00:00.226) 0:00:55.709 *******
2025-04-09 10:14:21.067378 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:21.068872 | orchestrator |
2025-04-09 10:14:21.069459 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:14:21.072146 | orchestrator | Wednesday 09 April 2025 10:14:21 +0000 (0:00:00.220) 0:00:55.930 *******
2025-04-09 10:14:21.258871 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:21.261391 | orchestrator |
2025-04-09 10:14:21.262122 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:14:21.263249 | orchestrator | Wednesday 09 April 2025 10:14:21 +0000 (0:00:00.191) 0:00:56.122 *******
2025-04-09 10:14:21.756474 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:21.756972 | orchestrator |
2025-04-09 10:14:21.758503 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:14:21.759388 | orchestrator | Wednesday 09 April 2025 10:14:21 +0000 (0:00:00.497) 0:00:56.619 *******
2025-04-09 10:14:21.959833 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:21.960922 | orchestrator |
2025-04-09 10:14:21.961559 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:14:21.962488 | orchestrator | Wednesday 09 April 2025 10:14:21 +0000 (0:00:00.202) 0:00:56.822 *******
2025-04-09 10:14:22.426605 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3cfbe416-4a6d-4367-87e7-69d2ca3c8539)
2025-04-09 10:14:22.426830 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3cfbe416-4a6d-4367-87e7-69d2ca3c8539)
2025-04-09 10:14:22.427147 | orchestrator |
2025-04-09 10:14:22.427180 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:14:22.427807 | orchestrator | Wednesday 09 April 2025 10:14:22 +0000 (0:00:00.465) 0:00:57.288 *******
2025-04-09 10:14:22.906738 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c4741289-30df-4db6-9178-491638aa0447)
2025-04-09 10:14:22.907077 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c4741289-30df-4db6-9178-491638aa0447)
2025-04-09 10:14:22.907110 | orchestrator |
2025-04-09 10:14:22.907137 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:14:22.908096 | orchestrator | Wednesday 09 April 2025 10:14:22 +0000 (0:00:00.478) 0:00:57.767 *******
2025-04-09 10:14:23.351563 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_80ef2f8b-b45e-4bed-a63c-5dbd52e64749)
2025-04-09 10:14:23.353379 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_80ef2f8b-b45e-4bed-a63c-5dbd52e64749)
2025-04-09 10:14:23.353726 | orchestrator |
2025-04-09 10:14:23.354333 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:14:23.355129 | orchestrator | Wednesday 09 April 2025 10:14:23 +0000 (0:00:00.446) 0:00:58.213 *******
2025-04-09 10:14:23.824019 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d077077e-8074-4e28-961e-4d10ae0af6bd)
2025-04-09 10:14:23.825000 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d077077e-8074-4e28-961e-4d10ae0af6bd)
2025-04-09 10:14:23.825642 | orchestrator |
2025-04-09 10:14:23.825673 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-09 10:14:23.826265 | orchestrator | Wednesday 09 April 2025 10:14:23 +0000 (0:00:00.471) 0:00:58.685 *******
2025-04-09 10:14:24.169623 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-04-09 10:14:24.170236 | orchestrator |
2025-04-09 10:14:24.170272 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:14:24.171071 | orchestrator | Wednesday 09 April 2025 10:14:24 +0000 (0:00:00.345) 0:00:59.031 *******
2025-04-09 10:14:24.674013 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-04-09 10:14:24.675016 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-04-09 10:14:24.676029 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-04-09 10:14:24.676059 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-04-09 10:14:24.676114 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-04-09 10:14:24.676962 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-04-09 10:14:24.677805 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-04-09 10:14:24.678252 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-04-09 10:14:24.678747 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-04-09 10:14:24.681737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-04-09 10:14:24.899904 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-04-09 10:14:24.899976 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-04-09 10:14:24.899993 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-04-09 10:14:24.900008 | orchestrator |
2025-04-09 10:14:24.900024 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:14:24.900040 | orchestrator | Wednesday 09 April 2025 10:14:24 +0000 (0:00:00.505) 0:00:59.537 *******
2025-04-09 10:14:24.900066 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:24.900882 | orchestrator |
2025-04-09 10:14:24.901713 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:14:24.902775 | orchestrator | Wednesday 09 April 2025 10:14:24 +0000 (0:00:00.226) 0:00:59.763 *******
2025-04-09 10:14:25.510420 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:25.511372 | orchestrator |
2025-04-09 10:14:25.512588 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:14:25.513793 | orchestrator | Wednesday 09 April 2025 10:14:25 +0000 (0:00:00.608) 0:01:00.372 *******
2025-04-09 10:14:25.741069 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:25.741384 | orchestrator |
2025-04-09 10:14:25.741421 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:14:25.742109 | orchestrator | Wednesday 09 April 2025 10:14:25 +0000 (0:00:00.232) 0:01:00.604 *******
2025-04-09 10:14:26.002611 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:26.003002 | orchestrator |
2025-04-09 10:14:26.003830 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:14:26.004996 | orchestrator | Wednesday 09 April 2025 10:14:25 +0000 (0:00:00.261) 0:01:00.865 *******
2025-04-09 10:14:26.227166 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:26.227375 | orchestrator |
2025-04-09 10:14:26.227906 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:14:26.228979 | orchestrator | Wednesday 09 April 2025 10:14:26 +0000 (0:00:00.223) 0:01:01.089 *******
2025-04-09 10:14:26.448286 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:26.448628 | orchestrator |
2025-04-09 10:14:26.448660 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:14:26.449959 | orchestrator | Wednesday 09 April 2025 10:14:26 +0000 (0:00:00.221) 0:01:01.310 *******
2025-04-09 10:14:26.647734 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:26.648880 | orchestrator |
2025-04-09 10:14:26.649821 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:14:26.650477 | orchestrator | Wednesday 09 April 2025 10:14:26 +0000 (0:00:00.200) 0:01:01.511 *******
2025-04-09 10:14:26.855187 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:26.856664 | orchestrator |
2025-04-09 10:14:26.856969 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:14:26.858344 | orchestrator | Wednesday 09 April 2025 10:14:26 +0000 (0:00:00.207) 0:01:01.718 *******
2025-04-09 10:14:27.782337 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-04-09 10:14:27.783486 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-04-09 10:14:27.784229 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-04-09 10:14:27.785112 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-04-09 10:14:27.785374 | orchestrator |
2025-04-09 10:14:27.785992 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:14:27.786576 | orchestrator | Wednesday 09 April 2025 10:14:27 +0000 (0:00:00.923) 0:01:02.641 *******
2025-04-09 10:14:28.007859 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:28.008802 | orchestrator |
2025-04-09 10:14:28.009913 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:14:28.010777 | orchestrator | Wednesday 09 April 2025 10:14:28 +0000 (0:00:00.229) 0:01:02.871 *******
2025-04-09 10:14:28.204088 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:28.204841 | orchestrator |
2025-04-09 10:14:28.204874 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:14:28.204896 | orchestrator | Wednesday 09 April 2025 10:14:28 +0000 (0:00:00.195) 0:01:03.067 *******
2025-04-09 10:14:28.862477 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:28.863306 | orchestrator |
2025-04-09 10:14:28.864183 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-09 10:14:28.867076 | orchestrator | Wednesday 09 April 2025 10:14:28 +0000 (0:00:00.657) 0:01:03.724 *******
2025-04-09 10:14:29.090110 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:29.090828 | orchestrator |
2025-04-09 10:14:29.091288 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-04-09 10:14:29.091900 | orchestrator | Wednesday 09 April 2025 10:14:29 +0000 (0:00:00.229) 0:01:03.954 *******
2025-04-09 10:14:29.258402 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:29.259429 | orchestrator |
2025-04-09 10:14:29.260135 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-04-09 10:14:29.260790 | orchestrator | Wednesday 09 April 2025 10:14:29 +0000 (0:00:00.167) 0:01:04.121 *******
2025-04-09 10:14:29.502585 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dc0074f0-3710-5c5e-ae84-22c546993d85'}})
2025-04-09 10:14:29.503245 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '07984c01-fdec-5bf7-a01d-ec4b418f7e1e'}})
2025-04-09 10:14:29.504080 | orchestrator |
2025-04-09 10:14:29.505112 | orchestrator | TASK [Create block VGs] ********************************************************
2025-04-09 10:14:29.507585 | orchestrator | Wednesday 09 April 2025 10:14:29 +0000 (0:00:00.244) 0:01:04.365 *******
2025-04-09 10:14:31.582155 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})
2025-04-09 10:14:31.582421 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})
2025-04-09 10:14:31.582985 | orchestrator |
2025-04-09 10:14:31.583725 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-04-09 10:14:31.585399 | orchestrator | Wednesday 09 April 2025 10:14:31 +0000 (0:00:02.078) 0:01:06.444 *******
2025-04-09 10:14:31.755080 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})
2025-04-09 10:14:31.755931 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})
2025-04-09 10:14:31.758656 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:31.758720 | orchestrator |
2025-04-09 10:14:31.758739 | orchestrator | TASK [Create block LVs] ********************************************************
2025-04-09 10:14:31.758760 | orchestrator | Wednesday 09 April 2025 10:14:31 +0000 (0:00:00.172) 0:01:06.616 *******
2025-04-09 10:14:33.051492 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})
2025-04-09 10:14:33.052107 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})
2025-04-09 10:14:33.052720 | orchestrator |
2025-04-09 10:14:33.053350 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-04-09 10:14:33.054089 | orchestrator | Wednesday 09 April 2025 10:14:33 +0000 (0:00:01.296) 0:01:07.913 *******
2025-04-09 10:14:33.241981 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})
2025-04-09 10:14:33.242317 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})
2025-04-09 10:14:33.243363 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:33.243394 | orchestrator |
2025-04-09 10:14:33.243653 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-04-09 10:14:33.244184 | orchestrator | Wednesday 09 April 2025 10:14:33 +0000 (0:00:00.192) 0:01:08.105 *******
2025-04-09 10:14:33.401726 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:33.402713 | orchestrator |
2025-04-09 10:14:33.404116 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-04-09 10:14:33.406398 | orchestrator | Wednesday 09 April 2025 10:14:33 +0000 (0:00:00.159) 0:01:08.265 *******
2025-04-09 10:14:33.796243 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})
2025-04-09 10:14:33.796418 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})
2025-04-09 10:14:33.797478 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:33.799200 | orchestrator |
2025-04-09 10:14:33.800255 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-04-09 10:14:33.800289 | orchestrator | Wednesday 09 April 2025 10:14:33 +0000 (0:00:00.152) 0:01:08.657 *******
2025-04-09 10:14:33.948520 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:33.949714 | orchestrator |
2025-04-09 10:14:33.951696 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-04-09 10:14:34.134793 | orchestrator | Wednesday 09 April 2025 10:14:33 +0000 (0:00:00.152) 0:01:08.809 *******
2025-04-09 10:14:34.134868 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})
2025-04-09 10:14:34.135787 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})
2025-04-09 10:14:34.137160 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:34.138156 | orchestrator |
2025-04-09 10:14:34.139121 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-04-09 10:14:34.140362 | orchestrator | Wednesday 09 April 2025 10:14:34 +0000 (0:00:00.188) 0:01:08.997 *******
2025-04-09 10:14:34.288072 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:34.289300 | orchestrator |
2025-04-09 10:14:34.290286 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-04-09 10:14:34.291426 | orchestrator | Wednesday 09 April 2025 10:14:34 +0000 (0:00:00.152) 0:01:09.150 *******
2025-04-09 10:14:34.457384 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})
2025-04-09 10:14:34.458563 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})
2025-04-09 10:14:34.460147 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:34.460947 | orchestrator |
2025-04-09 10:14:34.462117 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-04-09 10:14:34.463063 | orchestrator | Wednesday 09 April 2025 10:14:34 +0000 (0:00:00.166) 0:01:09.317 *******
2025-04-09 10:14:34.605776 | orchestrator | ok: [testbed-node-5]
2025-04-09 10:14:34.606269 | orchestrator |
2025-04-09 10:14:34.608049 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-04-09 10:14:34.609085 | orchestrator | Wednesday 09 April 2025 10:14:34 +0000 (0:00:00.151) 0:01:09.469 *******
2025-04-09 10:14:34.790609 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})
2025-04-09 10:14:34.791877 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})
2025-04-09 10:14:34.792891 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:34.794103 | orchestrator |
2025-04-09 10:14:34.795289 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-04-09 10:14:34.796314 | orchestrator | Wednesday 09 April 2025 10:14:34 +0000 (0:00:00.183) 0:01:09.652 *******
2025-04-09 10:14:34.964824 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})
2025-04-09 10:14:34.966236 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})
2025-04-09 10:14:34.968477 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:34.969318 | orchestrator |
2025-04-09 10:14:34.969553 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-04-09 10:14:34.970304 | orchestrator | Wednesday 09 April 2025 10:14:34 +0000 (0:00:00.175) 0:01:09.827 *******
2025-04-09 10:14:35.157609 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})
2025-04-09 10:14:35.158376 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})
2025-04-09 10:14:35.160001 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:35.161308 | orchestrator |
2025-04-09 10:14:35.162189 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-04-09 10:14:35.162620 | orchestrator | Wednesday 09 April 2025 10:14:35 +0000 (0:00:00.192) 0:01:10.020 *******
2025-04-09 10:14:35.306927 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:35.307657 | orchestrator |
2025-04-09 10:14:35.307941 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-04-09 10:14:35.308659 | orchestrator | Wednesday 09 April 2025 10:14:35 +0000 (0:00:00.149) 0:01:10.170 *******
2025-04-09 10:14:35.486912 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:35.488006 | orchestrator |
2025-04-09 10:14:35.488034 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-04-09 10:14:35.488055 | orchestrator | Wednesday 09 April 2025 10:14:35 +0000 (0:00:00.171) 0:01:10.341 *******
2025-04-09 10:14:35.638086 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:35.638666 | orchestrator |
2025-04-09 10:14:35.639414 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-04-09 10:14:35.640600 | orchestrator | Wednesday 09 April 2025 10:14:35 +0000 (0:00:00.160) 0:01:10.501 *******
2025-04-09 10:14:36.048125 | orchestrator | ok: [testbed-node-5] => {
2025-04-09 10:14:36.049385 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-04-09 10:14:36.049750 | orchestrator | }
2025-04-09 10:14:36.050321 | orchestrator |
2025-04-09 10:14:36.050753 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-04-09 10:14:36.050986 | orchestrator | Wednesday 09 April 2025 10:14:36 +0000 (0:00:00.409) 0:01:10.911 *******
2025-04-09 10:14:36.229410 | orchestrator | ok: [testbed-node-5] => {
2025-04-09 10:14:36.230150 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-04-09 10:14:36.230180 | orchestrator | }
2025-04-09 10:14:36.230197 | orchestrator |
2025-04-09 10:14:36.230253 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-04-09 10:14:36.230273 | orchestrator | Wednesday 09 April 2025 10:14:36 +0000 (0:00:00.166) 0:01:11.077 *******
2025-04-09 10:14:36.395427 | orchestrator | ok: [testbed-node-5] => {
2025-04-09 10:14:36.395981 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-04-09 10:14:36.398883 | orchestrator | }
2025-04-09 10:14:36.399040 | orchestrator |
2025-04-09 10:14:36.399067 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-04-09 10:14:36.400262 | orchestrator | Wednesday 09 April 2025 10:14:36 +0000 (0:00:00.179) 0:01:11.257 *******
2025-04-09 10:14:36.919454 | orchestrator | ok: [testbed-node-5]
2025-04-09 10:14:36.920232 | orchestrator |
2025-04-09 10:14:36.921204 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-04-09 10:14:36.923084 | orchestrator | Wednesday 09 April 2025 10:14:36 +0000 (0:00:00.561) 0:01:11.781 *******
2025-04-09 10:14:37.480262 | orchestrator | ok: [testbed-node-5]
2025-04-09 10:14:37.483071 | orchestrator |
2025-04-09 10:14:37.483553 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-04-09 10:14:37.485009 | orchestrator | Wednesday 09 April 2025 10:14:37 +0000 (0:00:00.519) 0:01:12.343 *******
2025-04-09 10:14:38.001304 | orchestrator | ok: [testbed-node-5]
2025-04-09 10:14:38.002657 | orchestrator |
2025-04-09 10:14:38.003756 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-04-09 10:14:38.005058 | orchestrator | Wednesday 09 April 2025 10:14:37 +0000 (0:00:00.167) 0:01:12.862 *******
2025-04-09 10:14:38.166533 | orchestrator | ok: [testbed-node-5]
2025-04-09 10:14:38.167149 | orchestrator |
2025-04-09 10:14:38.167876 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-04-09 10:14:38.168792 | orchestrator | Wednesday 09 April 2025 10:14:38 +0000 (0:00:00.111) 0:01:13.030 *******
2025-04-09 10:14:38.279354 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:38.279962 | orchestrator |
2025-04-09 10:14:38.280829 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-04-09 10:14:38.283235 | orchestrator | Wednesday 09 April 2025 10:14:38 +0000 (0:00:00.131) 0:01:13.141 *******
2025-04-09 10:14:38.410364 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:38.411452 | orchestrator |
2025-04-09 10:14:38.412630 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-04-09 10:14:38.413761 | orchestrator | Wednesday 09 April 2025 10:14:38 +0000 (0:00:00.156) 0:01:13.272 *******
2025-04-09 10:14:38.566455 | orchestrator | ok: [testbed-node-5] => {
2025-04-09 10:14:38.567420 | orchestrator |     "vgs_report": {
2025-04-09 10:14:38.569417 | orchestrator |         "vg": []
2025-04-09 10:14:38.570155 | orchestrator |     }
2025-04-09 10:14:38.571535 | orchestrator | }
2025-04-09 10:14:38.572384 | orchestrator |
2025-04-09 10:14:38.573160 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-04-09 10:14:38.573901 | orchestrator | Wednesday 09 April 2025 10:14:38 +0000 (0:00:00.156) 0:01:13.429 *******
2025-04-09 10:14:38.742152 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:38.744119 | orchestrator |
2025-04-09 10:14:38.745091 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-04-09 10:14:38.749616 | orchestrator | Wednesday 09 April 2025 10:14:38 +0000 (0:00:00.175) 0:01:13.604 *******
2025-04-09 10:14:39.106317 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:39.107466 | orchestrator |
2025-04-09 10:14:39.107506 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-04-09 10:14:39.107687 | orchestrator | Wednesday 09 April 2025 10:14:39 +0000 (0:00:00.364) 0:01:13.969 *******
2025-04-09 10:14:39.251370 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:39.252252 | orchestrator |
2025-04-09 10:14:39.252284 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-04-09 10:14:39.252591 | orchestrator | Wednesday 09 April 2025 10:14:39 +0000 (0:00:00.145) 0:01:14.115 *******
2025-04-09 10:14:39.426983 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:39.427087 | orchestrator |
2025-04-09 10:14:39.428318 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-04-09 10:14:39.429188 | orchestrator | Wednesday 09 April 2025 10:14:39 +0000 (0:00:00.145) 0:01:14.289 *******
2025-04-09 10:14:39.571396 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:39.571678 | orchestrator |
2025-04-09 10:14:39.571855 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-04-09 10:14:39.572281 | orchestrator | Wednesday 09 April 2025 10:14:39 +0000 (0:00:00.145) 0:01:14.435 *******
2025-04-09 10:14:39.728948 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:39.729995 | orchestrator |
2025-04-09 10:14:39.730499 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-04-09 10:14:39.731771 | orchestrator | Wednesday 09 April 2025 10:14:39 +0000 (0:00:00.155) 0:01:14.590 *******
2025-04-09 10:14:39.880444 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:39.881838 | orchestrator |
2025-04-09 10:14:39.882419 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-04-09 10:14:39.883067 | orchestrator | Wednesday 09 April 2025 10:14:39 +0000 (0:00:00.151) 0:01:14.741 *******
2025-04-09 10:14:40.050830 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:40.051460 | orchestrator |
2025-04-09 10:14:40.052737 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-04-09 10:14:40.054167 | orchestrator | Wednesday 09 April 2025 10:14:40 +0000 (0:00:00.170) 0:01:14.912 *******
2025-04-09 10:14:40.215681 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:40.218596 | orchestrator |
2025-04-09 10:14:40.218632 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-04-09 10:14:40.219450 | orchestrator | Wednesday 09 April 2025 10:14:40 +0000 (0:00:00.163) 0:01:15.076 *******
2025-04-09 10:14:40.352855 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:40.354083 | orchestrator |
2025-04-09 10:14:40.354124 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-04-09 10:14:40.355055 | orchestrator | Wednesday 09 April 2025 10:14:40 +0000 (0:00:00.137) 0:01:15.214 *******
2025-04-09 10:14:40.501205 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:40.502270 | orchestrator |
2025-04-09 10:14:40.503372 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-04-09 10:14:40.504875 | orchestrator | Wednesday 09 April 2025 10:14:40 +0000 (0:00:00.155) 0:01:15.363 *******
2025-04-09 10:14:40.656440 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:40.657764 | orchestrator |
2025-04-09 10:14:40.659159 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-04-09 10:14:40.660026 | orchestrator | Wednesday 09 April 2025 10:14:40 +0000 (0:00:00.144) 0:01:15.519 *******
2025-04-09 10:14:40.803052 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:40.804284 | orchestrator |
2025-04-09 10:14:40.804326 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-04-09 10:14:40.805118 | orchestrator | Wednesday 09 April 2025 10:14:40 +0000 (0:00:00.144) 0:01:15.663 *******
2025-04-09 10:14:41.161859 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:41.162354 | orchestrator |
2025-04-09 10:14:41.163326 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-04-09 10:14:41.164089 |
orchestrator | Wednesday 09 April 2025 10:14:41 +0000 (0:00:00.362) 0:01:16.025 ******* 2025-04-09 10:14:41.343126 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})  2025-04-09 10:14:41.343418 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})  2025-04-09 10:14:41.344544 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:14:41.345402 | orchestrator | 2025-04-09 10:14:41.345910 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-04-09 10:14:41.349077 | orchestrator | Wednesday 09 April 2025 10:14:41 +0000 (0:00:00.180) 0:01:16.205 ******* 2025-04-09 10:14:41.525400 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})  2025-04-09 10:14:41.526165 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})  2025-04-09 10:14:41.527356 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:14:41.530557 | orchestrator | 2025-04-09 10:14:41.691288 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-04-09 10:14:41.691361 | orchestrator | Wednesday 09 April 2025 10:14:41 +0000 (0:00:00.182) 0:01:16.388 ******* 2025-04-09 10:14:41.691386 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})  2025-04-09 10:14:41.692378 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})  2025-04-09 
10:14:41.694296 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:14:41.695308 | orchestrator | 2025-04-09 10:14:41.696574 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-04-09 10:14:41.697257 | orchestrator | Wednesday 09 April 2025 10:14:41 +0000 (0:00:00.165) 0:01:16.554 ******* 2025-04-09 10:14:41.885108 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})  2025-04-09 10:14:41.886094 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})  2025-04-09 10:14:41.886879 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:14:41.887678 | orchestrator | 2025-04-09 10:14:41.888679 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-04-09 10:14:41.889190 | orchestrator | Wednesday 09 April 2025 10:14:41 +0000 (0:00:00.192) 0:01:16.746 ******* 2025-04-09 10:14:42.086146 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})  2025-04-09 10:14:42.086280 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})  2025-04-09 10:14:42.088048 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:14:42.088287 | orchestrator | 2025-04-09 10:14:42.089344 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-04-09 10:14:42.090618 | orchestrator | Wednesday 09 April 2025 10:14:42 +0000 (0:00:00.202) 0:01:16.949 ******* 2025-04-09 10:14:42.271514 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 
'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})  2025-04-09 10:14:42.273716 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})  2025-04-09 10:14:42.275933 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:14:42.278120 | orchestrator | 2025-04-09 10:14:42.279392 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-04-09 10:14:42.280637 | orchestrator | Wednesday 09 April 2025 10:14:42 +0000 (0:00:00.181) 0:01:17.131 ******* 2025-04-09 10:14:42.450162 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})  2025-04-09 10:14:42.451393 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})  2025-04-09 10:14:42.451436 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:14:42.451597 | orchestrator | 2025-04-09 10:14:42.452287 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-04-09 10:14:42.452712 | orchestrator | Wednesday 09 April 2025 10:14:42 +0000 (0:00:00.180) 0:01:17.312 ******* 2025-04-09 10:14:42.633369 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})  2025-04-09 10:14:42.633569 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})  2025-04-09 10:14:42.634349 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:14:42.635055 | orchestrator | 2025-04-09 10:14:42.635733 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-04-09 
10:14:42.636302 | orchestrator | Wednesday 09 April 2025 10:14:42 +0000 (0:00:00.185) 0:01:17.497 ******* 2025-04-09 10:14:43.164341 | orchestrator | ok: [testbed-node-5] 2025-04-09 10:14:43.164516 | orchestrator | 2025-04-09 10:14:43.164643 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-04-09 10:14:43.165333 | orchestrator | Wednesday 09 April 2025 10:14:43 +0000 (0:00:00.529) 0:01:18.026 ******* 2025-04-09 10:14:43.665351 | orchestrator | ok: [testbed-node-5] 2025-04-09 10:14:43.666094 | orchestrator | 2025-04-09 10:14:43.667388 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-04-09 10:14:43.668966 | orchestrator | Wednesday 09 April 2025 10:14:43 +0000 (0:00:00.501) 0:01:18.528 ******* 2025-04-09 10:14:44.041053 | orchestrator | ok: [testbed-node-5] 2025-04-09 10:14:44.042111 | orchestrator | 2025-04-09 10:14:44.043004 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-04-09 10:14:44.044173 | orchestrator | Wednesday 09 April 2025 10:14:44 +0000 (0:00:00.374) 0:01:18.902 ******* 2025-04-09 10:14:44.231240 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'vg_name': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'}) 2025-04-09 10:14:44.231743 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'vg_name': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'}) 2025-04-09 10:14:44.233321 | orchestrator | 2025-04-09 10:14:44.234486 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-04-09 10:14:44.236929 | orchestrator | Wednesday 09 April 2025 10:14:44 +0000 (0:00:00.191) 0:01:19.094 ******* 2025-04-09 10:14:44.413459 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 
'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})  2025-04-09 10:14:44.414108 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})  2025-04-09 10:14:44.414146 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:14:44.414838 | orchestrator | 2025-04-09 10:14:44.415362 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-04-09 10:14:44.416154 | orchestrator | Wednesday 09 April 2025 10:14:44 +0000 (0:00:00.181) 0:01:19.276 ******* 2025-04-09 10:14:44.596298 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})  2025-04-09 10:14:44.596443 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})  2025-04-09 10:14:44.597488 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:14:44.598665 | orchestrator | 2025-04-09 10:14:44.599389 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-04-09 10:14:44.600327 | orchestrator | Wednesday 09 April 2025 10:14:44 +0000 (0:00:00.181) 0:01:19.458 ******* 2025-04-09 10:14:44.778093 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85', 'data_vg': 'ceph-dc0074f0-3710-5c5e-ae84-22c546993d85'})  2025-04-09 10:14:44.778338 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e', 'data_vg': 'ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e'})  2025-04-09 10:14:44.779284 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:14:44.779706 | orchestrator | 2025-04-09 10:14:44.780504 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-04-09 
10:14:44.780959 | orchestrator | Wednesday 09 April 2025 10:14:44 +0000 (0:00:00.183) 0:01:19.641 ******* 2025-04-09 10:14:45.220185 | orchestrator | ok: [testbed-node-5] => { 2025-04-09 10:14:45.220366 | orchestrator |  "lvm_report": { 2025-04-09 10:14:45.221293 | orchestrator |  "lv": [ 2025-04-09 10:14:45.222292 | orchestrator |  { 2025-04-09 10:14:45.223056 | orchestrator |  "lv_name": "osd-block-07984c01-fdec-5bf7-a01d-ec4b418f7e1e", 2025-04-09 10:14:45.224779 | orchestrator |  "vg_name": "ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e" 2025-04-09 10:14:45.225260 | orchestrator |  }, 2025-04-09 10:14:45.225715 | orchestrator |  { 2025-04-09 10:14:45.226689 | orchestrator |  "lv_name": "osd-block-dc0074f0-3710-5c5e-ae84-22c546993d85", 2025-04-09 10:14:45.227141 | orchestrator |  "vg_name": "ceph-dc0074f0-3710-5c5e-ae84-22c546993d85" 2025-04-09 10:14:45.227695 | orchestrator |  } 2025-04-09 10:14:45.227958 | orchestrator |  ], 2025-04-09 10:14:45.228326 | orchestrator |  "pv": [ 2025-04-09 10:14:45.230004 | orchestrator |  { 2025-04-09 10:14:45.230675 | orchestrator |  "pv_name": "/dev/sdb", 2025-04-09 10:14:45.231625 | orchestrator |  "vg_name": "ceph-dc0074f0-3710-5c5e-ae84-22c546993d85" 2025-04-09 10:14:45.231829 | orchestrator |  }, 2025-04-09 10:14:45.232397 | orchestrator |  { 2025-04-09 10:14:45.233098 | orchestrator |  "pv_name": "/dev/sdc", 2025-04-09 10:14:45.233917 | orchestrator |  "vg_name": "ceph-07984c01-fdec-5bf7-a01d-ec4b418f7e1e" 2025-04-09 10:14:45.234299 | orchestrator |  } 2025-04-09 10:14:45.234788 | orchestrator |  ] 2025-04-09 10:14:45.235113 | orchestrator |  } 2025-04-09 10:14:45.235753 | orchestrator | } 2025-04-09 10:14:45.236367 | orchestrator | 2025-04-09 10:14:45.236885 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 10:14:45.237243 | orchestrator | 2025-04-09 10:14:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-04-09 10:14:45.237359 | orchestrator | 2025-04-09 10:14:45 | INFO  | Please wait and do not abort execution.
2025-04-09 10:14:45.239060 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-04-09 10:14:45.239320 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-04-09 10:14:45.239670 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-04-09 10:14:45.240284 | orchestrator |
2025-04-09 10:14:45.240909 | orchestrator |
2025-04-09 10:14:45.241246 | orchestrator |
2025-04-09 10:14:45.241589 | orchestrator | TASKS RECAP ********************************************************************
2025-04-09 10:14:45.241907 | orchestrator | Wednesday 09 April 2025 10:14:45 +0000 (0:00:00.439) 0:01:20.081 *******
2025-04-09 10:14:45.242240 | orchestrator | ===============================================================================
2025-04-09 10:14:45.242592 | orchestrator | Create block VGs -------------------------------------------------------- 6.58s
2025-04-09 10:14:45.243109 | orchestrator | Create block LVs -------------------------------------------------------- 4.25s
2025-04-09 10:14:45.243538 | orchestrator | Print LVM report data --------------------------------------------------- 2.10s
2025-04-09 10:14:45.244110 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.99s
2025-04-09 10:14:45.244822 | orchestrator | Add known links to the list of available block devices ------------------ 1.70s
2025-04-09 10:14:45.245081 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.63s
2025-04-09 10:14:45.245775 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.61s
2025-04-09 10:14:45.247567 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.61s
2025-04-09 10:14:45.249070 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.60s
2025-04-09 10:14:45.249938 | orchestrator | Add known partitions to the list of available block devices ------------- 1.53s
2025-04-09 10:14:45.251266 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.11s
2025-04-09 10:14:45.252459 | orchestrator | Add known partitions to the list of available block devices ------------- 0.92s
2025-04-09 10:14:45.253642 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.91s
2025-04-09 10:14:45.254372 | orchestrator | Add known links to the list of available block devices ------------------ 0.84s
2025-04-09 10:14:45.255003 | orchestrator | Create list of VG/LV names ---------------------------------------------- 0.82s
2025-04-09 10:14:45.255613 | orchestrator | Get initial list of available block devices ----------------------------- 0.78s
2025-04-09 10:14:45.256175 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.77s
2025-04-09 10:14:45.257095 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.77s
2025-04-09 10:14:45.257792 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.75s
2025-04-09 10:14:45.258124 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.74s
2025-04-09 10:14:47.470540 | orchestrator | 2025-04-09 10:14:47 | INFO  | Task 78a157a6-cc5c-4e04-97cc-37a36d090feb (facts) was prepared for execution.
2025-04-09 10:14:51.618699 | orchestrator | 2025-04-09 10:14:47 | INFO  | It takes a moment until task 78a157a6-cc5c-4e04-97cc-37a36d090feb (facts) has been started and output is visible here.
2025-04-09 10:14:51.618836 | orchestrator |
2025-04-09 10:14:51.620043 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-04-09 10:14:51.620072 | orchestrator |
2025-04-09 10:14:51.620094 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-04-09 10:14:51.621083 | orchestrator | Wednesday 09 April 2025 10:14:51 +0000 (0:00:00.288) 0:00:00.288 *******
2025-04-09 10:14:53.110446 | orchestrator | ok: [testbed-manager]
2025-04-09 10:14:53.113014 | orchestrator | ok: [testbed-node-3]
2025-04-09 10:14:53.114579 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:14:53.114610 | orchestrator | ok: [testbed-node-4]
2025-04-09 10:14:53.115485 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:14:53.115511 | orchestrator | ok: [testbed-node-5]
2025-04-09 10:14:53.116685 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:14:53.117872 | orchestrator |
2025-04-09 10:14:53.118617 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-04-09 10:14:53.119338 | orchestrator | Wednesday 09 April 2025 10:14:53 +0000 (0:00:01.493) 0:00:01.782 *******
2025-04-09 10:14:53.271656 | orchestrator | skipping: [testbed-manager]
2025-04-09 10:14:53.351009 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:14:53.430596 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:14:53.506926 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:14:53.583829 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:14:54.331840 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:14:54.334831 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:14:54.336421 | orchestrator |
2025-04-09 10:14:54.337055 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-04-09 10:14:54.339016 | orchestrator |
2025-04-09 10:14:54.339330 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-04-09 10:14:54.340830 | orchestrator | Wednesday 09 April 2025 10:14:54 +0000 (0:00:01.224) 0:00:03.006 *******
2025-04-09 10:14:59.518410 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:14:59.519089 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:14:59.520293 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:14:59.522815 | orchestrator | ok: [testbed-manager]
2025-04-09 10:14:59.523602 | orchestrator | ok: [testbed-node-5]
2025-04-09 10:14:59.523632 | orchestrator | ok: [testbed-node-3]
2025-04-09 10:14:59.523650 | orchestrator | ok: [testbed-node-4]
2025-04-09 10:14:59.523672 | orchestrator |
2025-04-09 10:14:59.524177 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-04-09 10:14:59.524843 | orchestrator |
2025-04-09 10:14:59.525511 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-04-09 10:14:59.526154 | orchestrator | Wednesday 09 April 2025 10:14:59 +0000 (0:00:05.187) 0:00:08.194 *******
2025-04-09 10:14:59.679664 | orchestrator | skipping: [testbed-manager]
2025-04-09 10:14:59.756205 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:14:59.831983 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:14:59.914535 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:14:59.993754 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:15:00.043683 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:15:00.044327 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:15:00.044858 | orchestrator |
2025-04-09 10:15:00.046270 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 10:15:00.046864 | orchestrator | 2025-04-09 10:15:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-09 10:15:00.047014 | orchestrator | 2025-04-09 10:15:00 | INFO  | Please wait and do not abort execution.
2025-04-09 10:15:00.048234 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 10:15:00.049110 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 10:15:00.049556 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 10:15:00.050300 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 10:15:00.051104 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 10:15:00.051637 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 10:15:00.052542 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-09 10:15:00.052808 | orchestrator |
2025-04-09 10:15:00.053641 | orchestrator |
2025-04-09 10:15:00.054782 | orchestrator | TASKS RECAP ********************************************************************
2025-04-09 10:15:00.055071 | orchestrator | Wednesday 09 April 2025 10:15:00 +0000 (0:00:00.526) 0:00:08.721 *******
2025-04-09 10:15:00.055802 | orchestrator | ===============================================================================
2025-04-09 10:15:00.056328 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.19s
2025-04-09 10:15:00.056971 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.49s
2025-04-09 10:15:00.057698 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.22s
2025-04-09 10:15:00.058069 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s
2025-04-09 10:15:00.660740 | orchestrator |
2025-04-09 10:15:00.663607 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Apr 9 10:15:00 UTC 2025
2025-04-09 10:15:02.281123 | orchestrator |
2025-04-09 10:15:02.281307 | orchestrator | 2025-04-09 10:15:02 | INFO  | Collection nutshell is prepared for execution
2025-04-09 10:15:02.281399 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [0] - dotfiles
2025-04-09 10:15:02.287087 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [0] - homer
2025-04-09 10:15:02.287156 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [0] - netdata
2025-04-09 10:15:02.287178 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [0] - openstackclient
2025-04-09 10:15:02.287314 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [0] - phpmyadmin
2025-04-09 10:15:02.287585 | orchestrator | 2025-04-09 10:15:02 | INFO  | A [0] - common
2025-04-09 10:15:02.289787 | orchestrator | 2025-04-09 10:15:02 | INFO  | A [1] -- loadbalancer
2025-04-09 10:15:02.290203 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [2] --- opensearch
2025-04-09 10:15:02.290368 | orchestrator | 2025-04-09 10:15:02 | INFO  | A [2] --- mariadb-ng
2025-04-09 10:15:02.290392 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [3] ---- horizon
2025-04-09 10:15:02.290412 | orchestrator | 2025-04-09 10:15:02 | INFO  | A [3] ---- keystone
2025-04-09 10:15:02.290473 | orchestrator | 2025-04-09 10:15:02 | INFO  | A [4] ----- neutron
2025-04-09 10:15:02.290494 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [5] ------ wait-for-nova
2025-04-09 10:15:02.290681 | orchestrator | 2025-04-09 10:15:02 | INFO  | A [5] ------ octavia
2025-04-09 10:15:02.292335 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [4] ----- barbican
2025-04-09 10:15:02.292569 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [4] ----- designate
2025-04-09 10:15:02.292616 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [4] ----- ironic
2025-04-09 10:15:02.292632 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [4] ----- placement
2025-04-09 10:15:02.292666 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [4] ----- magnum
2025-04-09 10:15:02.292681 | orchestrator | 2025-04-09 10:15:02 | INFO  | A [1] -- openvswitch
2025-04-09 10:15:02.292695 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [2] --- ovn
2025-04-09 10:15:02.292718 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [1] -- memcached
2025-04-09 10:15:02.292807 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [1] -- redis
2025-04-09 10:15:02.292828 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [1] -- rabbitmq-ng
2025-04-09 10:15:02.292848 | orchestrator | 2025-04-09 10:15:02 | INFO  | A [0] - kubernetes
2025-04-09 10:15:02.294264 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [1] -- kubeconfig
2025-04-09 10:15:02.294381 | orchestrator | 2025-04-09 10:15:02 | INFO  | A [1] -- copy-kubeconfig
2025-04-09 10:15:02.294409 | orchestrator | 2025-04-09 10:15:02 | INFO  | A [0] - ceph
2025-04-09 10:15:02.295806 | orchestrator | 2025-04-09 10:15:02 | INFO  | A [1] -- ceph-pools
2025-04-09 10:15:02.295937 | orchestrator | 2025-04-09 10:15:02 | INFO  | A [2] --- copy-ceph-keys
2025-04-09 10:15:02.296081 | orchestrator | 2025-04-09 10:15:02 | INFO  | A [3] ---- cephclient
2025-04-09 10:15:02.296110 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-04-09 10:15:02.296479 | orchestrator | 2025-04-09 10:15:02 | INFO  | A [4] ----- wait-for-keystone
2025-04-09 10:15:02.296511 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [5] ------ kolla-ceph-rgw
2025-04-09 10:15:02.296818 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [5] ------ glance
2025-04-09 10:15:02.296843 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [5] ------ cinder
2025-04-09 10:15:02.296859 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [5] ------ nova
2025-04-09 10:15:02.296879 | orchestrator | 2025-04-09 10:15:02 | INFO  | A [4] ----- prometheus
2025-04-09 10:15:02.516129 | orchestrator | 2025-04-09 10:15:02 | INFO  | D [5] ------ grafana
2025-04-09 10:15:02.516267 | orchestrator | 2025-04-09 10:15:02 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-04-09 10:15:05.025073 | orchestrator | 2025-04-09 10:15:02 | INFO  | Tasks are running in the background
2025-04-09 10:15:05.025237 | orchestrator | 2025-04-09 10:15:05 | INFO  | No task IDs specified, wait for all currently running tasks
2025-04-09 10:15:07.213309 | orchestrator | 2025-04-09 10:15:07 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:15:07.217410 | orchestrator | 2025-04-09 10:15:07 | INFO  | Task 86aebb84-efee-4a42-8954-de6c224d67d9 is in state STARTED
2025-04-09 10:15:07.220522 | orchestrator | 2025-04-09 10:15:07 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:15:07.221003 | orchestrator | 2025-04-09 10:15:07 | INFO  | Task 4dde6773-a0a3-4dfe-8d7e-6cdf46bc0703 is in state STARTED
2025-04-09 10:15:07.223020 | orchestrator | 2025-04-09 10:15:07 | INFO  | Task 38a0fae9-068c-4806-8843-c39b65e87132 is in state STARTED
2025-04-09 10:15:07.226401 | orchestrator | 2025-04-09 10:15:07 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:15:07.228884 | orchestrator | 2025-04-09 10:15:07 | INFO  | Task 0c151790-7d13-482c-834c-ede39b27d746 is in state STARTED
2025-04-09 10:15:07.228949 | orchestrator | 2025-04-09 10:15:07 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:15:10.295827 | orchestrator | 2025-04-09 10:15:10 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:15:10.298888 | orchestrator | 2025-04-09 10:15:10 | INFO  | Task 86aebb84-efee-4a42-8954-de6c224d67d9 is in state STARTED
2025-04-09 10:15:10.298920 | orchestrator | 2025-04-09 10:15:10 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:15:10.303742 | orchestrator | 2025-04-09 10:15:10 | INFO  | Task 4dde6773-a0a3-4dfe-8d7e-6cdf46bc0703 is in state STARTED
2025-04-09 10:15:28.748312 | orchestrator | 2025-04-09 10:15:28 | INFO  | Task 86aebb84-efee-4a42-8954-de6c224d67d9 is in state STARTED 2025-04-09 10:15:28.750202 | orchestrator | 2025-04-09 10:15:28 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:15:28.750243 | orchestrator | 2025-04-09 10:15:28 | INFO  | Task 4dde6773-a0a3-4dfe-8d7e-6cdf46bc0703 is in state STARTED 2025-04-09 10:15:28.755787 | orchestrator | 2025-04-09 10:15:28 | INFO  | Task 38a0fae9-068c-4806-8843-c39b65e87132 is in state STARTED 2025-04-09 10:15:28.758118 | orchestrator | 2025-04-09 10:15:28 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED 2025-04-09 10:15:28.758274 | orchestrator | 2025-04-09 10:15:28 | INFO  | Task 0c151790-7d13-482c-834c-ede39b27d746 is in state STARTED 2025-04-09 10:15:31.823735 | orchestrator | 2025-04-09 10:15:28 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:15:31.823871 | orchestrator | 2025-04-09 10:15:31 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:15:31.826400 | orchestrator | 2025-04-09 10:15:31 | INFO  | Task 86aebb84-efee-4a42-8954-de6c224d67d9 is in state STARTED 2025-04-09 10:15:31.826430 | orchestrator | 2025-04-09 10:15:31 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:15:31.826444 | orchestrator | 2025-04-09 10:15:31 | INFO  | Task 4dde6773-a0a3-4dfe-8d7e-6cdf46bc0703 is in state STARTED 2025-04-09 10:15:31.826465 | orchestrator | 2025-04-09 10:15:31 | INFO  | Task 38a0fae9-068c-4806-8843-c39b65e87132 is in state STARTED 2025-04-09 10:15:31.835376 | orchestrator | 2025-04-09 10:15:31 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED 2025-04-09 10:15:31.836353 | orchestrator | 2025-04-09 10:15:31.836388 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-04-09 10:15:31.836403 | orchestrator | 2025-04-09 10:15:31.836418 | orchestrator 
| TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-04-09 10:15:31.836439 | orchestrator | Wednesday 09 April 2025 10:15:14 +0000 (0:00:00.562) 0:00:00.562 ******* 2025-04-09 10:15:31.836454 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:15:31.836470 | orchestrator | changed: [testbed-node-5] 2025-04-09 10:15:31.836484 | orchestrator | changed: [testbed-manager] 2025-04-09 10:15:31.836498 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:15:31.836512 | orchestrator | changed: [testbed-node-3] 2025-04-09 10:15:31.836526 | orchestrator | changed: [testbed-node-4] 2025-04-09 10:15:31.836540 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:15:31.836554 | orchestrator | 2025-04-09 10:15:31.836568 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-04-09 10:15:31.836582 | orchestrator | Wednesday 09 April 2025 10:15:18 +0000 (0:00:04.734) 0:00:05.297 ******* 2025-04-09 10:15:31.836597 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-04-09 10:15:31.836612 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-04-09 10:15:31.836632 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-04-09 10:15:31.836646 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-04-09 10:15:31.836660 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-04-09 10:15:31.836674 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-04-09 10:15:31.836688 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-04-09 10:15:31.836702 | orchestrator | 2025-04-09 10:15:31.836716 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-04-09 10:15:31.836730 | orchestrator | Wednesday 09 April 2025 10:15:20 +0000 (0:00:01.946) 0:00:07.244 ******* 2025-04-09 10:15:31.836748 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-09 10:15:19.790485', 'end': '2025-04-09 10:15:19.796893', 'delta': '0:00:00.006408', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-09 10:15:31.836793 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-09 10:15:19.837812', 'end': '2025-04-09 10:15:19.846762', 'delta': '0:00:00.008950', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-09 10:15:31.836810 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-09 10:15:19.752939', 'end': '2025-04-09 10:15:19.761502', 'delta': '0:00:00.008563', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-09 10:15:31.836845 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-09 10:15:19.805409', 'end': '2025-04-09 10:15:19.814487', 'delta': '0:00:00.009078', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-09 10:15:31.836862 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-09 10:15:19.863442', 'end': '2025-04-09 10:15:19.872905', 'delta': '0:00:00.009463', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-09 10:15:31.836876 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-09 10:15:20.160767', 'end': '2025-04-09 10:15:20.168878', 'delta': '0:00:00.008111', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-09 10:15:31.836903 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-09 10:15:20.343967', 'end': '2025-04-09 10:15:20.352561', 'delta': '0:00:00.008594', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-09 10:15:31.836918 | orchestrator | 2025-04-09 10:15:31.836933 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-04-09 10:15:31.836947 | orchestrator | Wednesday 09 April 2025 10:15:23 +0000 (0:00:02.940) 0:00:10.184 ******* 2025-04-09 10:15:31.836961 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-04-09 10:15:31.836975 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-04-09 10:15:31.836989 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-04-09 10:15:31.837003 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-04-09 10:15:31.837018 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-04-09 10:15:31.837032 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-04-09 10:15:31.837046 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-04-09 10:15:31.837059 | orchestrator | 2025-04-09 10:15:31.837074 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-04-09 10:15:31.837088 | orchestrator | Wednesday 09 April 2025 10:15:25 +0000 (0:00:02.126) 0:00:12.310 ******* 2025-04-09 10:15:31.837102 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-04-09 10:15:31.837116 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-04-09 10:15:31.837130 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-04-09 10:15:31.837144 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-04-09 10:15:31.837158 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-04-09 10:15:31.837172 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-04-09 10:15:31.837186 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-04-09 10:15:31.837200 | orchestrator | 2025-04-09 10:15:31.837239 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 10:15:31.837262 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:15:31.837298 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:15:31.837314 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:15:31.837328 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:15:31.837357 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:15:31.837374 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:15:31.837390 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:15:31.837405 | orchestrator | 2025-04-09 10:15:31.837421 | orchestrator | 2025-04-09 10:15:31.837436 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-04-09 10:15:31.837452 | orchestrator | Wednesday 09 April 2025 10:15:30 +0000 (0:00:04.788) 0:00:17.098 ******* 2025-04-09 10:15:31.837467 | orchestrator | =============================================================================== 2025-04-09 10:15:31.837483 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.79s 2025-04-09 10:15:31.837498 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.73s 2025-04-09 10:15:31.837514 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.94s 2025-04-09 10:15:31.837529 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.13s 2025-04-09 10:15:31.837545 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.95s 2025-04-09 10:15:31.837565 | orchestrator | 2025-04-09 10:15:31 | INFO  | Task 0c151790-7d13-482c-834c-ede39b27d746 is in state SUCCESS 2025-04-09 10:15:34.918390 | orchestrator | 2025-04-09 10:15:31 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:15:34.918528 | orchestrator | 2025-04-09 10:15:34 | INFO  | Task d5ea9e7a-bdc9-4bb9-9a84-c9ad305b28c6 is in state STARTED 2025-04-09 10:15:34.919460 | orchestrator | 2025-04-09 10:15:34 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:15:34.919488 | orchestrator | 2025-04-09 10:15:34 | INFO  | Task 86aebb84-efee-4a42-8954-de6c224d67d9 is in state STARTED 2025-04-09 10:15:34.919508 | orchestrator | 2025-04-09 10:15:34 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:15:34.921728 | orchestrator | 2025-04-09 10:15:34 | INFO  | Task 4dde6773-a0a3-4dfe-8d7e-6cdf46bc0703 is in state STARTED 2025-04-09 10:15:34.923778 | orchestrator | 2025-04-09 10:15:34 | INFO  | Task 38a0fae9-068c-4806-8843-c39b65e87132 is 
in state STARTED 2025-04-09 10:15:53.507442 | orchestrator | 2025-04-09 10:15:53 | INFO  | Task 86aebb84-efee-4a42-8954-de6c224d67d9 is in state SUCCESS 2025-04-09 10:16:11.976717 | orchestrator | 2025-04-09 10:16:11 | INFO  | Task 38a0fae9-068c-4806-8843-c39b65e87132 is in state SUCCESS 2025-04-09 10:16:21.179051 | orchestrator | 2025-04-09 10:16:21 | INFO  | Task d5ea9e7a-bdc9-4bb9-9a84-c9ad305b28c6 is in state STARTED 2025-04-09 10:16:21.183855 |
orchestrator | 2025-04-09 10:16:21 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:16:21.188770 | orchestrator | 2025-04-09 10:16:21 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:16:21.188792 | orchestrator | 2025-04-09 10:16:21 | INFO  | Task 4dde6773-a0a3-4dfe-8d7e-6cdf46bc0703 is in state STARTED 2025-04-09 10:16:21.190113 | orchestrator | 2025-04-09 10:16:21 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED 2025-04-09 10:16:21.190369 | orchestrator | 2025-04-09 10:16:21 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:16:24.250830 | orchestrator | 2025-04-09 10:16:24 | INFO  | Task d5ea9e7a-bdc9-4bb9-9a84-c9ad305b28c6 is in state STARTED 2025-04-09 10:16:24.251339 | orchestrator | 2025-04-09 10:16:24 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:16:24.254988 | orchestrator | 2025-04-09 10:16:24 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:16:24.255294 | orchestrator | 2025-04-09 10:16:24 | INFO  | Task 4dde6773-a0a3-4dfe-8d7e-6cdf46bc0703 is in state STARTED 2025-04-09 10:16:24.255322 | orchestrator | 2025-04-09 10:16:24 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED 2025-04-09 10:16:24.255343 | orchestrator | 2025-04-09 10:16:24 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:16:27.304888 | orchestrator | 2025-04-09 10:16:27 | INFO  | Task d5ea9e7a-bdc9-4bb9-9a84-c9ad305b28c6 is in state STARTED 2025-04-09 10:16:27.307338 | orchestrator | 2025-04-09 10:16:27 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:16:27.309451 | orchestrator | 2025-04-09 10:16:27 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:16:27.310203 | orchestrator | 2025-04-09 10:16:27 | INFO  | Task 4dde6773-a0a3-4dfe-8d7e-6cdf46bc0703 is in state SUCCESS 2025-04-09 10:16:27.319371 | 
orchestrator | 2025-04-09 10:16:27.319441 | orchestrator | 2025-04-09 10:16:27.319455 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-04-09 10:16:27.319469 | orchestrator | 2025-04-09 10:16:27.319482 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-04-09 10:16:27.319518 | orchestrator | Wednesday 09 April 2025 10:15:15 +0000 (0:00:00.650) 0:00:00.650 ******* 2025-04-09 10:16:27.319532 | orchestrator | ok: [testbed-manager] => { 2025-04-09 10:16:27.319548 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-04-09 10:16:27.319563 | orchestrator | } 2025-04-09 10:16:27.319576 | orchestrator | 2025-04-09 10:16:27.319589 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-04-09 10:16:27.319602 | orchestrator | Wednesday 09 April 2025 10:15:16 +0000 (0:00:00.519) 0:00:01.169 ******* 2025-04-09 10:16:27.319614 | orchestrator | ok: [testbed-manager] 2025-04-09 10:16:27.319628 | orchestrator | 2025-04-09 10:16:27.319641 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-04-09 10:16:27.319653 | orchestrator | Wednesday 09 April 2025 10:15:17 +0000 (0:00:01.349) 0:00:02.519 ******* 2025-04-09 10:16:27.319666 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-04-09 10:16:27.319678 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-04-09 10:16:27.319691 | orchestrator | 2025-04-09 10:16:27.319703 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-04-09 10:16:27.319716 | orchestrator | Wednesday 09 April 2025 10:15:18 +0000 (0:00:00.988) 0:00:03.508 ******* 2025-04-09 10:16:27.319728 | orchestrator | changed: [testbed-manager] 2025-04-09 10:16:27.319741 | orchestrator | 
2025-04-09 10:16:27.319753 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-04-09 10:16:27.319766 | orchestrator | Wednesday 09 April 2025 10:15:21 +0000 (0:00:02.972) 0:00:06.480 ******* 2025-04-09 10:16:27.319778 | orchestrator | changed: [testbed-manager] 2025-04-09 10:16:27.319791 | orchestrator | 2025-04-09 10:16:27.319803 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-04-09 10:16:27.319816 | orchestrator | Wednesday 09 April 2025 10:15:23 +0000 (0:00:01.792) 0:00:08.272 ******* 2025-04-09 10:16:27.319828 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2025-04-09 10:16:27.319841 | orchestrator | ok: [testbed-manager] 2025-04-09 10:16:27.319854 | orchestrator | 2025-04-09 10:16:27.319866 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-04-09 10:16:27.319879 | orchestrator | Wednesday 09 April 2025 10:15:48 +0000 (0:00:25.437) 0:00:33.710 ******* 2025-04-09 10:16:27.319891 | orchestrator | changed: [testbed-manager] 2025-04-09 10:16:27.319904 | orchestrator | 2025-04-09 10:16:27.319925 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 10:16:27.319940 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:16:27.319956 | orchestrator | 2025-04-09 10:16:27.319970 | orchestrator | 2025-04-09 10:16:27.319984 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-09 10:16:27.319998 | orchestrator | Wednesday 09 April 2025 10:15:51 +0000 (0:00:02.556) 0:00:36.267 ******* 2025-04-09 10:16:27.320012 | orchestrator | =============================================================================== 2025-04-09 10:16:27.320025 | orchestrator | osism.services.homer : Manage homer service 
---------------------------- 25.44s 2025-04-09 10:16:27.320039 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.97s 2025-04-09 10:16:27.320053 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.56s 2025-04-09 10:16:27.320067 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.79s 2025-04-09 10:16:27.320081 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.35s 2025-04-09 10:16:27.320094 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.99s 2025-04-09 10:16:27.320108 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.52s 2025-04-09 10:16:27.320122 | orchestrator | 2025-04-09 10:16:27.320142 | orchestrator | 2025-04-09 10:16:27.320157 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-04-09 10:16:27.320171 | orchestrator | 2025-04-09 10:16:27.320184 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-04-09 10:16:27.320198 | orchestrator | Wednesday 09 April 2025 10:15:15 +0000 (0:00:00.645) 0:00:00.645 ******* 2025-04-09 10:16:27.320212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-04-09 10:16:27.320287 | orchestrator | 2025-04-09 10:16:27.320302 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-04-09 10:16:27.320316 | orchestrator | Wednesday 09 April 2025 10:15:15 +0000 (0:00:00.210) 0:00:00.856 ******* 2025-04-09 10:16:27.320328 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-04-09 10:16:27.320340 | orchestrator | changed: [testbed-manager] => 
(item=/opt/openstackclient/data) 2025-04-09 10:16:27.320353 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-04-09 10:16:27.320365 | orchestrator | 2025-04-09 10:16:27.320378 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-04-09 10:16:27.320390 | orchestrator | Wednesday 09 April 2025 10:15:17 +0000 (0:00:01.431) 0:00:02.287 ******* 2025-04-09 10:16:27.320402 | orchestrator | changed: [testbed-manager] 2025-04-09 10:16:27.320415 | orchestrator | 2025-04-09 10:16:27.320428 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-04-09 10:16:27.320440 | orchestrator | Wednesday 09 April 2025 10:15:18 +0000 (0:00:01.767) 0:00:04.055 ******* 2025-04-09 10:16:27.320463 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-04-09 10:16:27.320477 | orchestrator | ok: [testbed-manager] 2025-04-09 10:16:27.320489 | orchestrator | 2025-04-09 10:16:27.320502 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-04-09 10:16:27.320514 | orchestrator | Wednesday 09 April 2025 10:16:01 +0000 (0:00:42.519) 0:00:46.574 ******* 2025-04-09 10:16:27.320527 | orchestrator | changed: [testbed-manager] 2025-04-09 10:16:27.320539 | orchestrator | 2025-04-09 10:16:27.320552 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-04-09 10:16:27.320562 | orchestrator | Wednesday 09 April 2025 10:16:03 +0000 (0:00:01.529) 0:00:48.104 ******* 2025-04-09 10:16:27.320572 | orchestrator | ok: [testbed-manager] 2025-04-09 10:16:27.320582 | orchestrator | 2025-04-09 10:16:27.320593 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-04-09 10:16:27.320603 | orchestrator | Wednesday 09 April 2025 10:16:04 +0000 (0:00:01.798) 0:00:49.903 ******* 2025-04-09 10:16:27.320613 
| orchestrator | changed: [testbed-manager] 2025-04-09 10:16:27.320623 | orchestrator | 2025-04-09 10:16:27.320634 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-04-09 10:16:27.320644 | orchestrator | Wednesday 09 April 2025 10:16:07 +0000 (0:00:02.744) 0:00:52.647 ******* 2025-04-09 10:16:27.320654 | orchestrator | changed: [testbed-manager] 2025-04-09 10:16:27.320664 | orchestrator | 2025-04-09 10:16:27.320674 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-04-09 10:16:27.320684 | orchestrator | Wednesday 09 April 2025 10:16:08 +0000 (0:00:01.010) 0:00:53.658 ******* 2025-04-09 10:16:27.320695 | orchestrator | changed: [testbed-manager] 2025-04-09 10:16:27.320705 | orchestrator | 2025-04-09 10:16:27.320715 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-04-09 10:16:27.320729 | orchestrator | Wednesday 09 April 2025 10:16:09 +0000 (0:00:00.719) 0:00:54.378 ******* 2025-04-09 10:16:27.320740 | orchestrator | ok: [testbed-manager] 2025-04-09 10:16:27.320750 | orchestrator | 2025-04-09 10:16:27.320760 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 10:16:27.320770 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:16:27.320788 | orchestrator | 2025-04-09 10:16:27.320798 | orchestrator | 2025-04-09 10:16:27.320809 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-09 10:16:27.320819 | orchestrator | Wednesday 09 April 2025 10:16:09 +0000 (0:00:00.445) 0:00:54.823 ******* 2025-04-09 10:16:27.320829 | orchestrator | =============================================================================== 2025-04-09 10:16:27.320839 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 
42.52s 2025-04-09 10:16:27.320850 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.75s 2025-04-09 10:16:27.320860 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.80s 2025-04-09 10:16:27.320870 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.77s 2025-04-09 10:16:27.320880 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.53s 2025-04-09 10:16:27.320891 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.43s 2025-04-09 10:16:27.320902 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.01s 2025-04-09 10:16:27.320912 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.72s 2025-04-09 10:16:27.320922 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.45s 2025-04-09 10:16:27.320932 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.21s 2025-04-09 10:16:27.320943 | orchestrator | 2025-04-09 10:16:27.320953 | orchestrator | 2025-04-09 10:16:27.320963 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-09 10:16:27.320974 | orchestrator | 2025-04-09 10:16:27.320984 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-09 10:16:27.320994 | orchestrator | Wednesday 09 April 2025 10:15:15 +0000 (0:00:00.444) 0:00:00.444 ******* 2025-04-09 10:16:27.321004 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-04-09 10:16:27.321015 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-04-09 10:16:27.321025 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-04-09 10:16:27.321035 | orchestrator | changed: [testbed-node-2] => 
(item=enable_netdata_True) 2025-04-09 10:16:27.321045 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-04-09 10:16:27.321055 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-04-09 10:16:27.321065 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-04-09 10:16:27.321075 | orchestrator | 2025-04-09 10:16:27.321086 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-04-09 10:16:27.321096 | orchestrator | 2025-04-09 10:16:27.321106 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-04-09 10:16:27.321116 | orchestrator | Wednesday 09 April 2025 10:15:18 +0000 (0:00:02.158) 0:00:02.603 ******* 2025-04-09 10:16:27.321141 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 10:16:27.321154 | orchestrator | 2025-04-09 10:16:27.321164 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-04-09 10:16:27.321174 | orchestrator | Wednesday 09 April 2025 10:15:21 +0000 (0:00:03.863) 0:00:06.466 ******* 2025-04-09 10:16:27.321185 | orchestrator | ok: [testbed-manager] 2025-04-09 10:16:27.321195 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:16:27.321205 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:16:27.321215 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:16:27.321240 | orchestrator | ok: [testbed-node-3] 2025-04-09 10:16:27.321255 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:16:27.321265 | orchestrator | ok: [testbed-node-5] 2025-04-09 10:16:27.321275 | orchestrator | 2025-04-09 10:16:27.321286 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-04-09 
10:16:27.321306 | orchestrator | Wednesday 09 April 2025 10:15:24 +0000 (0:00:02.881) 0:00:09.347 ******* 2025-04-09 10:16:27.321317 | orchestrator | ok: [testbed-manager] 2025-04-09 10:16:27.321327 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:16:27.321337 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:16:27.321347 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:16:27.321357 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:16:27.321367 | orchestrator | ok: [testbed-node-3] 2025-04-09 10:16:27.321377 | orchestrator | ok: [testbed-node-5] 2025-04-09 10:16:27.321387 | orchestrator | 2025-04-09 10:16:27.321398 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-04-09 10:16:27.321408 | orchestrator | Wednesday 09 April 2025 10:15:28 +0000 (0:00:03.824) 0:00:13.172 ******* 2025-04-09 10:16:27.321418 | orchestrator | changed: [testbed-manager] 2025-04-09 10:16:27.321429 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:16:27.321439 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:16:27.321453 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:16:27.321463 | orchestrator | changed: [testbed-node-3] 2025-04-09 10:16:27.321474 | orchestrator | changed: [testbed-node-4] 2025-04-09 10:16:27.321484 | orchestrator | changed: [testbed-node-5] 2025-04-09 10:16:27.321494 | orchestrator | 2025-04-09 10:16:27.321504 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-04-09 10:16:27.321514 | orchestrator | Wednesday 09 April 2025 10:15:31 +0000 (0:00:02.711) 0:00:15.884 ******* 2025-04-09 10:16:27.321524 | orchestrator | changed: [testbed-manager] 2025-04-09 10:16:27.321534 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:16:27.321545 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:16:27.321555 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:16:27.321565 | orchestrator | changed: [testbed-node-3] 2025-04-09 
10:16:27.321575 | orchestrator | changed: [testbed-node-4] 2025-04-09 10:16:27.321585 | orchestrator | changed: [testbed-node-5] 2025-04-09 10:16:27.321595 | orchestrator | 2025-04-09 10:16:27.321605 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-04-09 10:16:27.321616 | orchestrator | Wednesday 09 April 2025 10:15:40 +0000 (0:00:09.542) 0:00:25.426 ******* 2025-04-09 10:16:27.321626 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:16:27.321636 | orchestrator | changed: [testbed-node-3] 2025-04-09 10:16:27.321646 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:16:27.321656 | orchestrator | changed: [testbed-node-4] 2025-04-09 10:16:27.321666 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:16:27.321676 | orchestrator | changed: [testbed-node-5] 2025-04-09 10:16:27.321686 | orchestrator | changed: [testbed-manager] 2025-04-09 10:16:27.321697 | orchestrator | 2025-04-09 10:16:27.321710 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-04-09 10:16:27.321721 | orchestrator | Wednesday 09 April 2025 10:15:59 +0000 (0:00:18.885) 0:00:44.311 ******* 2025-04-09 10:16:27.321732 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 10:16:27.321746 | orchestrator | 2025-04-09 10:16:27.321756 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-04-09 10:16:27.321767 | orchestrator | Wednesday 09 April 2025 10:16:01 +0000 (0:00:01.718) 0:00:46.030 ******* 2025-04-09 10:16:27.321777 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-04-09 10:16:27.321787 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-04-09 10:16:27.321798 | orchestrator | changed: 
[testbed-node-0] => (item=netdata.conf) 2025-04-09 10:16:27.321808 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-04-09 10:16:27.321818 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-04-09 10:16:27.321829 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-04-09 10:16:27.321839 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-04-09 10:16:27.321849 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-04-09 10:16:27.321865 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-04-09 10:16:27.321875 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-04-09 10:16:27.321885 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-04-09 10:16:27.321896 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-04-09 10:16:27.321906 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-04-09 10:16:27.321916 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-04-09 10:16:27.321926 | orchestrator | 2025-04-09 10:16:27.321937 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-04-09 10:16:27.321947 | orchestrator | Wednesday 09 April 2025 10:16:09 +0000 (0:00:07.852) 0:00:53.883 ******* 2025-04-09 10:16:27.321957 | orchestrator | ok: [testbed-manager] 2025-04-09 10:16:27.321968 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:16:27.321978 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:16:27.321988 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:16:27.321998 | orchestrator | ok: [testbed-node-3] 2025-04-09 10:16:27.322008 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:16:27.322062 | orchestrator | ok: [testbed-node-5] 2025-04-09 10:16:27.322075 | orchestrator | 2025-04-09 10:16:27.322085 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-04-09 
10:16:27.322096 | orchestrator | Wednesday 09 April 2025 10:16:10 +0000 (0:00:01.675) 0:00:55.558 ******* 2025-04-09 10:16:27.322106 | orchestrator | changed: [testbed-manager] 2025-04-09 10:16:27.322116 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:16:27.322126 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:16:27.322137 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:16:27.322147 | orchestrator | changed: [testbed-node-3] 2025-04-09 10:16:27.322158 | orchestrator | changed: [testbed-node-4] 2025-04-09 10:16:27.322168 | orchestrator | changed: [testbed-node-5] 2025-04-09 10:16:27.322178 | orchestrator | 2025-04-09 10:16:27.322188 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-04-09 10:16:27.322205 | orchestrator | Wednesday 09 April 2025 10:16:13 +0000 (0:00:02.281) 0:00:57.839 ******* 2025-04-09 10:16:27.322216 | orchestrator | ok: [testbed-manager] 2025-04-09 10:16:27.322238 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:16:27.322249 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:16:27.322259 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:16:27.322270 | orchestrator | ok: [testbed-node-3] 2025-04-09 10:16:27.322280 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:16:27.322290 | orchestrator | ok: [testbed-node-5] 2025-04-09 10:16:27.322300 | orchestrator | 2025-04-09 10:16:27.322310 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-04-09 10:16:27.322321 | orchestrator | Wednesday 09 April 2025 10:16:15 +0000 (0:00:02.292) 0:01:00.132 ******* 2025-04-09 10:16:27.322331 | orchestrator | ok: [testbed-manager] 2025-04-09 10:16:27.322341 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:16:27.322352 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:16:27.322362 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:16:27.322372 | orchestrator | ok: [testbed-node-3] 2025-04-09 10:16:27.322382 | orchestrator | ok: 
[testbed-node-4] 2025-04-09 10:16:27.322392 | orchestrator | ok: [testbed-node-5] 2025-04-09 10:16:27.322402 | orchestrator | 2025-04-09 10:16:27.322413 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-04-09 10:16:27.322423 | orchestrator | Wednesday 09 April 2025 10:16:18 +0000 (0:00:02.460) 0:01:02.592 ******* 2025-04-09 10:16:27.322433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-04-09 10:16:27.323723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 10:16:27.323756 | orchestrator | 2025-04-09 10:16:27.323766 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-04-09 10:16:27.323787 | orchestrator | Wednesday 09 April 2025 10:16:19 +0000 (0:00:01.861) 0:01:04.454 ******* 2025-04-09 10:16:27.323796 | orchestrator | changed: [testbed-manager] 2025-04-09 10:16:27.323805 | orchestrator | 2025-04-09 10:16:27.323814 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-04-09 10:16:27.323823 | orchestrator | Wednesday 09 April 2025 10:16:22 +0000 (0:00:02.198) 0:01:06.652 ******* 2025-04-09 10:16:27.323831 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:16:27.323841 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:16:27.323849 | orchestrator | changed: [testbed-manager] 2025-04-09 10:16:27.323858 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:16:27.323867 | orchestrator | changed: [testbed-node-3] 2025-04-09 10:16:27.323885 | orchestrator | changed: [testbed-node-4] 2025-04-09 10:16:27.323895 | orchestrator | changed: [testbed-node-5] 2025-04-09 10:16:27.323903 | orchestrator | 2025-04-09 10:16:27.323912 | 
orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 10:16:27.323921 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:16:27.323931 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:16:27.323940 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:16:27.323954 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:16:27.323963 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:16:27.323972 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:16:27.323980 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:16:27.323989 | orchestrator | 2025-04-09 10:16:27.323997 | orchestrator | 2025-04-09 10:16:27.324006 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-09 10:16:27.324015 | orchestrator | Wednesday 09 April 2025 10:16:24 +0000 (0:00:02.817) 0:01:09.469 ******* 2025-04-09 10:16:27.324023 | orchestrator | =============================================================================== 2025-04-09 10:16:27.324032 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 18.89s 2025-04-09 10:16:27.324041 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.54s 2025-04-09 10:16:27.324049 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 7.85s 2025-04-09 10:16:27.324058 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.86s 2025-04-09 10:16:27.324066 | 
orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.82s 2025-04-09 10:16:27.324075 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.88s 2025-04-09 10:16:27.324083 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.82s 2025-04-09 10:16:27.324092 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.71s 2025-04-09 10:16:27.324100 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.46s 2025-04-09 10:16:27.324109 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.29s 2025-04-09 10:16:27.324118 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.28s 2025-04-09 10:16:27.324138 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.20s 2025-04-09 10:16:30.361977 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.16s 2025-04-09 10:16:30.362220 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.86s 2025-04-09 10:16:30.362280 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.72s 2025-04-09 10:16:30.362294 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.68s 2025-04-09 10:16:30.362310 | orchestrator | 2025-04-09 10:16:27 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED 2025-04-09 10:16:30.362323 | orchestrator | 2025-04-09 10:16:27 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:16:30.362371 | orchestrator | 2025-04-09 10:16:30 | INFO  | Task d5ea9e7a-bdc9-4bb9-9a84-c9ad305b28c6 is in state STARTED 2025-04-09 10:16:30.362459 | orchestrator | 2025-04-09 10:16:30 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 
2025-04-09 10:16:30.363213 | orchestrator | 2025-04-09 10:16:30 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:16:30.366005 | orchestrator | 2025-04-09 10:16:30 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:16:33.411035 | orchestrator | 2025-04-09 10:16:30 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:16:33.411162 | orchestrator | 2025-04-09 10:16:33 | INFO  | Task d5ea9e7a-bdc9-4bb9-9a84-c9ad305b28c6 is in state STARTED
2025-04-09 10:16:33.412178 | orchestrator | 2025-04-09 10:16:33 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:16:33.414252 | orchestrator | 2025-04-09 10:16:33 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:16:33.416825 | orchestrator | 2025-04-09 10:16:33 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:16:36.466178 | orchestrator | 2025-04-09 10:16:33 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:16:36.466341 | orchestrator | 2025-04-09 10:16:36 | INFO  | Task d5ea9e7a-bdc9-4bb9-9a84-c9ad305b28c6 is in state STARTED
2025-04-09 10:16:36.467832 | orchestrator | 2025-04-09 10:16:36 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:16:36.468809 | orchestrator | 2025-04-09 10:16:36 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:16:36.472813 | orchestrator | 2025-04-09 10:16:36 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:16:39.530443 | orchestrator | 2025-04-09 10:16:36 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:16:39.530550 | orchestrator | 2025-04-09 10:16:39 | INFO  | Task d5ea9e7a-bdc9-4bb9-9a84-c9ad305b28c6 is in state STARTED
2025-04-09 10:16:39.533393 | orchestrator | 2025-04-09 10:16:39 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:16:39.537259 | orchestrator | 2025-04-09 10:16:39 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:16:39.540755 | orchestrator | 2025-04-09 10:16:39 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:16:42.604754 | orchestrator | 2025-04-09 10:16:39 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:16:42.604881 | orchestrator | 2025-04-09 10:16:42 | INFO  | Task d5ea9e7a-bdc9-4bb9-9a84-c9ad305b28c6 is in state STARTED
2025-04-09 10:16:42.605688 | orchestrator | 2025-04-09 10:16:42 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:16:42.606719 | orchestrator | 2025-04-09 10:16:42 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:16:42.608871 | orchestrator | 2025-04-09 10:16:42 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:16:45.679170 | orchestrator | 2025-04-09 10:16:42 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:16:45.679309 | orchestrator | 2025-04-09 10:16:45 | INFO  | Task d5ea9e7a-bdc9-4bb9-9a84-c9ad305b28c6 is in state SUCCESS
2025-04-09 10:16:45.685950 | orchestrator | 2025-04-09 10:16:45 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:16:45.688095 | orchestrator | 2025-04-09 10:16:45 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:16:45.688125 | orchestrator | 2025-04-09 10:16:45 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:16:45.690759 | orchestrator | 2025-04-09 10:16:45 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:16:48.741133 | orchestrator | 2025-04-09 10:16:48 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:16:48.741955 | orchestrator | 2025-04-09 10:16:48 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:16:48.742823 | orchestrator | 2025-04-09 10:16:48 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:16:48.743080 | orchestrator | 2025-04-09 10:16:48 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:16:51.788968 | orchestrator | 2025-04-09 10:16:51 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:16:51.789529 | orchestrator | 2025-04-09 10:16:51 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:16:51.790702 | orchestrator | 2025-04-09 10:16:51 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:16:51.790826 | orchestrator | 2025-04-09 10:16:51 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:16:54.833018 | orchestrator | 2025-04-09 10:16:54 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:16:54.833191 | orchestrator | 2025-04-09 10:16:54 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:16:54.834165 | orchestrator | 2025-04-09 10:16:54 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:16:57.896515 | orchestrator | 2025-04-09 10:16:54 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:16:57.896652 | orchestrator | 2025-04-09 10:16:57 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:16:57.898078 | orchestrator | 2025-04-09 10:16:57 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:16:57.900565 | orchestrator | 2025-04-09 10:16:57 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:16:57.900817 | orchestrator | 2025-04-09 10:16:57 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:00.953201 | orchestrator | 2025-04-09 10:17:00 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:00.956593 | orchestrator | 2025-04-09 10:17:00 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:00.956633 | orchestrator | 2025-04-09 10:17:00 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:17:04.020557 | orchestrator | 2025-04-09 10:17:00 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:04.020678 | orchestrator | 2025-04-09 10:17:04 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:04.023045 | orchestrator | 2025-04-09 10:17:04 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:04.029349 | orchestrator | 2025-04-09 10:17:04 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:17:07.079469 | orchestrator | 2025-04-09 10:17:04 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:07.079614 | orchestrator | 2025-04-09 10:17:07 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:07.079696 | orchestrator | 2025-04-09 10:17:07 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:07.080184 | orchestrator | 2025-04-09 10:17:07 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:17:10.119507 | orchestrator | 2025-04-09 10:17:07 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:10.119644 | orchestrator | 2025-04-09 10:17:10 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:10.119727 | orchestrator | 2025-04-09 10:17:10 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:10.120339 | orchestrator | 2025-04-09 10:17:10 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:17:10.120406 | orchestrator | 2025-04-09 10:17:10 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:13.156659 | orchestrator | 2025-04-09 10:17:13 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:13.157050 | orchestrator | 2025-04-09 10:17:13 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:13.157685 | orchestrator | 2025-04-09 10:17:13 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:17:13.158084 | orchestrator | 2025-04-09 10:17:13 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:16.202898 | orchestrator | 2025-04-09 10:17:16 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:16.204189 | orchestrator | 2025-04-09 10:17:16 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:16.204874 | orchestrator | 2025-04-09 10:17:16 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:17:19.262849 | orchestrator | 2025-04-09 10:17:16 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:19.262984 | orchestrator | 2025-04-09 10:17:19 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:19.264782 | orchestrator | 2025-04-09 10:17:19 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:19.265493 | orchestrator | 2025-04-09 10:17:19 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:17:22.315618 | orchestrator | 2025-04-09 10:17:19 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:22.315744 | orchestrator | 2025-04-09 10:17:22 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:22.316918 | orchestrator | 2025-04-09 10:17:22 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:22.317730 | orchestrator | 2025-04-09 10:17:22 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:17:25.362590 | orchestrator | 2025-04-09 10:17:22 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:25.362720 | orchestrator | 2025-04-09 10:17:25 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:25.363070 | orchestrator | 2025-04-09 10:17:25 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:25.364014 | orchestrator | 2025-04-09 10:17:25 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:17:25.364310 | orchestrator | 2025-04-09 10:17:25 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:28.413373 | orchestrator | 2025-04-09 10:17:28 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:28.414281 | orchestrator | 2025-04-09 10:17:28 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:28.415057 | orchestrator | 2025-04-09 10:17:28 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:17:31.474519 | orchestrator | 2025-04-09 10:17:28 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:31.474657 | orchestrator | 2025-04-09 10:17:31 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:31.477066 | orchestrator | 2025-04-09 10:17:31 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:31.479761 | orchestrator | 2025-04-09 10:17:31 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:17:31.479828 | orchestrator | 2025-04-09 10:17:31 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:34.547177 | orchestrator | 2025-04-09 10:17:34 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:34.547383 | orchestrator | 2025-04-09 10:17:34 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:34.548929 | orchestrator | 2025-04-09 10:17:34 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:17:37.603973 | orchestrator | 2025-04-09 10:17:34 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:37.604106 | orchestrator | 2025-04-09 10:17:37 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:37.605772 | orchestrator | 2025-04-09 10:17:37 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:37.605805 | orchestrator | 2025-04-09 10:17:37 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:17:40.659795 | orchestrator | 2025-04-09 10:17:37 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:40.659920 | orchestrator | 2025-04-09 10:17:40 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:40.660422 | orchestrator | 2025-04-09 10:17:40 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:40.661964 | orchestrator | 2025-04-09 10:17:40 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state STARTED
2025-04-09 10:17:43.742349 | orchestrator | 2025-04-09 10:17:40 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:43.742494 | orchestrator | 2025-04-09 10:17:43 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:43.742589 | orchestrator | 2025-04-09 10:17:43 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:17:43.745187 | orchestrator | 2025-04-09 10:17:43 | INFO  | Task 90245d5f-61d1-4d9d-8147-1fddd6c102b1 is in state STARTED
2025-04-09 10:17:43.748697 | orchestrator | 2025-04-09 10:17:43 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:43.752354 | orchestrator | 2025-04-09 10:17:43 | INFO  | Task 4c0d271d-1a17-4f9b-b9a2-aaec1a0923ae is in state STARTED
2025-04-09 10:17:43.752981 | orchestrator | 2025-04-09 10:17:43 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:17:43.762802 | orchestrator | 2025-04-09 10:17:43 | INFO  | Task 37dc1ca3-dcdc-4b97-99d6-7e6b16484cb6 is in state SUCCESS
2025-04-09 10:17:43.764576 | orchestrator |
2025-04-09 10:17:43.764612 | orchestrator |
2025-04-09 10:17:43.764651 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-04-09 10:17:43.764666 | orchestrator |
2025-04-09 10:17:43.764681 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-04-09 10:17:43.764695 | orchestrator | Wednesday 09 April 2025 10:15:40 +0000 (0:00:00.365) 0:00:00.365 *******
2025-04-09 10:17:43.764709 | orchestrator | ok: [testbed-manager]
2025-04-09 10:17:43.764725 | orchestrator |
2025-04-09 10:17:43.764739 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-04-09 10:17:43.764754 | orchestrator | Wednesday 09 April 2025 10:15:42 +0000 (0:00:02.175) 0:00:02.540 *******
2025-04-09 10:17:43.764768 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-04-09 10:17:43.764782 | orchestrator |
2025-04-09 10:17:43.764797 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-04-09 10:17:43.764811 | orchestrator | Wednesday 09 April 2025 10:15:43 +0000 (0:00:01.666) 0:00:04.206 *******
2025-04-09 10:17:43.764826 | orchestrator | changed: [testbed-manager]
2025-04-09 10:17:43.764840 | orchestrator |
2025-04-09 10:17:43.764854 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-04-09 10:17:43.764868 | orchestrator | Wednesday 09 April 2025 10:15:45 +0000 (0:00:01.959) 0:00:06.166 *******
2025-04-09 10:17:43.764882 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-04-09 10:17:43.764896 | orchestrator | ok: [testbed-manager]
2025-04-09 10:17:43.764910 | orchestrator |
2025-04-09 10:17:43.764925 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-04-09 10:17:43.764939 | orchestrator | Wednesday 09 April 2025 10:16:41 +0000 (0:00:55.087) 0:01:01.253 *******
2025-04-09 10:17:43.764953 | orchestrator | changed: [testbed-manager]
2025-04-09 10:17:43.764966 | orchestrator |
2025-04-09 10:17:43.764980 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 10:17:43.764994 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 10:17:43.765010 | orchestrator |
2025-04-09 10:17:43.765024 | orchestrator |
2025-04-09 10:17:43.765038 | orchestrator | TASKS RECAP ********************************************************************
2025-04-09 10:17:43.765052 | orchestrator | Wednesday 09 April 2025 10:16:44 +0000 (0:00:03.817) 0:01:05.071 *******
2025-04-09 10:17:43.765066 | orchestrator | ===============================================================================
2025-04-09 10:17:43.765080 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 55.09s
2025-04-09 10:17:43.765094 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.82s
2025-04-09 10:17:43.765108 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.18s
2025-04-09 10:17:43.765122 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.96s
2025-04-09 10:17:43.765136 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.67s
2025-04-09 10:17:43.765150 | orchestrator |
2025-04-09 10:17:43.765164 | orchestrator |
2025-04-09 10:17:43.765178 | orchestrator | PLAY [Apply role common] *******************************************************
2025-04-09 10:17:43.765194 | orchestrator |
2025-04-09 10:17:43.765209 | orchestrator | TASK [common : include_tasks] **************************************************
2025-04-09 10:17:43.765225 | orchestrator | Wednesday 09 April 2025 10:15:06 +0000 (0:00:00.407) 0:00:00.407 *******
2025-04-09 10:17:43.765263 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 10:17:43.765281 | orchestrator |
2025-04-09 10:17:43.765298 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-04-09 10:17:43.765314 | orchestrator | Wednesday 09 April 2025 10:15:08 +0000 (0:00:01.832) 0:00:02.239 *******
2025-04-09 10:17:43.765330 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-09 10:17:43.765364 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-09 10:17:43.765388 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-09 10:17:43.765405 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-09 10:17:43.765421 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-09 10:17:43.765438 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-09 10:17:43.765454 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-04-09 10:17:43.765471 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-09 10:17:43.765487 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-09 10:17:43.765504 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-09 10:17:43.765520 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-09 10:17:43.765536 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-09 10:17:43.765552 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-09 10:17:43.765567 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-09 10:17:43.765582 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-09 10:17:43.765597 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-09 10:17:43.765623 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-04-09 10:17:43.765640 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-09 10:17:43.765660 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-09 10:17:43.765676 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-09 10:17:43.765691 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-04-09 10:17:43.765705 | orchestrator |
2025-04-09 10:17:43.765720 | orchestrator | TASK [common : include_tasks] **************************************************
2025-04-09 10:17:43.765735 | orchestrator | Wednesday 09 April 2025 10:15:13 +0000 (0:00:04.740) 0:00:06.980 *******
2025-04-09 10:17:43.765750 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-09 10:17:43.765821 | orchestrator |
2025-04-09 10:17:43.765838 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-04-09 10:17:43.765853 | orchestrator | Wednesday 09 April 2025 10:15:14 +0000 (0:00:01.629) 0:00:08.610 *******
2025-04-09 10:17:43.765872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-09 10:17:43.765893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-09 10:17:43.765917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-09 10:17:43.765933 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-09 10:17:43.765949 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-09 10:17:43.765965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-09 10:17:43.765989 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-09 10:17:43.766006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766082 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766124 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766139 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766178 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766222 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766283 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766316 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766330 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766344 | orchestrator |
2025-04-09 10:17:43.766359 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-04-09 10:17:43.766373 | orchestrator | Wednesday 09 April 2025 10:15:19 +0000 (0:00:04.784) 0:00:13.395 *******
2025-04-09 10:17:43.766409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-09 10:17:43.766426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766463 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:17:43.766479 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-09 10:17:43.766494 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766510 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766524 | orchestrator | skipping: [testbed-manager]
2025-04-09 10:17:43.766539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-09 10:17:43.766555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766625 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:17:43.766640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-09 10:17:43.766663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.766693 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:17:43.766708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-09 10:17:43.766723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.766738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.766753 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:17:43.766769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-09 10:17:43.766808 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-04-09 10:17:43.766824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.766846 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:17:43.766861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-09 10:17:43.766876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.766891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.766906 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:17:43.766929 | orchestrator | 2025-04-09 10:17:43.766945 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-04-09 10:17:43.766960 | orchestrator | Wednesday 09 April 2025 10:15:22 +0000 (0:00:02.282) 0:00:15.677 ******* 2025-04-09 10:17:43.766975 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-09 10:17:43.766991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-09 10:17:43.767020 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.767062 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.767080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.767096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.767111 | orchestrator | skipping: [testbed-manager] 2025-04-09 
10:17:43.767126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-09 10:17:43.767142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.767157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.767173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-09 10:17:43.767196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43 | INFO  | Wait 1 second(s) until the next check  2025-04-09 10:17:43.767288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.767306 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:17:43.767321 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:17:43.767335 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:17:43.767349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-09 10:17:43.767365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.767379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.767393 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:17:43.767414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-09 10:17:43.767429 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.767459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.767474 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:17:43.767489 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-09 10:17:43.767503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.767518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.767532 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:17:43.767546 | orchestrator | 2025-04-09 10:17:43.767560 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-04-09 10:17:43.767575 | orchestrator | Wednesday 09 April 2025 10:15:24 +0000 (0:00:02.475) 0:00:18.153 ******* 2025-04-09 10:17:43.767589 | orchestrator | skipping: [testbed-manager] 2025-04-09 10:17:43.767603 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:17:43.767617 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:17:43.767631 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:17:43.767645 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:17:43.767658 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:17:43.767672 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:17:43.767686 | orchestrator | 2025-04-09 10:17:43.767700 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-04-09 10:17:43.767714 | orchestrator | Wednesday 09 April 2025 10:15:26 +0000 (0:00:01.732) 0:00:19.885 ******* 2025-04-09 10:17:43.767728 | orchestrator | skipping: [testbed-manager] 2025-04-09 
10:17:43.767742 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:17:43.767755 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:17:43.767769 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:17:43.767783 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:17:43.767797 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:17:43.767811 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:17:43.767825 | orchestrator | 2025-04-09 10:17:43.767839 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-04-09 10:17:43.767853 | orchestrator | Wednesday 09 April 2025 10:15:28 +0000 (0:00:01.950) 0:00:21.836 ******* 2025-04-09 10:17:43.767867 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:17:43.767881 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:17:43.767902 | orchestrator | changed: [testbed-node-4] 2025-04-09 10:17:43.767915 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:17:43.767929 | orchestrator | changed: [testbed-node-5] 2025-04-09 10:17:43.767943 | orchestrator | changed: [testbed-node-3] 2025-04-09 10:17:43.767957 | orchestrator | changed: [testbed-manager] 2025-04-09 10:17:43.767971 | orchestrator | 2025-04-09 10:17:43.767985 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-04-09 10:17:43.767999 | orchestrator | Wednesday 09 April 2025 10:16:10 +0000 (0:00:42.265) 0:01:04.101 ******* 2025-04-09 10:17:43.768013 | orchestrator | ok: [testbed-manager] 2025-04-09 10:17:43.768027 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:17:43.768040 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:17:43.768054 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:17:43.768068 | orchestrator | ok: [testbed-node-3] 2025-04-09 10:17:43.768082 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:17:43.768095 | orchestrator | ok: [testbed-node-5] 2025-04-09 10:17:43.768109 | orchestrator | 2025-04-09 10:17:43.768123 
| orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-04-09 10:17:43.768137 | orchestrator | Wednesday 09 April 2025 10:16:13 +0000 (0:00:03.001) 0:01:07.103 ******* 2025-04-09 10:17:43.768151 | orchestrator | ok: [testbed-manager] 2025-04-09 10:17:43.768164 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:17:43.768183 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:17:43.768197 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:17:43.768211 | orchestrator | ok: [testbed-node-3] 2025-04-09 10:17:43.768238 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:17:43.768267 | orchestrator | ok: [testbed-node-5] 2025-04-09 10:17:43.768281 | orchestrator | 2025-04-09 10:17:43.768296 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-04-09 10:17:43.768310 | orchestrator | Wednesday 09 April 2025 10:16:15 +0000 (0:00:02.048) 0:01:09.152 ******* 2025-04-09 10:17:43.768324 | orchestrator | skipping: [testbed-manager] 2025-04-09 10:17:43.768339 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:17:43.768360 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:17:43.768374 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:17:43.768388 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:17:43.768402 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:17:43.768416 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:17:43.768430 | orchestrator | 2025-04-09 10:17:43.768444 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-04-09 10:17:43.768458 | orchestrator | Wednesday 09 April 2025 10:16:16 +0000 (0:00:01.452) 0:01:10.605 ******* 2025-04-09 10:17:43.768473 | orchestrator | skipping: [testbed-manager] 2025-04-09 10:17:43.768486 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:17:43.768500 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:17:43.768514 | orchestrator | 
skipping: [testbed-node-2] 2025-04-09 10:17:43.768527 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:17:43.768541 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:17:43.768555 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:17:43.768569 | orchestrator | 2025-04-09 10:17:43.768583 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-04-09 10:17:43.768597 | orchestrator | Wednesday 09 April 2025 10:16:17 +0000 (0:00:00.860) 0:01:11.466 ******* 2025-04-09 10:17:43.768611 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.768627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.768648 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.768663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.768683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.768698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.768720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.768736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.768750 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.768776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.768800 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.768815 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.768830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.768851 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.768866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.768886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.768901 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.768922 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.768937 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.768951 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.768966 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.768980 | orchestrator | 2025-04-09 10:17:43.768994 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-04-09 10:17:43.769008 | orchestrator | Wednesday 09 April 2025 10:16:23 +0000 (0:00:05.247) 0:01:16.713 ******* 2025-04-09 10:17:43.769022 | orchestrator | [WARNING]: Skipped 2025-04-09 10:17:43.769037 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-04-09 10:17:43.769051 | orchestrator | to this access issue: 2025-04-09 10:17:43.769065 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-04-09 10:17:43.769079 | orchestrator | directory 2025-04-09 10:17:43.769093 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-09 10:17:43.769107 | orchestrator | 2025-04-09 10:17:43.769121 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-04-09 10:17:43.769135 | orchestrator | Wednesday 09 April 2025 10:16:23 +0000 (0:00:00.846) 0:01:17.560 ******* 2025-04-09 10:17:43.769155 | orchestrator | [WARNING]: Skipped 2025-04-09 10:17:43.769170 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-04-09 10:17:43.769184 | orchestrator | to this access issue: 2025-04-09 10:17:43.769198 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-04-09 10:17:43.769212 | orchestrator | directory 2025-04-09 10:17:43.769226 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-09 10:17:43.769258 | orchestrator | 2025-04-09 10:17:43.769273 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-04-09 10:17:43.769294 | orchestrator | Wednesday 09 April 2025 
10:16:24 +0000 (0:00:00.458) 0:01:18.019 ******* 2025-04-09 10:17:43.769308 | orchestrator | [WARNING]: Skipped 2025-04-09 10:17:43.769322 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-04-09 10:17:43.769336 | orchestrator | to this access issue: 2025-04-09 10:17:43.769350 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-04-09 10:17:43.769364 | orchestrator | directory 2025-04-09 10:17:43.769378 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-09 10:17:43.769392 | orchestrator | 2025-04-09 10:17:43.769406 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-04-09 10:17:43.769420 | orchestrator | Wednesday 09 April 2025 10:16:24 +0000 (0:00:00.521) 0:01:18.541 ******* 2025-04-09 10:17:43.769434 | orchestrator | [WARNING]: Skipped 2025-04-09 10:17:43.769448 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-04-09 10:17:43.769462 | orchestrator | to this access issue: 2025-04-09 10:17:43.769476 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-04-09 10:17:43.769490 | orchestrator | directory 2025-04-09 10:17:43.769504 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-09 10:17:43.769518 | orchestrator | 2025-04-09 10:17:43.769532 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-04-09 10:17:43.769546 | orchestrator | Wednesday 09 April 2025 10:16:25 +0000 (0:00:00.624) 0:01:19.165 ******* 2025-04-09 10:17:43.769560 | orchestrator | changed: [testbed-manager] 2025-04-09 10:17:43.769574 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:17:43.769587 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:17:43.769601 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:17:43.769615 | orchestrator | changed: [testbed-node-3] 2025-04-09 
10:17:43.769629 | orchestrator | changed: [testbed-node-4] 2025-04-09 10:17:43.769643 | orchestrator | changed: [testbed-node-5] 2025-04-09 10:17:43.769657 | orchestrator | 2025-04-09 10:17:43.769671 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-04-09 10:17:43.769685 | orchestrator | Wednesday 09 April 2025 10:16:29 +0000 (0:00:04.088) 0:01:23.253 ******* 2025-04-09 10:17:43.769699 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-09 10:17:43.769714 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-09 10:17:43.769728 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-09 10:17:43.769742 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-09 10:17:43.769756 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-09 10:17:43.769770 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-09 10:17:43.769784 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-09 10:17:43.769798 | orchestrator | 2025-04-09 10:17:43.769812 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-04-09 10:17:43.769827 | orchestrator | Wednesday 09 April 2025 10:16:32 +0000 (0:00:02.562) 0:01:25.816 ******* 2025-04-09 10:17:43.769841 | orchestrator | changed: [testbed-manager] 2025-04-09 10:17:43.769855 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:17:43.769869 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:17:43.769883 | orchestrator | changed: [testbed-node-1] 2025-04-09 
10:17:43.769897 | orchestrator | changed: [testbed-node-3] 2025-04-09 10:17:43.769911 | orchestrator | changed: [testbed-node-4] 2025-04-09 10:17:43.769925 | orchestrator | changed: [testbed-node-5] 2025-04-09 10:17:43.769939 | orchestrator | 2025-04-09 10:17:43.769953 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-04-09 10:17:43.769974 | orchestrator | Wednesday 09 April 2025 10:16:34 +0000 (0:00:02.194) 0:01:28.011 ******* 2025-04-09 10:17:43.769989 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.770011 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.770063 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.770079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.770094 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.770109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.770123 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.770150 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.770164 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.770190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.770205 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.770220 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.770258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.770275 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.770290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.770315 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.770381 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.770404 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:17:43.770420 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.770434 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.770448 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.770463 | orchestrator | 2025-04-09 10:17:43.770477 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-04-09 10:17:43.770491 | orchestrator | Wednesday 09 April 2025 10:16:36 +0000 (0:00:02.118) 0:01:30.130 ******* 2025-04-09 10:17:43.770505 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-09 10:17:43.770519 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-09 10:17:43.770533 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-09 10:17:43.770547 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-09 10:17:43.770569 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-09 10:17:43.770583 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-09 10:17:43.770597 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-09 10:17:43.770611 | orchestrator | 2025-04-09 10:17:43.770625 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-04-09 10:17:43.770644 | orchestrator | Wednesday 09 April 2025 10:16:38 +0000 (0:00:02.362) 0:01:32.492 ******* 2025-04-09 10:17:43.770659 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-09 10:17:43.770673 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-09 10:17:43.770687 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-09 10:17:43.770701 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-09 10:17:43.770715 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-09 10:17:43.770729 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-09 10:17:43.770743 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-09 10:17:43.770757 | orchestrator | 2025-04-09 10:17:43.770771 | orchestrator | TASK [common : Check common containers] **************************************** 2025-04-09 10:17:43.770785 | orchestrator | Wednesday 09 April 2025 10:16:41 +0000 (0:00:02.271) 0:01:34.764 ******* 2025-04-09 10:17:43.770799 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.770821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.770837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.770852 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.770874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.770888 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.770903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:17:43.770918 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-09 10:17:43.770939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.770954 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.770969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.770983 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-04-09 10:17:43.771004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.771019 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.771034 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.771049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.771069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.771085 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.771099 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.771124 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.771139 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:17:43.771153 | orchestrator |
2025-04-09 10:17:43.771168 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-04-09 10:17:43.771182 | orchestrator | Wednesday 09 April 2025 10:16:44 +0000 (0:00:03.816) 0:01:38.580 *******
2025-04-09 10:17:43.771196 | orchestrator | changed: [testbed-manager]
2025-04-09 10:17:43.771210 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:17:43.771224 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:17:43.771238 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:17:43.771277 | orchestrator | changed: [testbed-node-3]
2025-04-09 10:17:43.771291 | orchestrator | changed: [testbed-node-5]
2025-04-09 10:17:43.771305 | orchestrator | changed: [testbed-node-4]
2025-04-09 10:17:43.771319 | orchestrator |
2025-04-09 10:17:43.771333 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-04-09 10:17:43.771347 | orchestrator | Wednesday 09 April 2025 10:16:46 +0000 (0:00:02.037) 0:01:40.618 *******
2025-04-09 10:17:43.771361 | orchestrator | changed: [testbed-manager]
2025-04-09 10:17:43.771375 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:17:43.771389 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:17:43.771409 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:17:43.771422 | orchestrator | changed: [testbed-node-3]
2025-04-09 10:17:43.771436 | orchestrator | changed: [testbed-node-4]
2025-04-09 10:17:43.771450 | orchestrator | changed: [testbed-node-5]
2025-04-09 10:17:43.771464 | orchestrator |
2025-04-09 10:17:43.771478 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-09 10:17:43.771492 | orchestrator | Wednesday 09 April 2025 10:16:48 +0000 (0:00:01.284) 0:01:41.902 *******
2025-04-09 10:17:43.771506 | orchestrator |
2025-04-09 10:17:43.771520 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-09 10:17:43.771534 | orchestrator | Wednesday 09 April 2025 10:16:48 +0000 (0:00:00.248) 0:01:42.151 *******
2025-04-09 10:17:43.771548 | orchestrator |
2025-04-09 10:17:43.771562 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-09 10:17:43.771576 | orchestrator | Wednesday 09 April 2025 10:16:48 +0000 (0:00:00.052) 0:01:42.203 *******
2025-04-09 10:17:43.771590 | orchestrator |
2025-04-09 10:17:43.771604 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-09 10:17:43.771617 | orchestrator | Wednesday 09 April 2025 10:16:48 +0000 (0:00:00.053) 0:01:42.257 *******
2025-04-09 10:17:43.771631 | orchestrator |
2025-04-09 10:17:43.771645 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-09 10:17:43.771659 | orchestrator | Wednesday 09 April 2025 10:16:48 +0000 (0:00:00.067) 0:01:42.324 *******
2025-04-09 10:17:43.771672 | orchestrator |
2025-04-09 10:17:43.771686 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-09 10:17:43.771700 | orchestrator | Wednesday 09 April 2025 10:16:48 +0000 (0:00:00.267) 0:01:42.592 *******
2025-04-09 10:17:43.771714 | orchestrator |
2025-04-09 10:17:43.771728 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-04-09 10:17:43.771741 | orchestrator | Wednesday 09 April 2025 10:16:49 +0000 (0:00:00.065) 0:01:42.657 *******
2025-04-09 10:17:43.771784 | orchestrator |
2025-04-09 10:17:43.771805 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-04-09 10:17:43.771820 | orchestrator | Wednesday 09 April 2025 10:16:49 +0000 (0:00:00.072) 0:01:42.730 *******
2025-04-09 10:17:43.771834 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:17:43.771848 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:17:43.771862 | orchestrator | changed: [testbed-manager]
2025-04-09 10:17:43.771876 | orchestrator | changed: [testbed-node-4]
2025-04-09 10:17:43.771890 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:17:43.771904 | orchestrator | changed: [testbed-node-3]
2025-04-09 10:17:43.771918 | orchestrator | changed: [testbed-node-5]
2025-04-09 10:17:43.771932 | orchestrator |
2025-04-09 10:17:43.771946 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-04-09 10:17:43.771960 | orchestrator | Wednesday 09 April 2025 10:16:58 +0000 (0:00:09.428) 0:01:52.158 *******
2025-04-09 10:17:43.771974 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:17:43.771988 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:17:43.772002 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:17:43.772016 | orchestrator | changed: [testbed-node-4]
2025-04-09 10:17:43.772029 | orchestrator | changed: [testbed-manager]
2025-04-09 10:17:43.772043 | orchestrator | changed: [testbed-node-5]
2025-04-09 10:17:43.772057 | orchestrator | changed: [testbed-node-3]
2025-04-09 10:17:43.772071 | orchestrator |
2025-04-09 10:17:43.772090 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-04-09 10:17:43.772104 | orchestrator | Wednesday 09 April 2025 10:17:29 +0000 (0:00:31.245) 0:02:23.404 *******
2025-04-09 10:17:43.772118 | orchestrator | ok: [testbed-manager]
2025-04-09 10:17:43.772132 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:17:43.772146 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:17:43.772160 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:17:43.772174 | orchestrator | ok: [testbed-node-3]
2025-04-09 10:17:43.772188 | orchestrator | ok: [testbed-node-5]
2025-04-09 10:17:43.772202 | orchestrator | ok: [testbed-node-4]
2025-04-09 10:17:43.772215 | orchestrator |
2025-04-09 10:17:43.772230 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-04-09 10:17:43.772261 | orchestrator | Wednesday 09 April 2025 10:17:32 +0000 (0:00:02.479) 0:02:25.883 *******
2025-04-09 10:17:43.772276 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:17:43.772291 | orchestrator | changed: [testbed-manager]
2025-04-09 10:17:43.772304 | orchestrator | changed: [testbed-node-5]
2025-04-09 10:17:43.772318 | orchestrator | changed: [testbed-node-4]
2025-04-09 10:17:43.772332 | orchestrator | changed: [testbed-node-3]
2025-04-09 10:17:43.772347 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:17:43.772360 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:17:43.772375 | orchestrator |
2025-04-09 10:17:43.772389 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 10:17:43.772403 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-09 10:17:43.772418 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-09 10:17:43.772432 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-09 10:17:43.772446 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-09 10:17:43.772460 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-09 10:17:43.772474 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-09 10:17:43.772495 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-09 10:17:43.772509 | orchestrator |
2025-04-09 10:17:43.772523 | orchestrator |
2025-04-09 10:17:43.772536 | orchestrator | TASKS RECAP ********************************************************************
2025-04-09 10:17:43.772550 | orchestrator | Wednesday 09 April 2025 10:17:41 +0000 (0:00:09.323) 0:02:35.207 *******
2025-04-09 10:17:43.772564 | orchestrator | ===============================================================================
2025-04-09 10:17:43.772578 | orchestrator | common : Ensure fluentd image is present for label check --------------- 42.27s
2025-04-09 10:17:43.772592 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 31.25s
2025-04-09 10:17:43.772606 | orchestrator | common : Restart fluentd container -------------------------------------- 9.43s
2025-04-09 10:17:43.772620 | orchestrator | common : Restart cron container ----------------------------------------- 9.32s
2025-04-09 10:17:43.772634 | orchestrator | common : Copying over config.json files for services -------------------- 5.25s
2025-04-09 10:17:43.772648 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.78s
2025-04-09 10:17:43.772662 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.74s
2025-04-09 10:17:43.772675 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 4.09s
2025-04-09 10:17:43.772689 | orchestrator | common : Check common containers ---------------------------------------- 3.82s
2025-04-09 10:17:43.772703 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 3.00s
2025-04-09 10:17:43.772717 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.56s
2025-04-09 10:17:43.772731 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.48s
2025-04-09 10:17:43.772749 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.48s
2025-04-09 10:17:46.821645 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.36s
2025-04-09 10:17:46.821761 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.28s
2025-04-09 10:17:46.821780 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.27s
2025-04-09 10:17:46.821795 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.19s
2025-04-09 10:17:46.821809 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.12s
2025-04-09 10:17:46.821824 | orchestrator | common : Set fluentd facts ---------------------------------------------- 2.05s
2025-04-09 10:17:46.821838 | orchestrator | common : Creating log volume -------------------------------------------- 2.04s
2025-04-09 10:17:46.821870 | orchestrator | 2025-04-09 10:17:46 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:46.822540 | orchestrator | 2025-04-09 10:17:46 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:17:46.823924 | orchestrator | 2025-04-09 10:17:46 | INFO  | Task 90245d5f-61d1-4d9d-8147-1fddd6c102b1 is in state STARTED
2025-04-09 10:17:46.826300 | orchestrator | 2025-04-09 10:17:46 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:46.827052 | orchestrator | 2025-04-09 10:17:46 | INFO  | Task 4c0d271d-1a17-4f9b-b9a2-aaec1a0923ae is in state STARTED
2025-04-09 10:17:46.827886 | orchestrator | 2025-04-09 10:17:46 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:17:46.828491 | orchestrator | 2025-04-09 10:17:46 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:49.877869 | orchestrator | 2025-04-09 10:17:49 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:49.883895 | orchestrator | 2025-04-09 10:17:49 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:17:49.883982 | orchestrator | 2025-04-09 10:17:49 | INFO  | Task 90245d5f-61d1-4d9d-8147-1fddd6c102b1 is in state STARTED
2025-04-09 10:17:49.884010 | orchestrator | 2025-04-09 10:17:49 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:52.956632 | orchestrator | 2025-04-09 10:17:49 | INFO  | Task 4c0d271d-1a17-4f9b-b9a2-aaec1a0923ae is in state STARTED
2025-04-09 10:17:52.956746 | orchestrator | 2025-04-09 10:17:49 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:17:52.956765 | orchestrator | 2025-04-09 10:17:49 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:52.956798 | orchestrator | 2025-04-09 10:17:52 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:52.961198 | orchestrator | 2025-04-09 10:17:52 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:17:52.961228 | orchestrator | 2025-04-09 10:17:52 | INFO  | Task 90245d5f-61d1-4d9d-8147-1fddd6c102b1 is in state STARTED
2025-04-09 10:17:52.961267 | orchestrator | 2025-04-09 10:17:52 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:52.961284 | orchestrator | 2025-04-09 10:17:52 | INFO  | Task 4c0d271d-1a17-4f9b-b9a2-aaec1a0923ae is in state STARTED
2025-04-09 10:17:52.961324 | orchestrator | 2025-04-09 10:17:52 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:17:56.030988 | orchestrator | 2025-04-09 10:17:52 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:56.031129 | orchestrator | 2025-04-09 10:17:56 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:56.033605 | orchestrator | 2025-04-09 10:17:56 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:17:56.036857 | orchestrator | 2025-04-09 10:17:56 | INFO  | Task 90245d5f-61d1-4d9d-8147-1fddd6c102b1 is in state STARTED
2025-04-09 10:17:56.039015 | orchestrator | 2025-04-09 10:17:56 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:56.039855 | orchestrator | 2025-04-09 10:17:56 | INFO  | Task 4c0d271d-1a17-4f9b-b9a2-aaec1a0923ae is in state STARTED
2025-04-09 10:17:56.041366 | orchestrator | 2025-04-09 10:17:56 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:17:59.102013 | orchestrator | 2025-04-09 10:17:56 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:17:59.102201 | orchestrator | 2025-04-09 10:17:59 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:17:59.104136 | orchestrator | 2025-04-09 10:17:59 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:17:59.105887 | orchestrator | 2025-04-09 10:17:59 | INFO  | Task 90245d5f-61d1-4d9d-8147-1fddd6c102b1 is in state STARTED
2025-04-09 10:17:59.106467 | orchestrator | 2025-04-09 10:17:59 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:17:59.106501 | orchestrator | 2025-04-09 10:17:59 | INFO  | Task 4c0d271d-1a17-4f9b-b9a2-aaec1a0923ae is in state STARTED
2025-04-09 10:17:59.107276 | orchestrator | 2025-04-09 10:17:59 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:18:02.164881 | orchestrator | 2025-04-09 10:17:59 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:18:02.165014 | orchestrator | 2025-04-09 10:18:02 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:18:02.166675 | orchestrator | 2025-04-09 10:18:02 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:18:02.166816 | orchestrator | 2025-04-09 10:18:02 | INFO  | Task 90245d5f-61d1-4d9d-8147-1fddd6c102b1 is in state STARTED
2025-04-09 10:18:02.167940 | orchestrator | 2025-04-09 10:18:02 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:18:02.171608 | orchestrator | 2025-04-09 10:18:02 | INFO  | Task 4c0d271d-1a17-4f9b-b9a2-aaec1a0923ae is in state STARTED
2025-04-09 10:18:02.172580 | orchestrator | 2025-04-09 10:18:02 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:18:05.229687 | orchestrator | 2025-04-09 10:18:02 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:18:05.229808 | orchestrator | 2025-04-09 10:18:05 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:18:05.232433 | orchestrator | 2025-04-09 10:18:05 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:18:05.232466 | orchestrator | 2025-04-09 10:18:05 | INFO  | Task 90245d5f-61d1-4d9d-8147-1fddd6c102b1 is in state STARTED
2025-04-09 10:18:05.234344 | orchestrator | 2025-04-09 10:18:05 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:18:05.239782 | orchestrator | 2025-04-09 10:18:05 | INFO  | Task 4c0d271d-1a17-4f9b-b9a2-aaec1a0923ae is in state STARTED
2025-04-09 10:18:08.313228 | orchestrator | 2025-04-09 10:18:05 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:18:08.313389 | orchestrator | 2025-04-09 10:18:05 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:18:08.313425 | orchestrator | 2025-04-09 10:18:08 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED
2025-04-09 10:18:08.314505 | orchestrator | 2025-04-09 10:18:08 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:18:08.317870 | orchestrator | 2025-04-09 10:18:08 | INFO  | Task
a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:18:08.319449 | orchestrator | 2025-04-09 10:18:08 | INFO  | Task 90245d5f-61d1-4d9d-8147-1fddd6c102b1 is in state STARTED
2025-04-09 10:18:08.321989 | orchestrator | 2025-04-09 10:18:08 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:18:08.325818 | orchestrator | 2025-04-09 10:18:08 | INFO  | Task 4c0d271d-1a17-4f9b-b9a2-aaec1a0923ae is in state SUCCESS
2025-04-09 10:18:08.329190 | orchestrator | 2025-04-09 10:18:08 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:18:11.394887 | orchestrator | 2025-04-09 10:18:08 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:18:11.395079 | orchestrator | 2025-04-09 10:18:11 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED
2025-04-09 10:18:11.395167 | orchestrator | 2025-04-09 10:18:11 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:18:11.396056 | orchestrator | 2025-04-09 10:18:11 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:18:11.397319 | orchestrator | 2025-04-09 10:18:11 | INFO  | Task 90245d5f-61d1-4d9d-8147-1fddd6c102b1 is in state STARTED
2025-04-09 10:18:11.405468 | orchestrator | 2025-04-09 10:18:11 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:18:14.473367 | orchestrator | 2025-04-09 10:18:11 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:18:14.473530 | orchestrator | 2025-04-09 10:18:11 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:18:14.473569 | orchestrator | 2025-04-09 10:18:14 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED
2025-04-09 10:18:14.473682 | orchestrator | 2025-04-09 10:18:14 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:18:14.474443 | orchestrator | 2025-04-09 10:18:14 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:18:14.475314 | orchestrator | 2025-04-09 10:18:14 | INFO  | Task 90245d5f-61d1-4d9d-8147-1fddd6c102b1 is in state STARTED
2025-04-09 10:18:14.476079 | orchestrator | 2025-04-09 10:18:14 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:18:14.476845 | orchestrator | 2025-04-09 10:18:14 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:18:17.523678 | orchestrator | 2025-04-09 10:18:14 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:18:17.523793 | orchestrator | 2025-04-09 10:18:17 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED
2025-04-09 10:18:17.524095 | orchestrator | 2025-04-09 10:18:17 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:18:17.528055 | orchestrator | 2025-04-09 10:18:17 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:18:17.531351 | orchestrator | 2025-04-09 10:18:17 | INFO  | Task 90245d5f-61d1-4d9d-8147-1fddd6c102b1 is in state STARTED
2025-04-09 10:18:17.532129 | orchestrator | 2025-04-09 10:18:17 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:18:17.533795 | orchestrator | 2025-04-09 10:18:17 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:18:20.585471 | orchestrator | 2025-04-09 10:18:17 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:18:20.585593 | orchestrator | 2025-04-09 10:18:20 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED
2025-04-09 10:18:20.587198 | orchestrator | 2025-04-09 10:18:20 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:18:20.587229 | orchestrator | 2025-04-09 10:18:20 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:18:20.587785 | orchestrator | 2025-04-09 10:18:20 | INFO  | Task 90245d5f-61d1-4d9d-8147-1fddd6c102b1 is in state STARTED
2025-04-09 10:18:20.589245 | orchestrator | 2025-04-09 10:18:20 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:18:20.590142 | orchestrator | 2025-04-09 10:18:20 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:18:23.635603 | orchestrator | 2025-04-09 10:18:20 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:18:23.635727 | orchestrator | 2025-04-09 10:18:23 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED
2025-04-09 10:18:23.640742 | orchestrator | 2025-04-09 10:18:23 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:18:23.642109 | orchestrator | 2025-04-09 10:18:23 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:18:23.642392 | orchestrator | 2025-04-09 10:18:23 | INFO  | Task 90245d5f-61d1-4d9d-8147-1fddd6c102b1 is in state SUCCESS
2025-04-09 10:18:23.643777 | orchestrator |
2025-04-09 10:18:23.643812 | orchestrator |
2025-04-09 10:18:23.643827 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-09 10:18:23.643842 | orchestrator |
2025-04-09 10:18:23.643857 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-09 10:18:23.643871 | orchestrator | Wednesday 09 April 2025 10:17:48 +0000 (0:00:00.263) 0:00:00.263 *******
2025-04-09 10:18:23.643886 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:18:23.643902 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:18:23.643916 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:18:23.643953 | orchestrator |
2025-04-09 10:18:23.643968 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-09 10:18:23.643991 | orchestrator | Wednesday 09 April 2025 10:17:48 +0000 (0:00:00.608) 0:00:00.872 *******
2025-04-09 10:18:23.644006 |
orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-04-09 10:18:23.644021 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-04-09 10:18:23.644035 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-04-09 10:18:23.644049 | orchestrator |
2025-04-09 10:18:23.644064 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-04-09 10:18:23.644077 | orchestrator |
2025-04-09 10:18:23.644091 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-04-09 10:18:23.644105 | orchestrator | Wednesday 09 April 2025 10:17:49 +0000 (0:00:00.899) 0:00:01.771 *******
2025-04-09 10:18:23.644120 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-09 10:18:23.644135 | orchestrator |
2025-04-09 10:18:23.644149 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-04-09 10:18:23.644163 | orchestrator | Wednesday 09 April 2025 10:17:51 +0000 (0:00:01.496) 0:00:03.268 *******
2025-04-09 10:18:23.644177 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-04-09 10:18:23.644191 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-04-09 10:18:23.644205 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-04-09 10:18:23.644219 | orchestrator |
2025-04-09 10:18:23.644233 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-04-09 10:18:23.644246 | orchestrator | Wednesday 09 April 2025 10:17:52 +0000 (0:00:01.541) 0:00:04.809 *******
2025-04-09 10:18:23.644291 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-04-09 10:18:23.644306 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-04-09 10:18:23.644320 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-04-09 10:18:23.644334 | orchestrator |
2025-04-09 10:18:23.644348 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-04-09 10:18:23.644362 | orchestrator | Wednesday 09 April 2025 10:17:57 +0000 (0:00:04.778) 0:00:09.587 *******
2025-04-09 10:18:23.644376 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:18:23.644396 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:18:23.644413 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:18:23.644428 | orchestrator |
2025-04-09 10:18:23.644444 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-04-09 10:18:23.644460 | orchestrator | Wednesday 09 April 2025 10:18:01 +0000 (0:00:04.434) 0:00:14.021 *******
2025-04-09 10:18:23.644476 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:18:23.644492 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:18:23.644508 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:18:23.644524 | orchestrator |
2025-04-09 10:18:23.644539 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 10:18:23.644556 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 10:18:23.644574 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 10:18:23.644591 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 10:18:23.644607 | orchestrator |
2025-04-09 10:18:23.644623 | orchestrator |
2025-04-09 10:18:23.644638 | orchestrator | TASKS RECAP ********************************************************************
2025-04-09 10:18:23.644655 | orchestrator | Wednesday 09 April 2025 10:18:05 +0000 (0:00:03.480) 0:00:17.502 *******
2025-04-09 10:18:23.644670 | orchestrator | ===============================================================================
2025-04-09 10:18:23.644686 | orchestrator | memcached : Copying over config.json files for services ----------------- 4.78s
2025-04-09 10:18:23.644710 | orchestrator | memcached : Check memcached container ----------------------------------- 4.43s
2025-04-09 10:18:23.644726 | orchestrator | memcached : Restart memcached container --------------------------------- 3.48s
2025-04-09 10:18:23.644742 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.54s
2025-04-09 10:18:23.644759 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.50s
2025-04-09 10:18:23.644773 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.90s
2025-04-09 10:18:23.644787 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.61s
2025-04-09 10:18:23.644801 | orchestrator |
2025-04-09 10:18:23.644815 | orchestrator |
2025-04-09 10:18:23.644829 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-09 10:18:23.644843 | orchestrator |
2025-04-09 10:18:23.644857 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-09 10:18:23.644871 | orchestrator | Wednesday 09 April 2025 10:17:47 +0000 (0:00:00.217) 0:00:00.217 *******
2025-04-09 10:18:23.644885 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:18:23.644900 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:18:23.644914 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:18:23.644929 | orchestrator |
2025-04-09 10:18:23.644943 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-09 10:18:23.644967 | orchestrator | Wednesday 09 April 2025 10:17:48 +0000 (0:00:00.684) 0:00:00.902 *******
2025-04-09 10:18:23.644982 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-04-09 10:18:23.644996 | orchestrator | ok:
[testbed-node-2] => (item=enable_redis_True)
2025-04-09 10:18:23.645010 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-04-09 10:18:23.645024 | orchestrator |
2025-04-09 10:18:23.645038 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-04-09 10:18:23.645052 | orchestrator |
2025-04-09 10:18:23.645067 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-04-09 10:18:23.645081 | orchestrator | Wednesday 09 April 2025 10:17:49 +0000 (0:00:00.763) 0:00:01.665 *******
2025-04-09 10:18:23.645095 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-09 10:18:23.645109 | orchestrator |
2025-04-09 10:18:23.645122 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-04-09 10:18:23.645143 | orchestrator | Wednesday 09 April 2025 10:17:50 +0000 (0:00:01.333) 0:00:02.999 *******
2025-04-09 10:18:23.645159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-04-09 10:18:23.645180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-04-09 10:18:23.645195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-04-09 10:18:23.645217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-04-09 10:18:23.645232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-04-09 10:18:23.645275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-04-09 10:18:23.645292 | orchestrator |
2025-04-09 10:18:23.645307 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-04-09 10:18:23.645321 | orchestrator | Wednesday 09 April 2025 10:17:54 +0000 (0:00:03.788) 0:00:06.787 *******
2025-04-09 10:18:23.645336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-04-09 10:18:23.645351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled':
True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-09 10:18:23.645366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-09 10:18:23.645387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-09 10:18:23.645402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-09 10:18:23.645423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-09 10:18:23.645439 | orchestrator | 2025-04-09 10:18:23.645453 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-04-09 10:18:23.645467 | orchestrator | Wednesday 09 April 2025 10:17:58 +0000 (0:00:04.498) 0:00:11.285 ******* 2025-04-09 10:18:23.645482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-09 10:18:23.645497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-09 10:18:23.645511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-09 10:18:23.645533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-09 10:18:23.645549 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-09 10:18:23.645563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-09 10:18:23.645578 | orchestrator | 2025-04-09 10:18:23.645598 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-04-09 10:18:23.645612 | orchestrator | Wednesday 09 April 2025 10:18:03 +0000 (0:00:04.709) 0:00:15.995 ******* 2025-04-09 10:18:23.645627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-09 10:18:23.645642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-09 10:18:23.645657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-09 10:18:23.645678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-09 10:18:23.645693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-09 10:18:23.645708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-09 10:18:23.645722 | orchestrator | 2025-04-09 10:18:23.645736 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-09 10:18:23.645750 | orchestrator | Wednesday 09 April 2025 10:18:05 +0000 (0:00:02.343) 0:00:18.338 ******* 2025-04-09 10:18:23.645764 | orchestrator | 2025-04-09 
10:18:23.645778 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-09 10:18:23.645798 | orchestrator | Wednesday 09 April 2025 10:18:05 +0000 (0:00:00.091) 0:00:18.430 ******* 2025-04-09 10:18:23.646544 | orchestrator | 2025-04-09 10:18:23.646571 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-09 10:18:23.646584 | orchestrator | Wednesday 09 April 2025 10:18:06 +0000 (0:00:00.127) 0:00:18.558 ******* 2025-04-09 10:18:23.646597 | orchestrator | 2025-04-09 10:18:23.646609 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-04-09 10:18:23.646622 | orchestrator | Wednesday 09 April 2025 10:18:07 +0000 (0:00:01.086) 0:00:19.644 ******* 2025-04-09 10:18:23.646635 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:18:23.646648 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:18:23.646660 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:18:23.646672 | orchestrator | 2025-04-09 10:18:23.646685 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-04-09 10:18:23.646697 | orchestrator | Wednesday 09 April 2025 10:18:11 +0000 (0:00:04.224) 0:00:23.869 ******* 2025-04-09 10:18:23.646710 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:18:23.646732 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:18:23.646745 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:18:23.646757 | orchestrator | 2025-04-09 10:18:23.646770 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 10:18:23.646782 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:18:23.646795 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:18:23.646808 | orchestrator | testbed-node-2 
: ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:18:23.646821 | orchestrator | 2025-04-09 10:18:23.646833 | orchestrator | 2025-04-09 10:18:23.646846 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-09 10:18:23.646858 | orchestrator | Wednesday 09 April 2025 10:18:19 +0000 (0:00:08.326) 0:00:32.196 ******* 2025-04-09 10:18:23.646870 | orchestrator | =============================================================================== 2025-04-09 10:18:23.646883 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.33s 2025-04-09 10:18:23.646895 | orchestrator | redis : Copying over redis config files --------------------------------- 4.71s 2025-04-09 10:18:23.646907 | orchestrator | redis : Copying over default config.json files -------------------------- 4.50s 2025-04-09 10:18:23.646920 | orchestrator | redis : Restart redis container ----------------------------------------- 4.22s 2025-04-09 10:18:23.646932 | orchestrator | redis : Ensuring config directories exist ------------------------------- 3.79s 2025-04-09 10:18:23.646945 | orchestrator | redis : Check redis containers ------------------------------------------ 2.34s 2025-04-09 10:18:23.646957 | orchestrator | redis : include_tasks --------------------------------------------------- 1.33s 2025-04-09 10:18:23.646969 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.31s 2025-04-09 10:18:23.646982 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.76s 2025-04-09 10:18:23.646994 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.68s 2025-04-09 10:18:23.647007 | orchestrator | 2025-04-09 10:18:23 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:18:23.647031 | orchestrator | 2025-04-09 10:18:23 | INFO  | Task 
3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED 2025-04-09 10:18:26.711324 | orchestrator | 2025-04-09 10:18:23 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:18:26.711454 | orchestrator | 2025-04-09 10:18:26 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:18:26.714362 | orchestrator | 2025-04-09 10:18:26 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:18:26.716730 | orchestrator | 2025-04-09 10:18:26 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:18:26.718451 | orchestrator | 2025-04-09 10:18:26 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:18:26.719760 | orchestrator | 2025-04-09 10:18:26 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED 2025-04-09 10:18:29.761313 | orchestrator | 2025-04-09 10:18:26 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:18:29.761472 | orchestrator | 2025-04-09 10:18:29 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:18:29.762462 | orchestrator | 2025-04-09 10:18:29 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:18:29.765090 | orchestrator | 2025-04-09 10:18:29 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:18:29.769351 | orchestrator | 2025-04-09 10:18:29 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:18:29.771773 | orchestrator | 2025-04-09 10:18:29 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED 2025-04-09 10:18:29.772243 | orchestrator | 2025-04-09 10:18:29 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:18:32.835805 | orchestrator | 2025-04-09 10:18:32 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:18:35.893656 | orchestrator | 2025-04-09 10:18:32 | INFO  | Task 
af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:18:35.893762 | orchestrator | 2025-04-09 10:18:32 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:18:35.893792 | orchestrator | 2025-04-09 10:18:32 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:18:35.893805 | orchestrator | 2025-04-09 10:18:32 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED 2025-04-09 10:18:35.893817 | orchestrator | 2025-04-09 10:18:32 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:18:35.893845 | orchestrator | 2025-04-09 10:18:35 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:18:35.895474 | orchestrator | 2025-04-09 10:18:35 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:18:35.895505 | orchestrator | 2025-04-09 10:18:35 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:18:35.896746 | orchestrator | 2025-04-09 10:18:35 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:18:35.899041 | orchestrator | 2025-04-09 10:18:35 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED 2025-04-09 10:18:38.973394 | orchestrator | 2025-04-09 10:18:35 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:18:38.973525 | orchestrator | 2025-04-09 10:18:38 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:18:38.974191 | orchestrator | 2025-04-09 10:18:38 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:18:38.975245 | orchestrator | 2025-04-09 10:18:38 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:18:38.976629 | orchestrator | 2025-04-09 10:18:38 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:18:38.977452 | orchestrator | 2025-04-09 10:18:38 | INFO  | Task 
3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED 2025-04-09 10:18:38.977920 | orchestrator | 2025-04-09 10:18:38 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:18:42.049154 | orchestrator | 2025-04-09 10:18:42 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:18:42.055196 | orchestrator | 2025-04-09 10:18:42 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:18:42.063981 | orchestrator | 2025-04-09 10:18:42 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:18:42.064017 | orchestrator | 2025-04-09 10:18:42 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:18:45.148947 | orchestrator | 2025-04-09 10:18:42 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED 2025-04-09 10:18:45.149074 | orchestrator | 2025-04-09 10:18:42 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:18:45.149111 | orchestrator | 2025-04-09 10:18:45 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:18:45.149736 | orchestrator | 2025-04-09 10:18:45 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:18:45.155313 | orchestrator | 2025-04-09 10:18:45 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:18:45.157971 | orchestrator | 2025-04-09 10:18:45 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:18:45.159521 | orchestrator | 2025-04-09 10:18:45 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED 2025-04-09 10:18:45.159793 | orchestrator | 2025-04-09 10:18:45 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:18:48.209374 | orchestrator | 2025-04-09 10:18:48 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:18:48.210062 | orchestrator | 2025-04-09 10:18:48 | INFO  | Task 
af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:18:48.210111 | orchestrator | 2025-04-09 10:18:48 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:18:48.210792 | orchestrator | 2025-04-09 10:18:48 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:18:48.211570 | orchestrator | 2025-04-09 10:18:48 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED 2025-04-09 10:18:51.264521 | orchestrator | 2025-04-09 10:18:48 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:18:51.264678 | orchestrator | 2025-04-09 10:18:51 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:18:51.265100 | orchestrator | 2025-04-09 10:18:51 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:18:51.266469 | orchestrator | 2025-04-09 10:18:51 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:18:51.268018 | orchestrator | 2025-04-09 10:18:51 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:18:51.269315 | orchestrator | 2025-04-09 10:18:51 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED 2025-04-09 10:18:54.370918 | orchestrator | 2025-04-09 10:18:51 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:18:54.371047 | orchestrator | 2025-04-09 10:18:54 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:18:54.373610 | orchestrator | 2025-04-09 10:18:54 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:18:54.376043 | orchestrator | 2025-04-09 10:18:54 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:18:54.380437 | orchestrator | 2025-04-09 10:18:54 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:18:54.384061 | orchestrator | 2025-04-09 10:18:54 | INFO  | Task 
3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED 2025-04-09 10:18:57.435373 | orchestrator | 2025-04-09 10:18:54 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:18:57.435525 | orchestrator | 2025-04-09 10:18:57 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:18:57.437364 | orchestrator | 2025-04-09 10:18:57 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:18:57.439789 | orchestrator | 2025-04-09 10:18:57 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:18:57.442011 | orchestrator | 2025-04-09 10:18:57 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:18:57.443506 | orchestrator | 2025-04-09 10:18:57 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED 2025-04-09 10:18:57.443734 | orchestrator | 2025-04-09 10:18:57 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:19:00.486244 | orchestrator | 2025-04-09 10:19:00 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:19:00.488049 | orchestrator | 2025-04-09 10:19:00 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:19:00.489630 | orchestrator | 2025-04-09 10:19:00 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:19:00.491381 | orchestrator | 2025-04-09 10:19:00 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:19:00.492877 | orchestrator | 2025-04-09 10:19:00 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED 2025-04-09 10:19:03.546951 | orchestrator | 2025-04-09 10:19:00 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:19:03.547083 | orchestrator | 2025-04-09 10:19:03 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:19:03.547903 | orchestrator | 2025-04-09 10:19:03 | INFO  | Task 
af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:19:03.548416 | orchestrator | 2025-04-09 10:19:03 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:19:03.549490 | orchestrator | 2025-04-09 10:19:03 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:19:03.550359 | orchestrator | 2025-04-09 10:19:03 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:19:03.550696 | orchestrator | 2025-04-09 10:19:03 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:19:06.608427 | orchestrator | 2025-04-09 10:19:06 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED
2025-04-09 10:19:06.612011 | orchestrator | 2025-04-09 10:19:06 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:19:06.612510 | orchestrator | 2025-04-09 10:19:06 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:19:06.614326 | orchestrator | 2025-04-09 10:19:06 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:19:06.615159 | orchestrator | 2025-04-09 10:19:06 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:19:06.615306 | orchestrator | 2025-04-09 10:19:06 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:19:09.667759 | orchestrator | 2025-04-09 10:19:09 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED
2025-04-09 10:19:09.670870 | orchestrator | 2025-04-09 10:19:09 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:19:09.671511 | orchestrator | 2025-04-09 10:19:09 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:19:09.672345 | orchestrator | 2025-04-09 10:19:09 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:19:09.672823 | orchestrator | 2025-04-09 10:19:09 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:19:09.672920 | orchestrator | 2025-04-09 10:19:09 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:19:12.736574 | orchestrator | 2025-04-09 10:19:12 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED
2025-04-09 10:19:12.736907 | orchestrator | 2025-04-09 10:19:12 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:19:12.737489 | orchestrator | 2025-04-09 10:19:12 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:19:12.738132 | orchestrator | 2025-04-09 10:19:12 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:19:12.739321 | orchestrator | 2025-04-09 10:19:12 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:19:15.773885 | orchestrator | 2025-04-09 10:19:12 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:19:15.774109 | orchestrator | 2025-04-09 10:19:15 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED
2025-04-09 10:19:15.774664 | orchestrator | 2025-04-09 10:19:15 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:19:15.774701 | orchestrator | 2025-04-09 10:19:15 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:19:15.775366 | orchestrator | 2025-04-09 10:19:15 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:19:15.776020 | orchestrator | 2025-04-09 10:19:15 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:19:18.826991 | orchestrator | 2025-04-09 10:19:15 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:19:18.827144 | orchestrator | 2025-04-09 10:19:18 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED
2025-04-09 10:19:18.834115 | orchestrator | 2025-04-09 10:19:18 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:19:18.835312 | orchestrator | 2025-04-09 10:19:18 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:19:18.835342 | orchestrator | 2025-04-09 10:19:18 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:19:18.836489 | orchestrator | 2025-04-09 10:19:18 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:19:18.836860 | orchestrator | 2025-04-09 10:19:18 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:19:21.897022 | orchestrator | 2025-04-09 10:19:21 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED
2025-04-09 10:19:21.902967 | orchestrator | 2025-04-09 10:19:21 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:19:21.906775 | orchestrator | 2025-04-09 10:19:21 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:19:21.910413 | orchestrator | 2025-04-09 10:19:21 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:19:21.914457 | orchestrator | 2025-04-09 10:19:21 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state STARTED
2025-04-09 10:19:24.961352 | orchestrator | 2025-04-09 10:19:21 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:19:24.961501 | orchestrator | 2025-04-09 10:19:24 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED
2025-04-09 10:19:24.964447 | orchestrator | 2025-04-09 10:19:24 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED
2025-04-09 10:19:24.964897 | orchestrator | 2025-04-09 10:19:24 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:19:24.965915 | orchestrator | 2025-04-09 10:19:24 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:19:24.971830 | orchestrator |
2025-04-09 10:19:24.971870 | orchestrator |
2025-04-09 10:19:24.971885 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-09 10:19:24.971901 | orchestrator |
2025-04-09 10:19:24.971915 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-09 10:19:24.971930 | orchestrator | Wednesday 09 April 2025 10:17:47 +0000 (0:00:00.480) 0:00:00.480 *******
2025-04-09 10:19:24.971944 | orchestrator | ok: [testbed-node-3]
2025-04-09 10:19:24.971983 | orchestrator | ok: [testbed-node-4]
2025-04-09 10:19:24.971997 | orchestrator | ok: [testbed-node-5]
2025-04-09 10:19:24.972011 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:19:24.972047 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:19:24.972062 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:19:24.972076 | orchestrator |
2025-04-09 10:19:24.972092 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-09 10:19:24.972108 | orchestrator | Wednesday 09 April 2025 10:17:49 +0000 (0:00:01.433) 0:00:01.913 *******
2025-04-09 10:19:24.972122 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-04-09 10:19:24.972138 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-04-09 10:19:24.972153 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-04-09 10:19:24.972167 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-04-09 10:19:24.972182 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-04-09 10:19:24.972197 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-04-09 10:19:24.972211 | orchestrator |
2025-04-09 10:19:24.972226 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-04-09 10:19:24.972241 | orchestrator |
2025-04-09 10:19:24.972256 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-04-09 10:19:24.972296 | orchestrator | Wednesday 09 April 2025 10:17:51 +0000 (0:00:01.808) 0:00:03.721 *******
2025-04-09 10:19:24.972312 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-09 10:19:24.972328 | orchestrator |
2025-04-09 10:19:24.972343 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-04-09 10:19:24.972357 | orchestrator | Wednesday 09 April 2025 10:17:55 +0000 (0:00:04.533) 0:00:08.255 *******
2025-04-09 10:19:24.972371 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-04-09 10:19:24.972386 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-04-09 10:19:24.972400 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-04-09 10:19:24.972414 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-04-09 10:19:24.972430 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-04-09 10:19:24.972446 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-04-09 10:19:24.972462 | orchestrator |
2025-04-09 10:19:24.972478 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-04-09 10:19:24.972493 | orchestrator | Wednesday 09 April 2025 10:17:58 +0000 (0:00:03.132) 0:00:11.388 *******
2025-04-09 10:19:24.972509 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-04-09 10:19:24.972526 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-04-09 10:19:24.972541 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-04-09 10:19:24.972557 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-04-09 10:19:24.972573 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-04-09 10:19:24.972589 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-04-09 10:19:24.972604 | orchestrator |
2025-04-09 10:19:24.972620 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-04-09 10:19:24.972636 | orchestrator | Wednesday 09 April 2025 10:18:02 +0000 (0:00:03.871) 0:00:15.260 *******
2025-04-09 10:19:24.972652 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-04-09 10:19:24.972667 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:19:24.972684 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-04-09 10:19:24.972700 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:19:24.972716 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-04-09 10:19:24.972731 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:19:24.972747 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-04-09 10:19:24.972764 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:19:24.972786 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-04-09 10:19:24.972801 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:19:24.972815 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-04-09 10:19:24.972829 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:19:24.972843 | orchestrator |
2025-04-09 10:19:24.972857 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-04-09 10:19:24.972871 | orchestrator | Wednesday 09 April 2025 10:18:04 +0000 (0:00:02.225) 0:00:17.486 *******
2025-04-09 10:19:24.972885 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:19:24.972899 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:19:24.972913 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:19:24.972927 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:19:24.972941 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:19:24.972955 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:19:24.972969 | orchestrator |
2025-04-09 10:19:24.972983 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-04-09 10:19:24.972997 | orchestrator | Wednesday 09 April 2025 10:18:05 +0000 (0:00:00.918) 0:00:18.404 *******
2025-04-09 10:19:24.973025 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-09 10:19:24.973045 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-09 10:19:24.973061 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-09 10:19:24.973076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-09 10:19:24.973103 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-09 10:19:24.973118 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-09 10:19:24.973140 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-09 10:19:24.973156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-09 10:19:24.973170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-09 10:19:24.973191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-09 10:19:24.973212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-09 10:19:24.973233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-09 10:19:24.973248 | orchestrator |
2025-04-09 10:19:24.973285 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-04-09 10:19:24.973300 | orchestrator | Wednesday 09 April 2025 10:18:09 +0000 (0:00:03.700) 0:00:22.104 *******
2025-04-09 10:19:24.973315 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-09 10:19:24.973330 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-09 10:19:24.973350 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-09 10:19:24.973372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-09 10:19:24.973387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-09 10:19:24.973421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-09 10:19:24.973437 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-09 10:19:24.973482 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-09 10:19:24.973497 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-09 10:19:24.973519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-09 10:19:24.973534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-09 10:19:24.973567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-09 10:19:24.973583 | orchestrator |
2025-04-09 10:19:24.973597 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ******
2025-04-09 10:19:24.973612 | orchestrator | Wednesday 09 April 2025 10:18:12 +0000 (0:00:03.362) 0:00:25.467 *******
2025-04-09 10:19:24.973626 | orchestrator | changed: [testbed-node-3]
2025-04-09 10:19:24.973640 | orchestrator | changed: [testbed-node-5]
2025-04-09 10:19:24.973655 | orchestrator | changed: [testbed-node-4]
2025-04-09 10:19:24.973669 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:19:24.973683 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:19:24.973697 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:19:24.973711 | orchestrator |
2025-04-09 10:19:24.973725 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] ***
2025-04-09 10:19:24.973739 | orchestrator | Wednesday 09 April 2025 10:18:15 +0000 (0:00:02.667) 0:00:28.134 *******
2025-04-09 10:19:24.973753 | orchestrator | changed: [testbed-node-3]
2025-04-09 10:19:24.973767 | orchestrator | changed: [testbed-node-4]
2025-04-09 10:19:24.973781 | orchestrator | changed: [testbed-node-5]
2025-04-09 10:19:24.973794 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:19:24.973809 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:19:24.973823 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:19:24.973837 | orchestrator |
2025-04-09 10:19:24.973851 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-04-09 10:19:24.973872 | orchestrator | Wednesday 09 April 2025 10:18:18 +0000 (0:00:02.652) 0:00:30.786 *******
2025-04-09 10:19:24.973886 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:19:24.973900 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:19:24.973914 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:19:24.973928 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:19:24.973942 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:19:24.973956 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:19:24.973970 | orchestrator |
2025-04-09 10:19:24.973995 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-04-09 10:19:24.974009 | orchestrator | Wednesday 09 April 2025 10:18:20 +0000 (0:00:02.467) 0:00:33.254 *******
2025-04-09 10:19:24.974078 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-09 10:19:24.974094 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-09 10:19:24.974109 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-09 10:19:24.974132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-09 10:19:24.974148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-09 10:19:24.974169 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-09 10:19:24.974196 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-09 10:19:24.974211 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-09 10:19:24.974226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-09 10:19:24.974248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-09 10:19:24.974291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-04-09 10:19:24.974326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-04-09 10:19:24.974341 | orchestrator |
2025-04-09 10:19:24.974356 | orchestrator | TASK [openvswitch : Flush
Handlers] ********************************************
2025-04-09 10:19:24.974370 | orchestrator | Wednesday 09 April 2025 10:18:24 +0000 (0:00:04.123) 0:00:37.377 *******
2025-04-09 10:19:24.974384 | orchestrator |
2025-04-09 10:19:24.974399 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-04-09 10:19:24.974413 | orchestrator | Wednesday 09 April 2025 10:18:25 +0000 (0:00:00.354) 0:00:37.732 *******
2025-04-09 10:19:24.974427 | orchestrator |
2025-04-09 10:19:24.974441 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-04-09 10:19:24.974455 | orchestrator | Wednesday 09 April 2025 10:18:25 +0000 (0:00:00.734) 0:00:38.466 *******
2025-04-09 10:19:24.974469 | orchestrator |
2025-04-09 10:19:24.974483 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-04-09 10:19:24.974497 | orchestrator | Wednesday 09 April 2025 10:18:26 +0000 (0:00:00.124) 0:00:38.591 *******
2025-04-09 10:19:24.974512 | orchestrator |
2025-04-09 10:19:24.974526 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-04-09 10:19:24.974540 | orchestrator | Wednesday 09 April 2025 10:18:26 +0000 (0:00:00.423) 0:00:39.014 *******
2025-04-09 10:19:24.974554 | orchestrator |
2025-04-09 10:19:24.974568 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-04-09 10:19:24.974582 | orchestrator | Wednesday 09 April 2025 10:18:26 +0000 (0:00:00.116) 0:00:39.130 *******
2025-04-09 10:19:24.974596 | orchestrator |
2025-04-09 10:19:24.974610 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-04-09 10:19:24.974624 | orchestrator | Wednesday 09 April 2025 10:18:26 +0000 (0:00:00.425) 0:00:39.556 *******
2025-04-09 10:19:24.974638 | orchestrator | changed: [testbed-node-3]
2025-04-09 10:19:24.974652 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:19:24.974666 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:19:24.974681 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:19:24.974695 | orchestrator | changed: [testbed-node-4]
2025-04-09 10:19:24.974709 | orchestrator | changed: [testbed-node-5]
2025-04-09 10:19:24.974722 | orchestrator |
2025-04-09 10:19:24.974736 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-04-09 10:19:24.974751 | orchestrator | Wednesday 09 April 2025 10:18:38 +0000 (0:00:11.743) 0:00:51.299 *******
2025-04-09 10:19:24.974765 | orchestrator | ok: [testbed-node-3]
2025-04-09 10:19:24.974779 | orchestrator | ok: [testbed-node-4]
2025-04-09 10:19:24.974793 | orchestrator | ok: [testbed-node-5]
2025-04-09 10:19:24.974807 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:19:24.974821 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:19:24.974835 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:19:24.974855 | orchestrator |
2025-04-09 10:19:24.974869 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-04-09 10:19:24.974883 | orchestrator | Wednesday 09 April 2025 10:18:41 +0000 (0:00:03.097) 0:00:54.396 *******
2025-04-09 10:19:24.974897 | orchestrator | changed: [testbed-node-4]
2025-04-09 10:19:24.974912 | orchestrator | changed: [testbed-node-5]
2025-04-09 10:19:24.974926 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:19:24.974940 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:19:24.974954 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:19:24.974977 | orchestrator | changed: [testbed-node-3]
2025-04-09 10:19:24.974993 | orchestrator |
2025-04-09 10:19:24.975014 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-04-09 10:19:24.975030 | orchestrator | Wednesday 09 April 2025 10:18:51 +0000 (0:00:09.686) 0:01:04.083 *******
2025-04-09
10:19:24.975044 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-04-09 10:19:24.975059 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-04-09 10:19:24.975073 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-04-09 10:19:24.975087 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-04-09 10:19:24.975102 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-04-09 10:19:24.975116 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-04-09 10:19:24.975129 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-04-09 10:19:24.975144 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-04-09 10:19:24.975158 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-04-09 10:19:24.975172 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-04-09 10:19:24.975186 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-04-09 10:19:24.975201 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-04-09 10:19:24.975215 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-09 10:19:24.975229 | 
orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-09 10:19:24.975243 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-09 10:19:24.975257 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-09 10:19:24.975294 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-09 10:19:24.975310 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-09 10:19:24.975324 | orchestrator | 2025-04-09 10:19:24.975338 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-04-09 10:19:24.975353 | orchestrator | Wednesday 09 April 2025 10:19:00 +0000 (0:00:08.691) 0:01:12.775 ******* 2025-04-09 10:19:24.975367 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-04-09 10:19:24.975382 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:19:24.975396 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-04-09 10:19:24.975410 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:19:24.975435 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-04-09 10:19:24.975450 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:19:24.975464 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-04-09 10:19:24.975478 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-04-09 10:19:24.975492 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-04-09 10:19:24.975506 | orchestrator | 2025-04-09 10:19:24.975520 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-04-09 10:19:24.975534 | orchestrator | Wednesday 09 April 
2025 10:19:03 +0000 (0:00:03.201) 0:01:15.976 *******
2025-04-09 10:19:24.975548 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-04-09 10:19:24.975562 | orchestrator | skipping: [testbed-node-3]
2025-04-09 10:19:24.975577 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-04-09 10:19:24.975591 | orchestrator | skipping: [testbed-node-4]
2025-04-09 10:19:24.975605 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-04-09 10:19:24.975619 | orchestrator | skipping: [testbed-node-5]
2025-04-09 10:19:24.975633 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-04-09 10:19:24.975647 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-04-09 10:19:24.975661 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-04-09 10:19:24.975675 | orchestrator |
2025-04-09 10:19:24.975689 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-04-09 10:19:24.975703 | orchestrator | Wednesday 09 April 2025 10:19:08 +0000 (0:00:05.327) 0:01:21.304 *******
2025-04-09 10:19:24.975717 | orchestrator | changed: [testbed-node-3]
2025-04-09 10:19:24.975732 | orchestrator | changed: [testbed-node-5]
2025-04-09 10:19:24.975746 | orchestrator | changed: [testbed-node-4]
2025-04-09 10:19:24.975759 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:19:24.975774 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:19:24.975787 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:19:24.975801 | orchestrator |
2025-04-09 10:19:24.975816 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 10:19:24.975836 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-09 10:19:28.039412 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-09 10:19:28.039520 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-09 10:19:28.039539 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-09 10:19:28.039554 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-09 10:19:28.039587 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-09 10:19:28.039602 | orchestrator |
2025-04-09 10:19:28.039617 | orchestrator |
2025-04-09 10:19:28.039632 | orchestrator | TASKS RECAP ********************************************************************
2025-04-09 10:19:28.039647 | orchestrator | Wednesday 09 April 2025 10:19:20 +0000 (0:00:12.094) 0:01:33.399 *******
2025-04-09 10:19:28.039662 | orchestrator | ===============================================================================
2025-04-09 10:19:28.039681 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 21.78s
2025-04-09 10:19:28.039695 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.74s
2025-04-09 10:19:28.039709 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.69s
2025-04-09 10:19:28.039748 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 5.33s
2025-04-09 10:19:28.039762 | orchestrator | openvswitch : include_tasks --------------------------------------------- 4.53s
2025-04-09 10:19:28.039827 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 4.12s
2025-04-09 10:19:28.039843 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.87s
2025-04-09 10:19:28.039857 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.70s
2025-04-09 10:19:28.039871 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.36s
2025-04-09 10:19:28.039885 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.20s
2025-04-09 10:19:28.039899 | orchestrator | module-load : Load modules ---------------------------------------------- 3.13s
2025-04-09 10:19:28.039913 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 3.09s
2025-04-09 10:19:28.039929 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 2.67s
2025-04-09 10:19:28.039946 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 2.65s
2025-04-09 10:19:28.039962 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.47s
2025-04-09 10:19:28.039978 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.23s
2025-04-09 10:19:28.039993 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.18s
2025-04-09 10:19:28.040009 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.81s
2025-04-09 10:19:28.040025 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.43s
2025-04-09 10:19:28.040040 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.92s
2025-04-09 10:19:28.040055 | orchestrator | 2025-04-09 10:19:24 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED
2025-04-09 10:19:28.040072 | orchestrator | 2025-04-09 10:19:24 | INFO  | Task 3be45949-4eef-421f-bc79-4bd7e8d15cad is in state SUCCESS
2025-04-09 10:19:28.040088 | orchestrator | 2025-04-09 10:19:24 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:19:28.040121 | orchestrator | 2025-04-09 10:19:28 | INFO  | Task 
c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:19:28.040210 | orchestrator | 2025-04-09 10:19:28 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:19:28.040235 | orchestrator | 2025-04-09 10:19:28 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:19:28.040879 | orchestrator | 2025-04-09 10:19:28 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:19:28.041950 | orchestrator | 2025-04-09 10:19:28 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:19:31.105706 | orchestrator | 2025-04-09 10:19:28 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:19:31.105838 | orchestrator | 2025-04-09 10:19:31 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:19:31.107777 | orchestrator | 2025-04-09 10:19:31 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:19:31.109466 | orchestrator | 2025-04-09 10:19:31 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:19:31.111043 | orchestrator | 2025-04-09 10:19:31 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:19:31.111678 | orchestrator | 2025-04-09 10:19:31 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:19:34.156386 | orchestrator | 2025-04-09 10:19:31 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:19:34.156519 | orchestrator | 2025-04-09 10:19:34 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:19:34.157103 | orchestrator | 2025-04-09 10:19:34 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:19:34.157139 | orchestrator | 2025-04-09 10:19:34 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:19:34.157511 | orchestrator | 2025-04-09 10:19:34 | INFO  | Task 
a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:19:34.158304 | orchestrator | 2025-04-09 10:19:34 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:19:37.200024 | orchestrator | 2025-04-09 10:19:34 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:19:37.200115 | orchestrator | 2025-04-09 10:19:37 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:19:37.202230 | orchestrator | 2025-04-09 10:19:37 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:19:37.204397 | orchestrator | 2025-04-09 10:19:37 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:19:37.211367 | orchestrator | 2025-04-09 10:19:37 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:19:37.211379 | orchestrator | 2025-04-09 10:19:37 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:19:40.252097 | orchestrator | 2025-04-09 10:19:37 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:19:40.252232 | orchestrator | 2025-04-09 10:19:40 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:19:40.256997 | orchestrator | 2025-04-09 10:19:40 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:19:40.260087 | orchestrator | 2025-04-09 10:19:40 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:19:40.261423 | orchestrator | 2025-04-09 10:19:40 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:19:40.262522 | orchestrator | 2025-04-09 10:19:40 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state STARTED 2025-04-09 10:19:43.305005 | orchestrator | 2025-04-09 10:19:40 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:19:43.305142 | orchestrator | 2025-04-09 10:19:43 | INFO  | Task 
c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:19:43.309564 | orchestrator | 2025-04-09 10:19:43 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:19:43.309617 | orchestrator | 2025-04-09 10:19:43 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:19:43.309642 | orchestrator | 2025-04-09 10:19:43.309657 | orchestrator | 2025-04-09 10:19:43.309671 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-04-09 10:19:43.309686 | orchestrator | 2025-04-09 10:19:43.309699 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-04-09 10:19:43.309713 | orchestrator | Wednesday 09 April 2025 10:15:08 +0000 (0:00:00.164) 0:00:00.164 ******* 2025-04-09 10:19:43.309727 | orchestrator | ok: [testbed-node-3] 2025-04-09 10:19:43.309741 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:19:43.309755 | orchestrator | ok: [testbed-node-5] 2025-04-09 10:19:43.309768 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:19:43.309781 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:19:43.309794 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:19:43.309807 | orchestrator | 2025-04-09 10:19:43.309820 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-04-09 10:19:43.309851 | orchestrator | Wednesday 09 April 2025 10:15:09 +0000 (0:00:00.824) 0:00:00.989 ******* 2025-04-09 10:19:43.309864 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:19:43.309898 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:19:43.309911 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:19:43.309924 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.309936 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.309948 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.309961 | orchestrator | 2025-04-09 
10:19:43.309973 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-04-09 10:19:43.309986 | orchestrator | Wednesday 09 April 2025 10:15:09 +0000 (0:00:00.819) 0:00:01.809 ******* 2025-04-09 10:19:43.309998 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:19:43.310011 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:19:43.310081 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:19:43.310094 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.310107 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.310122 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.310136 | orchestrator | 2025-04-09 10:19:43.310150 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-04-09 10:19:43.310164 | orchestrator | Wednesday 09 April 2025 10:15:10 +0000 (0:00:00.881) 0:00:02.690 ******* 2025-04-09 10:19:43.310179 | orchestrator | changed: [testbed-node-3] 2025-04-09 10:19:43.310192 | orchestrator | changed: [testbed-node-4] 2025-04-09 10:19:43.310206 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:19:43.310220 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:19:43.310234 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:19:43.310248 | orchestrator | changed: [testbed-node-5] 2025-04-09 10:19:43.310262 | orchestrator | 2025-04-09 10:19:43.310298 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-04-09 10:19:43.310313 | orchestrator | Wednesday 09 April 2025 10:15:13 +0000 (0:00:02.581) 0:00:05.272 ******* 2025-04-09 10:19:43.310327 | orchestrator | changed: [testbed-node-3] 2025-04-09 10:19:43.310341 | orchestrator | changed: [testbed-node-4] 2025-04-09 10:19:43.310355 | orchestrator | changed: [testbed-node-5] 2025-04-09 10:19:43.310368 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:19:43.310382 | orchestrator | changed: [testbed-node-1] 
2025-04-09 10:19:43.310396 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:19:43.310409 | orchestrator | 2025-04-09 10:19:43.310423 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-04-09 10:19:43.310446 | orchestrator | Wednesday 09 April 2025 10:15:14 +0000 (0:00:01.320) 0:00:06.593 ******* 2025-04-09 10:19:43.310461 | orchestrator | changed: [testbed-node-3] 2025-04-09 10:19:43.310475 | orchestrator | changed: [testbed-node-4] 2025-04-09 10:19:43.310487 | orchestrator | changed: [testbed-node-5] 2025-04-09 10:19:43.310500 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:19:43.310512 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:19:43.310525 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:19:43.310537 | orchestrator | 2025-04-09 10:19:43.310550 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-04-09 10:19:43.310562 | orchestrator | Wednesday 09 April 2025 10:15:16 +0000 (0:00:01.309) 0:00:07.902 ******* 2025-04-09 10:19:43.310575 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:19:43.310592 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:19:43.310604 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:19:43.310616 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.310629 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.310642 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.310654 | orchestrator | 2025-04-09 10:19:43.310666 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-04-09 10:19:43.310679 | orchestrator | Wednesday 09 April 2025 10:15:16 +0000 (0:00:00.840) 0:00:08.743 ******* 2025-04-09 10:19:43.310691 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:19:43.310704 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:19:43.310716 | orchestrator | skipping: [testbed-node-5] 2025-04-09 
10:19:43.310729 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.310748 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.310760 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.310773 | orchestrator | 2025-04-09 10:19:43.310785 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-04-09 10:19:43.310798 | orchestrator | Wednesday 09 April 2025 10:15:17 +0000 (0:00:00.698) 0:00:09.441 ******* 2025-04-09 10:19:43.310810 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-09 10:19:43.310823 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-09 10:19:43.310835 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:19:43.310848 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-09 10:19:43.310860 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-09 10:19:43.310873 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:19:43.310886 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-09 10:19:43.310898 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-09 10:19:43.310910 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:19:43.310923 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-09 10:19:43.310945 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-09 10:19:43.310959 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.310972 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-09 10:19:43.310985 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-09 10:19:43.310997 | orchestrator | 
skipping: [testbed-node-1] 2025-04-09 10:19:43.311010 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-09 10:19:43.311022 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-09 10:19:43.311035 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.311047 | orchestrator | 2025-04-09 10:19:43.311060 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-04-09 10:19:43.311072 | orchestrator | Wednesday 09 April 2025 10:15:18 +0000 (0:00:00.881) 0:00:10.323 ******* 2025-04-09 10:19:43.311085 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:19:43.311097 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:19:43.311110 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:19:43.311122 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.311134 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.311146 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.311159 | orchestrator | 2025-04-09 10:19:43.311171 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-04-09 10:19:43.311185 | orchestrator | Wednesday 09 April 2025 10:15:19 +0000 (0:00:01.487) 0:00:11.810 ******* 2025-04-09 10:19:43.311198 | orchestrator | ok: [testbed-node-3] 2025-04-09 10:19:43.311210 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:19:43.311223 | orchestrator | ok: [testbed-node-5] 2025-04-09 10:19:43.311235 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:19:43.311248 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:19:43.311260 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:19:43.311286 | orchestrator | 2025-04-09 10:19:43.311300 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-04-09 10:19:43.311313 | orchestrator | Wednesday 09 April 2025 10:15:20 
+0000 (0:00:00.917) 0:00:12.728 ******* 2025-04-09 10:19:43.311325 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:19:43.311338 | orchestrator | changed: [testbed-node-4] 2025-04-09 10:19:43.311350 | orchestrator | changed: [testbed-node-3] 2025-04-09 10:19:43.311362 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:19:43.311375 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:19:43.311387 | orchestrator | changed: [testbed-node-5] 2025-04-09 10:19:43.311406 | orchestrator | 2025-04-09 10:19:43.311419 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-04-09 10:19:43.311432 | orchestrator | Wednesday 09 April 2025 10:15:27 +0000 (0:00:06.478) 0:00:19.206 ******* 2025-04-09 10:19:43.311444 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:19:43.311457 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:19:43.311469 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:19:43.311482 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.311494 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.311507 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.311519 | orchestrator | 2025-04-09 10:19:43.311532 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-04-09 10:19:43.311544 | orchestrator | Wednesday 09 April 2025 10:15:28 +0000 (0:00:01.210) 0:00:20.417 ******* 2025-04-09 10:19:43.311557 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:19:43.311569 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:19:43.311582 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:19:43.311594 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.311607 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.311619 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.311705 | orchestrator | 2025-04-09 10:19:43.311719 | orchestrator | TASK [k3s_custom_registries 
: Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-04-09 10:19:43.311733 | orchestrator | Wednesday 09 April 2025 10:15:30 +0000 (0:00:01.523) 0:00:21.940 ******* 2025-04-09 10:19:43.311746 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:19:43.311758 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:19:43.311771 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:19:43.311783 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.311800 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.311813 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.311825 | orchestrator | 2025-04-09 10:19:43.311838 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-04-09 10:19:43.311850 | orchestrator | Wednesday 09 April 2025 10:15:30 +0000 (0:00:00.477) 0:00:22.418 ******* 2025-04-09 10:19:43.311863 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-04-09 10:19:43.311880 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-04-09 10:19:43.311892 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:19:43.311905 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-04-09 10:19:43.311917 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-04-09 10:19:43.311930 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:19:43.311943 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-04-09 10:19:43.311955 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-04-09 10:19:43.311968 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:19:43.311980 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-04-09 10:19:43.311993 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-04-09 10:19:43.312005 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.312018 | orchestrator | 
skipping: [testbed-node-1] => (item=rancher)  2025-04-09 10:19:43.312030 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-04-09 10:19:43.312043 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.312055 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-04-09 10:19:43.312068 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-04-09 10:19:43.312080 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.312093 | orchestrator | 2025-04-09 10:19:43.312106 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-04-09 10:19:43.312125 | orchestrator | Wednesday 09 April 2025 10:15:31 +0000 (0:00:00.934) 0:00:23.353 ******* 2025-04-09 10:19:43.312138 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:19:43.312151 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:19:43.312169 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:19:43.312182 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.312194 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.312207 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.312219 | orchestrator | 2025-04-09 10:19:43.312232 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-04-09 10:19:43.312244 | orchestrator | 2025-04-09 10:19:43.312257 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-04-09 10:19:43.312331 | orchestrator | Wednesday 09 April 2025 10:15:33 +0000 (0:00:01.895) 0:00:25.248 ******* 2025-04-09 10:19:43.312346 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:19:43.312359 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:19:43.312372 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:19:43.312384 | orchestrator | 2025-04-09 10:19:43.312397 | orchestrator | TASK [k3s_server : Stop k3s-init] 
********************************************** 2025-04-09 10:19:43.312410 | orchestrator | Wednesday 09 April 2025 10:15:35 +0000 (0:00:02.012) 0:00:27.260 ******* 2025-04-09 10:19:43.312422 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:19:43.312435 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:19:43.312447 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:19:43.312459 | orchestrator | 2025-04-09 10:19:43.312472 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-04-09 10:19:43.312484 | orchestrator | Wednesday 09 April 2025 10:15:37 +0000 (0:00:01.730) 0:00:28.991 ******* 2025-04-09 10:19:43.312497 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:19:43.312509 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:19:43.312522 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:19:43.312534 | orchestrator | 2025-04-09 10:19:43.312547 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-04-09 10:19:43.312560 | orchestrator | Wednesday 09 April 2025 10:15:38 +0000 (0:00:01.724) 0:00:30.715 ******* 2025-04-09 10:19:43.312572 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:19:43.312584 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:19:43.312597 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:19:43.312609 | orchestrator | 2025-04-09 10:19:43.312622 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-04-09 10:19:43.312635 | orchestrator | Wednesday 09 April 2025 10:15:39 +0000 (0:00:00.978) 0:00:31.693 ******* 2025-04-09 10:19:43.312648 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.312660 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.312673 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.312686 | orchestrator | 2025-04-09 10:19:43.312698 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-04-09 
10:19:43.312711 | orchestrator | Wednesday 09 April 2025 10:15:40 +0000 (0:00:00.617) 0:00:32.311 ******* 2025-04-09 10:19:43.312724 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:19:43.312737 | orchestrator | 2025-04-09 10:19:43.312749 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-04-09 10:19:43.312762 | orchestrator | Wednesday 09 April 2025 10:15:42 +0000 (0:00:01.507) 0:00:33.818 ******* 2025-04-09 10:19:43.312774 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:19:43.312787 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:19:43.312799 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:19:43.312812 | orchestrator | 2025-04-09 10:19:43.312825 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-04-09 10:19:43.312837 | orchestrator | Wednesday 09 April 2025 10:15:45 +0000 (0:00:03.753) 0:00:37.571 ******* 2025-04-09 10:19:43.312850 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.312863 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.312875 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:19:43.312888 | orchestrator | 2025-04-09 10:19:43.312900 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-04-09 10:19:43.312913 | orchestrator | Wednesday 09 April 2025 10:15:47 +0000 (0:00:01.477) 0:00:39.049 ******* 2025-04-09 10:19:43.312932 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.312945 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.312958 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:19:43.312970 | orchestrator | 2025-04-09 10:19:43.312982 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-04-09 10:19:43.312995 | orchestrator | Wednesday 09 April 2025 10:15:48 +0000 (0:00:00.976) 
0:00:40.026 ******* 2025-04-09 10:19:43.313008 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.313020 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.313033 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:19:43.313045 | orchestrator | 2025-04-09 10:19:43.313058 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-04-09 10:19:43.313070 | orchestrator | Wednesday 09 April 2025 10:15:51 +0000 (0:00:02.826) 0:00:42.853 ******* 2025-04-09 10:19:43.313083 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.313095 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.313108 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.313120 | orchestrator | 2025-04-09 10:19:43.313133 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-04-09 10:19:43.313145 | orchestrator | Wednesday 09 April 2025 10:15:51 +0000 (0:00:00.571) 0:00:43.424 ******* 2025-04-09 10:19:43.313158 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.313170 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.313183 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.313195 | orchestrator | 2025-04-09 10:19:43.313208 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-04-09 10:19:43.313221 | orchestrator | Wednesday 09 April 2025 10:15:52 +0000 (0:00:00.523) 0:00:43.947 ******* 2025-04-09 10:19:43.313233 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:19:43.313246 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:19:43.313258 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:19:43.313286 | orchestrator | 2025-04-09 10:19:43.313299 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-04-09 10:19:43.313313 | orchestrator | Wednesday 09 April 2025 10:15:54 +0000 
(0:00:02.010) 0:00:45.957 ******* 2025-04-09 10:19:43.313336 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-04-09 10:19:43.313350 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-04-09 10:19:43.313363 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-04-09 10:19:43.313376 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-04-09 10:19:43.313388 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-04-09 10:19:43.313401 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-04-09 10:19:43.313413 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-04-09 10:19:43.313426 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-04-09 10:19:43.313438 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-04-09 10:19:43.313451 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-04-09 10:19:43.313469 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2025-04-09 10:19:43.313488 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-04-09 10:19:43.313500 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-04-09 10:19:43.313513 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-04-09 10:19:43.313526 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-04-09 10:19:43.313538 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:19:43.313555 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:19:43.313568 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:19:43.313580 | orchestrator | 2025-04-09 10:19:43.313593 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-04-09 10:19:43.313606 | orchestrator | Wednesday 09 April 2025 10:16:49 +0000 (0:00:55.731) 0:01:41.689 ******* 2025-04-09 10:19:43.313618 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.313631 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.313643 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.313656 | orchestrator | 2025-04-09 10:19:43.313672 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-04-09 10:19:43.313685 | orchestrator | Wednesday 09 April 2025 10:16:50 +0000 (0:00:00.424) 0:01:42.114 ******* 2025-04-09 10:19:43.313698 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:19:43.313710 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:19:43.313723 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:19:43.313735 | orchestrator | 2025-04-09 10:19:43.313748 | orchestrator | TASK 
[k3s_server : Copy K3s service file] ************************************** 2025-04-09 10:19:43.313761 | orchestrator | Wednesday 09 April 2025 10:16:51 +0000 (0:00:01.130) 0:01:43.245 ******* 2025-04-09 10:19:43.313773 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:19:43.313786 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:19:43.313798 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:19:43.313811 | orchestrator | 2025-04-09 10:19:43.313823 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-04-09 10:19:43.313836 | orchestrator | Wednesday 09 April 2025 10:16:52 +0000 (0:00:01.552) 0:01:44.797 ******* 2025-04-09 10:19:43.313849 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:19:43.313861 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:19:43.313874 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:19:43.313886 | orchestrator | 2025-04-09 10:19:43.313899 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-04-09 10:19:43.313911 | orchestrator | Wednesday 09 April 2025 10:17:07 +0000 (0:00:14.744) 0:01:59.541 ******* 2025-04-09 10:19:43.313924 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:19:43.313936 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:19:43.313948 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:19:43.313961 | orchestrator | 2025-04-09 10:19:43.313973 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-04-09 10:19:43.313986 | orchestrator | Wednesday 09 April 2025 10:17:08 +0000 (0:00:00.794) 0:02:00.335 ******* 2025-04-09 10:19:43.313998 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:19:43.314011 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:19:43.314060 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:19:43.314073 | orchestrator | 2025-04-09 10:19:43.314086 | orchestrator | TASK [k3s_server : Change file access node-token] 
****************************** 2025-04-09 10:19:43.314098 | orchestrator | Wednesday 09 April 2025 10:17:09 +0000 (0:00:00.718) 0:02:01.054 ******* 2025-04-09 10:19:43.314111 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:19:43.314123 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:19:43.314136 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:19:43.314154 | orchestrator | 2025-04-09 10:19:43.314173 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-04-09 10:19:43.314186 | orchestrator | Wednesday 09 April 2025 10:17:09 +0000 (0:00:00.625) 0:02:01.679 ******* 2025-04-09 10:19:43.314198 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:19:43.314211 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:19:43.314223 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:19:43.314236 | orchestrator | 2025-04-09 10:19:43.314248 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-04-09 10:19:43.314261 | orchestrator | Wednesday 09 April 2025 10:17:10 +0000 (0:00:01.042) 0:02:02.722 ******* 2025-04-09 10:19:43.314289 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:19:43.314302 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:19:43.314314 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:19:43.314327 | orchestrator | 2025-04-09 10:19:43.314339 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-04-09 10:19:43.314352 | orchestrator | Wednesday 09 April 2025 10:17:11 +0000 (0:00:00.315) 0:02:03.037 ******* 2025-04-09 10:19:43.314365 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:19:43.314377 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:19:43.314390 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:19:43.314402 | orchestrator | 2025-04-09 10:19:43.314415 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-04-09 
10:19:43.314428 | orchestrator | Wednesday 09 April 2025 10:17:11 +0000 (0:00:00.627) 0:02:03.664 ******* 2025-04-09 10:19:43.314440 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:19:43.314453 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:19:43.314465 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:19:43.314477 | orchestrator | 2025-04-09 10:19:43.314490 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-04-09 10:19:43.314503 | orchestrator | Wednesday 09 April 2025 10:17:12 +0000 (0:00:00.679) 0:02:04.344 ******* 2025-04-09 10:19:43.314515 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:19:43.314528 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:19:43.314540 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:19:43.314553 | orchestrator | 2025-04-09 10:19:43.314565 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-04-09 10:19:43.314578 | orchestrator | Wednesday 09 April 2025 10:17:13 +0000 (0:00:01.238) 0:02:05.582 ******* 2025-04-09 10:19:43.314591 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:19:43.314603 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:19:43.314616 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:19:43.314628 | orchestrator | 2025-04-09 10:19:43.314640 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-04-09 10:19:43.314653 | orchestrator | Wednesday 09 April 2025 10:17:14 +0000 (0:00:00.872) 0:02:06.455 ******* 2025-04-09 10:19:43.314666 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.314678 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.314690 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.314703 | orchestrator | 2025-04-09 10:19:43.314715 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-04-09 
10:19:43.314727 | orchestrator | Wednesday 09 April 2025 10:17:14 +0000 (0:00:00.290) 0:02:06.745 ******* 2025-04-09 10:19:43.314740 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.314753 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.314765 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.314778 | orchestrator | 2025-04-09 10:19:43.314790 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-04-09 10:19:43.314803 | orchestrator | Wednesday 09 April 2025 10:17:15 +0000 (0:00:00.286) 0:02:07.032 ******* 2025-04-09 10:19:43.314815 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:19:43.314828 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:19:43.314840 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:19:43.314852 | orchestrator | 2025-04-09 10:19:43.314865 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-04-09 10:19:43.314883 | orchestrator | Wednesday 09 April 2025 10:17:16 +0000 (0:00:01.229) 0:02:08.262 ******* 2025-04-09 10:19:43.314896 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:19:43.314909 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:19:43.314924 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:19:43.314938 | orchestrator | 2025-04-09 10:19:43.314951 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-04-09 10:19:43.314968 | orchestrator | Wednesday 09 April 2025 10:17:17 +0000 (0:00:00.681) 0:02:08.943 ******* 2025-04-09 10:19:43.314982 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-04-09 10:19:43.314995 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-04-09 10:19:43.315008 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-04-09 10:19:43.315024 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-04-09 10:19:43.315038 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-04-09 10:19:43.315050 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-04-09 10:19:43.315063 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-04-09 10:19:43.315076 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-04-09 10:19:43.315088 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-04-09 10:19:43.315101 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-04-09 10:19:43.315113 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-04-09 10:19:43.315126 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-04-09 10:19:43.315144 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-04-09 10:19:43.315157 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-04-09 10:19:43.315169 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-04-09 10:19:43.315181 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-04-09 10:19:43.315194 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-04-09 10:19:43.315207 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-04-09 10:19:43.315219 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-04-09 10:19:43.315232 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-04-09 10:19:43.315244 | orchestrator | 2025-04-09 10:19:43.315257 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-04-09 10:19:43.315335 | orchestrator | 2025-04-09 10:19:43.315349 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-04-09 10:19:43.315362 | orchestrator | Wednesday 09 April 2025 10:17:20 +0000 (0:00:03.120) 0:02:12.064 ******* 2025-04-09 10:19:43.315375 | orchestrator | ok: [testbed-node-3] 2025-04-09 10:19:43.315388 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:19:43.315401 | orchestrator | ok: [testbed-node-5] 2025-04-09 10:19:43.315413 | orchestrator | 2025-04-09 10:19:43.315426 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-04-09 10:19:43.315439 | orchestrator | Wednesday 09 April 2025 10:17:20 +0000 (0:00:00.577) 0:02:12.642 ******* 2025-04-09 10:19:43.315451 | orchestrator | ok: [testbed-node-3] 2025-04-09 10:19:43.315470 | orchestrator | ok: [testbed-node-5] 2025-04-09 10:19:43.315483 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:19:43.315496 | orchestrator | 2025-04-09 10:19:43.315508 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-04-09 10:19:43.315521 | orchestrator | Wednesday 09 April 2025 10:17:21 +0000 (0:00:00.648) 0:02:13.290 ******* 2025-04-09 10:19:43.315533 | orchestrator | ok: [testbed-node-3] 2025-04-09 10:19:43.315546 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:19:43.315558 | orchestrator | ok: [testbed-node-5] 2025-04-09 10:19:43.315571 | orchestrator | 2025-04-09 
10:19:43.315583 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-04-09 10:19:43.315596 | orchestrator | Wednesday 09 April 2025 10:17:21 +0000 (0:00:00.358) 0:02:13.649 ******* 2025-04-09 10:19:43.315608 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 10:19:43.315621 | orchestrator | 2025-04-09 10:19:43.315633 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-04-09 10:19:43.315646 | orchestrator | Wednesday 09 April 2025 10:17:22 +0000 (0:00:00.774) 0:02:14.424 ******* 2025-04-09 10:19:43.315658 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:19:43.315671 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:19:43.315684 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:19:43.315696 | orchestrator | 2025-04-09 10:19:43.315708 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-04-09 10:19:43.315721 | orchestrator | Wednesday 09 April 2025 10:17:22 +0000 (0:00:00.362) 0:02:14.787 ******* 2025-04-09 10:19:43.315733 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:19:43.315746 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:19:43.315758 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:19:43.315771 | orchestrator | 2025-04-09 10:19:43.315783 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-04-09 10:19:43.315796 | orchestrator | Wednesday 09 April 2025 10:17:23 +0000 (0:00:00.305) 0:02:15.092 ******* 2025-04-09 10:19:43.315809 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:19:43.315821 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:19:43.315833 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:19:43.315846 | orchestrator | 2025-04-09 10:19:43.315858 | orchestrator | TASK [k3s_agent : Configure the 
k3s service] *********************************** 2025-04-09 10:19:43.315871 | orchestrator | Wednesday 09 April 2025 10:17:23 +0000 (0:00:00.326) 0:02:15.419 ******* 2025-04-09 10:19:43.315883 | orchestrator | changed: [testbed-node-3] 2025-04-09 10:19:43.315896 | orchestrator | changed: [testbed-node-4] 2025-04-09 10:19:43.315908 | orchestrator | changed: [testbed-node-5] 2025-04-09 10:19:43.315920 | orchestrator | 2025-04-09 10:19:43.315933 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-04-09 10:19:43.315945 | orchestrator | Wednesday 09 April 2025 10:17:25 +0000 (0:00:01.625) 0:02:17.044 ******* 2025-04-09 10:19:43.315958 | orchestrator | changed: [testbed-node-3] 2025-04-09 10:19:43.315970 | orchestrator | changed: [testbed-node-5] 2025-04-09 10:19:43.315983 | orchestrator | changed: [testbed-node-4] 2025-04-09 10:19:43.315995 | orchestrator | 2025-04-09 10:19:43.316008 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-04-09 10:19:43.316020 | orchestrator | 2025-04-09 10:19:43.316033 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-04-09 10:19:43.316045 | orchestrator | Wednesday 09 April 2025 10:17:34 +0000 (0:00:08.867) 0:02:25.911 ******* 2025-04-09 10:19:43.316057 | orchestrator | ok: [testbed-manager] 2025-04-09 10:19:43.316070 | orchestrator | 2025-04-09 10:19:43.316082 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-04-09 10:19:43.316095 | orchestrator | Wednesday 09 April 2025 10:17:34 +0000 (0:00:00.786) 0:02:26.698 ******* 2025-04-09 10:19:43.316107 | orchestrator | changed: [testbed-manager] 2025-04-09 10:19:43.316120 | orchestrator | 2025-04-09 10:19:43.316132 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-04-09 10:19:43.316155 | orchestrator | Wednesday 09 April 2025 10:17:35 
+0000 (0:00:00.486) 0:02:27.184 ******* 2025-04-09 10:19:43.316167 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-04-09 10:19:43.316180 | orchestrator | 2025-04-09 10:19:43.316199 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-04-09 10:19:43.316216 | orchestrator | Wednesday 09 April 2025 10:17:36 +0000 (0:00:01.159) 0:02:28.343 ******* 2025-04-09 10:19:43.316229 | orchestrator | changed: [testbed-manager] 2025-04-09 10:19:43.316241 | orchestrator | 2025-04-09 10:19:43.316254 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-04-09 10:19:43.316292 | orchestrator | Wednesday 09 April 2025 10:17:37 +0000 (0:00:00.867) 0:02:29.211 ******* 2025-04-09 10:19:43.316306 | orchestrator | changed: [testbed-manager] 2025-04-09 10:19:43.316319 | orchestrator | 2025-04-09 10:19:43.316331 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-04-09 10:19:43.316344 | orchestrator | Wednesday 09 April 2025 10:17:38 +0000 (0:00:00.699) 0:02:29.910 ******* 2025-04-09 10:19:43.316356 | orchestrator | changed: [testbed-manager -> localhost] 2025-04-09 10:19:43.316369 | orchestrator | 2025-04-09 10:19:43.316381 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-04-09 10:19:43.316394 | orchestrator | Wednesday 09 April 2025 10:17:39 +0000 (0:00:01.668) 0:02:31.579 ******* 2025-04-09 10:19:43.316406 | orchestrator | changed: [testbed-manager -> localhost] 2025-04-09 10:19:43.316419 | orchestrator | 2025-04-09 10:19:43.316432 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-04-09 10:19:43.316444 | orchestrator | Wednesday 09 April 2025 10:17:40 +0000 (0:00:00.862) 0:02:32.442 ******* 2025-04-09 10:19:43.316457 | orchestrator | changed: [testbed-manager] 2025-04-09 10:19:43.316469 | orchestrator | 
2025-04-09 10:19:43.316482 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-04-09 10:19:43.316494 | orchestrator | Wednesday 09 April 2025 10:17:41 +0000 (0:00:00.515) 0:02:32.958 ******* 2025-04-09 10:19:43.316507 | orchestrator | changed: [testbed-manager] 2025-04-09 10:19:43.316519 | orchestrator | 2025-04-09 10:19:43.316532 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-04-09 10:19:43.316544 | orchestrator | 2025-04-09 10:19:43.316557 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-04-09 10:19:43.316569 | orchestrator | Wednesday 09 April 2025 10:17:41 +0000 (0:00:00.427) 0:02:33.386 ******* 2025-04-09 10:19:43.316582 | orchestrator | ok: [testbed-manager] 2025-04-09 10:19:43.316594 | orchestrator | 2025-04-09 10:19:43.316607 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-04-09 10:19:43.316620 | orchestrator | Wednesday 09 April 2025 10:17:41 +0000 (0:00:00.154) 0:02:33.540 ******* 2025-04-09 10:19:43.316632 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-04-09 10:19:43.316644 | orchestrator | 2025-04-09 10:19:43.316657 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-04-09 10:19:43.316669 | orchestrator | Wednesday 09 April 2025 10:17:41 +0000 (0:00:00.272) 0:02:33.813 ******* 2025-04-09 10:19:43.316682 | orchestrator | ok: [testbed-manager] 2025-04-09 10:19:43.316694 | orchestrator | 2025-04-09 10:19:43.316706 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-04-09 10:19:43.316719 | orchestrator | Wednesday 09 April 2025 10:17:43 +0000 (0:00:01.575) 0:02:35.389 ******* 2025-04-09 10:19:43.316731 | orchestrator | ok: [testbed-manager] 2025-04-09 10:19:43.316744 | orchestrator | 
2025-04-09 10:19:43.316756 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-04-09 10:19:43.316769 | orchestrator | Wednesday 09 April 2025 10:17:45 +0000 (0:00:01.734) 0:02:37.123 ******* 2025-04-09 10:19:43.316781 | orchestrator | changed: [testbed-manager] 2025-04-09 10:19:43.316794 | orchestrator | 2025-04-09 10:19:43.316806 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-04-09 10:19:43.316818 | orchestrator | Wednesday 09 April 2025 10:17:46 +0000 (0:00:01.062) 0:02:38.186 ******* 2025-04-09 10:19:43.316838 | orchestrator | ok: [testbed-manager] 2025-04-09 10:19:43.316851 | orchestrator | 2025-04-09 10:19:43.316863 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-04-09 10:19:43.316876 | orchestrator | Wednesday 09 April 2025 10:17:46 +0000 (0:00:00.579) 0:02:38.766 ******* 2025-04-09 10:19:43.316888 | orchestrator | changed: [testbed-manager] 2025-04-09 10:19:43.316900 | orchestrator | 2025-04-09 10:19:43.316913 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-04-09 10:19:43.316925 | orchestrator | Wednesday 09 April 2025 10:17:56 +0000 (0:00:09.946) 0:02:48.712 ******* 2025-04-09 10:19:43.316938 | orchestrator | changed: [testbed-manager] 2025-04-09 10:19:43.316950 | orchestrator | 2025-04-09 10:19:43.316963 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-04-09 10:19:43.316975 | orchestrator | Wednesday 09 April 2025 10:18:12 +0000 (0:00:15.131) 0:03:03.843 ******* 2025-04-09 10:19:43.316988 | orchestrator | ok: [testbed-manager] 2025-04-09 10:19:43.317000 | orchestrator | 2025-04-09 10:19:43.317013 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-04-09 10:19:43.317025 | orchestrator | 2025-04-09 10:19:43.317038 | orchestrator | TASK 
[k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-04-09 10:19:43.317055 | orchestrator | Wednesday 09 April 2025 10:18:12 +0000 (0:00:00.502) 0:03:04.346 ******* 2025-04-09 10:19:43.317068 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:19:43.317080 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:19:43.317092 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:19:43.317105 | orchestrator | 2025-04-09 10:19:43.317117 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-04-09 10:19:43.317130 | orchestrator | Wednesday 09 April 2025 10:18:12 +0000 (0:00:00.431) 0:03:04.777 ******* 2025-04-09 10:19:43.317143 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.317155 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.317168 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.317180 | orchestrator | 2025-04-09 10:19:43.317193 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-04-09 10:19:43.317205 | orchestrator | Wednesday 09 April 2025 10:18:13 +0000 (0:00:00.297) 0:03:05.075 ******* 2025-04-09 10:19:43.317218 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:19:43.317231 | orchestrator | 2025-04-09 10:19:43.317243 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-04-09 10:19:43.317261 | orchestrator | Wednesday 09 April 2025 10:18:13 +0000 (0:00:00.535) 0:03:05.610 ******* 2025-04-09 10:19:43.317288 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-04-09 10:19:43.317302 | orchestrator | 2025-04-09 10:19:43.317314 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-04-09 10:19:43.317327 | orchestrator | Wednesday 09 April 2025 10:18:14 +0000 (0:00:00.761) 0:03:06.372 ******* 
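The "Wait for connectivity to kube VIP" task above blocks until the cluster's virtual IP answers on the API port before Cilium is installed. A rough Python equivalent of that readiness gate (host, port, and timing values are placeholders, not values taken from the role):

```python
import socket
import time

def wait_for_port(host: str, port: int, attempts: int = 30,
                  delay: float = 1.0) -> bool:
    """Return True once a TCP connection to host:port succeeds,
    retrying up to `attempts` times with `delay` seconds in between."""
    for i in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            if i < attempts - 1:
                time.sleep(delay)
    return False

# Demonstrate against a local listener instead of a real kube VIP.
server = socket.socket()
server.bind(("127.0.0.1", 0))        # ephemeral port
server.listen(1)
host, port = server.getsockname()
reachable = wait_for_port(host, port, attempts=3, delay=0.1)
server.close()
```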
2025-04-09 10:19:43.317340 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-09 10:19:43.317352 | orchestrator | 2025-04-09 10:19:43.317364 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-04-09 10:19:43.317377 | orchestrator | Wednesday 09 April 2025 10:18:15 +0000 (0:00:00.940) 0:03:07.313 ******* 2025-04-09 10:19:43.317389 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.317402 | orchestrator | 2025-04-09 10:19:43.317415 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-04-09 10:19:43.317427 | orchestrator | Wednesday 09 April 2025 10:18:16 +0000 (0:00:00.563) 0:03:07.876 ******* 2025-04-09 10:19:43.317439 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-09 10:19:43.317452 | orchestrator | 2025-04-09 10:19:43.317465 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-04-09 10:19:43.317477 | orchestrator | Wednesday 09 April 2025 10:18:17 +0000 (0:00:01.076) 0:03:08.952 ******* 2025-04-09 10:19:43.317490 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.317509 | orchestrator | 2025-04-09 10:19:43.317522 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-04-09 10:19:43.317534 | orchestrator | Wednesday 09 April 2025 10:18:17 +0000 (0:00:00.197) 0:03:09.150 ******* 2025-04-09 10:19:43.317547 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.317564 | orchestrator | 2025-04-09 10:19:43.317576 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-04-09 10:19:43.317589 | orchestrator | Wednesday 09 April 2025 10:18:17 +0000 (0:00:00.229) 0:03:09.380 ******* 2025-04-09 10:19:43.317601 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.317614 | orchestrator | 2025-04-09 10:19:43.317627 | orchestrator | TASK [k3s_server_post : Log result] 
******************************************** 2025-04-09 10:19:43.317639 | orchestrator | Wednesday 09 April 2025 10:18:17 +0000 (0:00:00.277) 0:03:09.657 ******* 2025-04-09 10:19:43.317651 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.317664 | orchestrator | 2025-04-09 10:19:43.317676 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-04-09 10:19:43.317689 | orchestrator | Wednesday 09 April 2025 10:18:18 +0000 (0:00:00.255) 0:03:09.913 ******* 2025-04-09 10:19:43.317701 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-04-09 10:19:43.317714 | orchestrator | 2025-04-09 10:19:43.317726 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-04-09 10:19:43.317739 | orchestrator | Wednesday 09 April 2025 10:18:23 +0000 (0:00:05.542) 0:03:15.456 ******* 2025-04-09 10:19:43.317751 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-04-09 10:19:43.317764 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
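The `FAILED - RETRYING ... (30 retries left)` line above is Ansible's `retries`/`until` loop in action: "Wait for Cilium resources" re-runs a rollout check for each listed deployment and daemonset until it reports ready. The same pattern, sketched generically in Python (the checker below is a stand-in, not an actual `kubectl rollout status` call):

```python
import time

def retry_until(check, retries: int = 30, delay: float = 1.0):
    """Re-run `check` until it returns truthy, mirroring Ansible's
    retries/until loop; raise if all retries are exhausted."""
    for remaining in range(retries, 0, -1):
        result = check()
        if result:
            return result
        print(f"FAILED - RETRYING ({remaining - 1} retries left).")
        time.sleep(delay)
    raise TimeoutError("resource never became ready")

# Stand-in for a rollout check: becomes ready on the third poll.
calls = {"n": 0}
def rollout_ready():
    calls["n"] += 1
    return calls["n"] >= 3

result = retry_until(rollout_ready, retries=5, delay=0)
```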
2025-04-09 10:19:43.317776 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-04-09 10:19:43.317789 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-04-09 10:19:43.317801 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-04-09 10:19:43.317814 | orchestrator | 2025-04-09 10:19:43.317826 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-04-09 10:19:43.317839 | orchestrator | Wednesday 09 April 2025 10:19:06 +0000 (0:00:42.994) 0:03:58.450 ******* 2025-04-09 10:19:43.317851 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-09 10:19:43.317864 | orchestrator | 2025-04-09 10:19:43.317877 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-04-09 10:19:43.317889 | orchestrator | Wednesday 09 April 2025 10:19:08 +0000 (0:00:01.849) 0:04:00.299 ******* 2025-04-09 10:19:43.317902 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-04-09 10:19:43.317914 | orchestrator | 2025-04-09 10:19:43.317927 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-04-09 10:19:43.317939 | orchestrator | Wednesday 09 April 2025 10:19:10 +0000 (0:00:02.054) 0:04:02.354 ******* 2025-04-09 10:19:43.317952 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-04-09 10:19:43.317964 | orchestrator | 2025-04-09 10:19:43.317980 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-04-09 10:19:43.317993 | orchestrator | Wednesday 09 April 2025 10:19:12 +0000 (0:00:01.532) 0:04:03.886 ******* 2025-04-09 10:19:43.318006 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.318042 | orchestrator | 2025-04-09 10:19:43.318055 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-04-09 10:19:43.318067 | orchestrator 
| Wednesday 09 April 2025 10:19:12 +0000 (0:00:00.256) 0:04:04.143 ******* 2025-04-09 10:19:43.318080 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-04-09 10:19:43.318093 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-04-09 10:19:43.318105 | orchestrator | 2025-04-09 10:19:43.318118 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-04-09 10:19:43.318137 | orchestrator | Wednesday 09 April 2025 10:19:15 +0000 (0:00:02.957) 0:04:07.100 ******* 2025-04-09 10:19:43.318150 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.318162 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.318175 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.318187 | orchestrator | 2025-04-09 10:19:43.318200 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-04-09 10:19:43.318212 | orchestrator | Wednesday 09 April 2025 10:19:15 +0000 (0:00:00.378) 0:04:07.478 ******* 2025-04-09 10:19:43.318224 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:19:43.318242 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:19:43.318254 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:19:43.318281 | orchestrator | 2025-04-09 10:19:43.318301 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-04-09 10:19:43.318314 | orchestrator | 2025-04-09 10:19:43.318327 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-04-09 10:19:43.318339 | orchestrator | Wednesday 09 April 2025 10:19:16 +0000 (0:00:00.975) 0:04:08.454 ******* 2025-04-09 10:19:43.318352 | orchestrator | ok: [testbed-manager] 2025-04-09 10:19:43.318364 | orchestrator | 2025-04-09 10:19:43.318377 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2025-04-09 10:19:43.318389 | orchestrator | Wednesday 09 April 2025 10:19:16 +0000 (0:00:00.131) 0:04:08.586 ******* 2025-04-09 10:19:43.318402 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-04-09 10:19:43.318414 | orchestrator | 2025-04-09 10:19:43.318427 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-04-09 10:19:43.318439 | orchestrator | Wednesday 09 April 2025 10:19:17 +0000 (0:00:00.400) 0:04:08.987 ******* 2025-04-09 10:19:43.318451 | orchestrator | changed: [testbed-manager] 2025-04-09 10:19:43.318464 | orchestrator | 2025-04-09 10:19:43.318476 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-04-09 10:19:43.318489 | orchestrator | 2025-04-09 10:19:43.318501 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-04-09 10:19:43.318514 | orchestrator | Wednesday 09 April 2025 10:19:23 +0000 (0:00:06.402) 0:04:15.389 ******* 2025-04-09 10:19:43.318526 | orchestrator | ok: [testbed-node-3] 2025-04-09 10:19:43.318539 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:19:43.318552 | orchestrator | ok: [testbed-node-5] 2025-04-09 10:19:43.318564 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:19:43.318577 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:19:43.318589 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:19:43.318601 | orchestrator | 2025-04-09 10:19:43.318614 | orchestrator | TASK [Manage labels] *********************************************************** 2025-04-09 10:19:43.318626 | orchestrator | Wednesday 09 April 2025 10:19:24 +0000 (0:00:00.834) 0:04:16.223 ******* 2025-04-09 10:19:43.318639 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-04-09 10:19:43.318652 | orchestrator | ok: [testbed-node-4 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2025-04-09 10:19:43.318664 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-04-09 10:19:43.318676 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-04-09 10:19:43.318689 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-04-09 10:19:43.318701 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-04-09 10:19:43.318713 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-04-09 10:19:43.318726 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-04-09 10:19:43.318738 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-04-09 10:19:43.318750 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-04-09 10:19:43.318768 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-04-09 10:19:43.318785 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-04-09 10:19:43.318798 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-04-09 10:19:43.318810 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-04-09 10:19:43.318823 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-04-09 10:19:43.318835 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-04-09 10:19:43.318847 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-04-09 10:19:43.318859 | orchestrator | ok: [testbed-node-2 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2025-04-09 10:19:43.318872 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-04-09 10:19:43.318884 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-04-09 10:19:43.318897 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-04-09 10:19:43.318909 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-04-09 10:19:43.318921 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-04-09 10:19:43.318934 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-04-09 10:19:43.318946 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-04-09 10:19:43.318959 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-04-09 10:19:43.318971 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-04-09 10:19:43.318983 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-04-09 10:19:43.318996 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-04-09 10:19:43.319008 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-04-09 10:19:43.319021 | orchestrator | 2025-04-09 10:19:43.319038 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-04-09 10:19:43.319051 | orchestrator | Wednesday 09 April 2025 10:19:40 +0000 (0:00:16.500) 0:04:32.724 ******* 2025-04-09 10:19:43.319063 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:19:43.319076 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:19:43.319088 | orchestrator | 
skipping: [testbed-node-5] 2025-04-09 10:19:43.319100 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.319113 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.319125 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.319137 | orchestrator | 2025-04-09 10:19:43.319150 | orchestrator | TASK [Manage taints] *********************************************************** 2025-04-09 10:19:43.319163 | orchestrator | Wednesday 09 April 2025 10:19:41 +0000 (0:00:00.593) 0:04:33.317 ******* 2025-04-09 10:19:43.319175 | orchestrator | skipping: [testbed-node-3] 2025-04-09 10:19:43.319187 | orchestrator | skipping: [testbed-node-4] 2025-04-09 10:19:43.319200 | orchestrator | skipping: [testbed-node-5] 2025-04-09 10:19:43.319212 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:19:43.319224 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:19:43.319237 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:19:43.319249 | orchestrator | 2025-04-09 10:19:43.319261 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 10:19:43.319290 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:19:43.319304 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-04-09 10:19:43.319323 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-04-09 10:19:43.319336 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-04-09 10:19:43.319349 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-04-09 10:19:43.319361 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-04-09 10:19:43.319374 | orchestrator | testbed-node-5 : ok=14  
changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-04-09 10:19:43.319387 | orchestrator | 2025-04-09 10:19:43.319399 | orchestrator | 2025-04-09 10:19:43.319412 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-09 10:19:43.319424 | orchestrator | Wednesday 09 April 2025 10:19:42 +0000 (0:00:00.675) 0:04:33.992 ******* 2025-04-09 10:19:43.319437 | orchestrator | =============================================================================== 2025-04-09 10:19:43.319449 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.73s 2025-04-09 10:19:43.319462 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.99s 2025-04-09 10:19:43.319474 | orchestrator | Manage labels ---------------------------------------------------------- 16.50s 2025-04-09 10:19:43.319487 | orchestrator | kubectl : Install required packages ------------------------------------ 15.13s 2025-04-09 10:19:43.319499 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 14.74s 2025-04-09 10:19:43.319512 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 9.95s 2025-04-09 10:19:43.319547 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.87s 2025-04-09 10:19:43.319560 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.48s 2025-04-09 10:19:43.319573 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.40s 2025-04-09 10:19:43.319586 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.54s 2025-04-09 10:19:43.319598 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.75s 2025-04-09 10:19:43.319611 | orchestrator | k3s_server : Remove manifests and folders that are 
only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.12s 2025-04-09 10:19:43.319623 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.96s 2025-04-09 10:19:43.319636 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.83s 2025-04-09 10:19:43.319649 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.58s 2025-04-09 10:19:43.319661 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.05s 2025-04-09 10:19:43.319674 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.01s 2025-04-09 10:19:43.319687 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.01s 2025-04-09 10:19:43.319699 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.90s 2025-04-09 10:19:43.319711 | orchestrator | k3s_server_post : Set _cilium_bgp_neighbors fact ------------------------ 1.85s 2025-04-09 10:19:43.319724 | orchestrator | 2025-04-09 10:19:43 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:19:43.319742 | orchestrator | 2025-04-09 10:19:43 | INFO  | Task 561bef2c-4e01-4190-b5ed-7ef8e7ad2413 is in state SUCCESS 2025-04-09 10:19:46.340071 | orchestrator | 2025-04-09 10:19:43 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:19:46.340365 | orchestrator | 2025-04-09 10:19:46 | INFO  | Task ce36b379-cb55-426a-ac49-b3e24f897755 is in state STARTED 2025-04-09 10:19:46.340888 | orchestrator | 2025-04-09 10:19:46 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:19:46.340918 | orchestrator | 2025-04-09 10:19:46 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:19:46.341498 | orchestrator | 2025-04-09 10:19:46 | INFO  | Task 
af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:19:46.342203 | orchestrator | 2025-04-09 10:19:46 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:19:46.342868 | orchestrator | 2025-04-09 10:19:46 | INFO  | Task 1dc6be3b-3518-4816-be21-39f052fd8ff8 is in state STARTED 2025-04-09 10:19:46.343189 | orchestrator | 2025-04-09 10:19:46 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:19:49.380381 | orchestrator | 2025-04-09 10:19:49 | INFO  | Task ce36b379-cb55-426a-ac49-b3e24f897755 is in state STARTED 2025-04-09 10:19:49.381152 | orchestrator | 2025-04-09 10:19:49 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:19:49.381184 | orchestrator | 2025-04-09 10:19:49 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:19:49.381199 | orchestrator | 2025-04-09 10:19:49 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:19:49.381214 | orchestrator | 2025-04-09 10:19:49 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:19:49.381234 | orchestrator | 2025-04-09 10:19:49 | INFO  | Task 1dc6be3b-3518-4816-be21-39f052fd8ff8 is in state STARTED 2025-04-09 10:19:52.410060 | orchestrator | 2025-04-09 10:19:49 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:19:52.410192 | orchestrator | 2025-04-09 10:19:52 | INFO  | Task ce36b379-cb55-426a-ac49-b3e24f897755 is in state STARTED 2025-04-09 10:19:52.410613 | orchestrator | 2025-04-09 10:19:52 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:19:52.415894 | orchestrator | 2025-04-09 10:19:52 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:19:52.424835 | orchestrator | 2025-04-09 10:19:52 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:19:52.430427 | orchestrator | 2025-04-09 10:19:52 | INFO  | Task 
a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:19:55.462161 | orchestrator | 2025-04-09 10:19:52 | INFO  | Task 1dc6be3b-3518-4816-be21-39f052fd8ff8 is in state SUCCESS 2025-04-09 10:19:55.462355 | orchestrator | 2025-04-09 10:19:52 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:19:55.462397 | orchestrator | 2025-04-09 10:19:55 | INFO  | Task ce36b379-cb55-426a-ac49-b3e24f897755 is in state SUCCESS 2025-04-09 10:19:55.462749 | orchestrator | 2025-04-09 10:19:55 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:19:55.462784 | orchestrator | 2025-04-09 10:19:55 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:19:55.463661 | orchestrator | 2025-04-09 10:19:55 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:19:55.464941 | orchestrator | 2025-04-09 10:19:55 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:19:58.509535 | orchestrator | 2025-04-09 10:19:55 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:19:58.509706 | orchestrator | 2025-04-09 10:19:58 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:19:58.510323 | orchestrator | 2025-04-09 10:19:58 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:19:58.510363 | orchestrator | 2025-04-09 10:19:58 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:19:58.510993 | orchestrator | 2025-04-09 10:19:58 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:19:58.511200 | orchestrator | 2025-04-09 10:19:58 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:01.554936 | orchestrator | 2025-04-09 10:20:01 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:01.556610 | orchestrator | 2025-04-09 10:20:01 | INFO  | Task 
b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:20:01.559791 | orchestrator | 2025-04-09 10:20:01 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:01.561101 | orchestrator | 2025-04-09 10:20:01 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:04.612154 | orchestrator | 2025-04-09 10:20:01 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:04.612358 | orchestrator | 2025-04-09 10:20:04 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:04.614600 | orchestrator | 2025-04-09 10:20:04 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:20:04.618118 | orchestrator | 2025-04-09 10:20:04 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:04.624728 | orchestrator | 2025-04-09 10:20:04 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:04.625871 | orchestrator | 2025-04-09 10:20:04 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:07.678799 | orchestrator | 2025-04-09 10:20:07 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:07.681393 | orchestrator | 2025-04-09 10:20:07 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:20:07.685794 | orchestrator | 2025-04-09 10:20:07 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:07.686463 | orchestrator | 2025-04-09 10:20:07 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:10.764805 | orchestrator | 2025-04-09 10:20:07 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:10.765000 | orchestrator | 2025-04-09 10:20:10 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:10.767777 | orchestrator | 2025-04-09 10:20:10 | INFO  | Task 
b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:20:10.767810 | orchestrator | 2025-04-09 10:20:10 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:10.767831 | orchestrator | 2025-04-09 10:20:10 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:13.816883 | orchestrator | 2025-04-09 10:20:10 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:13.817025 | orchestrator | 2025-04-09 10:20:13 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:13.817348 | orchestrator | 2025-04-09 10:20:13 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:20:13.818211 | orchestrator | 2025-04-09 10:20:13 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:13.819085 | orchestrator | 2025-04-09 10:20:13 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:16.864947 | orchestrator | 2025-04-09 10:20:13 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:16.865068 | orchestrator | 2025-04-09 10:20:16 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:16.868601 | orchestrator | 2025-04-09 10:20:16 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:20:16.870078 | orchestrator | 2025-04-09 10:20:16 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:16.871963 | orchestrator | 2025-04-09 10:20:16 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:19.934334 | orchestrator | 2025-04-09 10:20:16 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:19.934463 | orchestrator | 2025-04-09 10:20:19 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:19.937248 | orchestrator | 2025-04-09 10:20:19 | INFO  | Task 
b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:20:19.939107 | orchestrator | 2025-04-09 10:20:19 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:19.939142 | orchestrator | 2025-04-09 10:20:19 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:19.939441 | orchestrator | 2025-04-09 10:20:19 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:22.977434 | orchestrator | 2025-04-09 10:20:22 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:22.978535 | orchestrator | 2025-04-09 10:20:22 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:20:22.978579 | orchestrator | 2025-04-09 10:20:22 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:22.978867 | orchestrator | 2025-04-09 10:20:22 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:26.025537 | orchestrator | 2025-04-09 10:20:22 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:26.025682 | orchestrator | 2025-04-09 10:20:26 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:26.026841 | orchestrator | 2025-04-09 10:20:26 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:20:26.029180 | orchestrator | 2025-04-09 10:20:26 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:26.030533 | orchestrator | 2025-04-09 10:20:26 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:29.096463 | orchestrator | 2025-04-09 10:20:26 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:29.096595 | orchestrator | 2025-04-09 10:20:29 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:29.100467 | orchestrator | 2025-04-09 10:20:29 | INFO  | Task 
b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:20:29.101907 | orchestrator | 2025-04-09 10:20:29 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:29.101949 | orchestrator | 2025-04-09 10:20:29 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:29.104235 | orchestrator | 2025-04-09 10:20:29 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:32.146406 | orchestrator | 2025-04-09 10:20:32 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:32.147266 | orchestrator | 2025-04-09 10:20:32 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:20:32.147329 | orchestrator | 2025-04-09 10:20:32 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:32.147382 | orchestrator | 2025-04-09 10:20:32 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:35.204682 | orchestrator | 2025-04-09 10:20:32 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:35.204811 | orchestrator | 2025-04-09 10:20:35 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:35.205268 | orchestrator | 2025-04-09 10:20:35 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:20:35.207040 | orchestrator | 2025-04-09 10:20:35 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:35.208882 | orchestrator | 2025-04-09 10:20:35 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:35.209566 | orchestrator | 2025-04-09 10:20:35 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:38.249575 | orchestrator | 2025-04-09 10:20:38 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:38.251202 | orchestrator | 2025-04-09 10:20:38 | INFO  | Task 
b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:20:38.253574 | orchestrator | 2025-04-09 10:20:38 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:38.255694 | orchestrator | 2025-04-09 10:20:38 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:41.305165 | orchestrator | 2025-04-09 10:20:38 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:41.305354 | orchestrator | 2025-04-09 10:20:41 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:41.306611 | orchestrator | 2025-04-09 10:20:41 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state STARTED 2025-04-09 10:20:41.306658 | orchestrator | 2025-04-09 10:20:41 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:41.307346 | orchestrator | 2025-04-09 10:20:41 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:44.335772 | orchestrator | 2025-04-09 10:20:41 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:44.335919 | orchestrator | 2025-04-09 10:20:44 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:44.336334 | orchestrator | 2025-04-09 10:20:44 | INFO  | Task b7f3d5b0-eaaa-437d-9361-643659f91d70 is in state SUCCESS 2025-04-09 10:20:44.337621 | orchestrator | 2025-04-09 10:20:44.337658 | orchestrator | 2025-04-09 10:20:44.337674 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-04-09 10:20:44.337691 | orchestrator | 2025-04-09 10:20:44.337707 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-04-09 10:20:44.337722 | orchestrator | Wednesday 09 April 2025 10:19:46 +0000 (0:00:00.204) 0:00:00.204 ******* 2025-04-09 10:20:44.337738 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-04-09 10:20:44.337754 | 
orchestrator | 2025-04-09 10:20:44.337769 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-04-09 10:20:44.337784 | orchestrator | Wednesday 09 April 2025 10:19:47 +0000 (0:00:00.782) 0:00:00.987 ******* 2025-04-09 10:20:44.337799 | orchestrator | changed: [testbed-manager] 2025-04-09 10:20:44.337816 | orchestrator | 2025-04-09 10:20:44.337832 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-04-09 10:20:44.337847 | orchestrator | Wednesday 09 April 2025 10:19:48 +0000 (0:00:01.169) 0:00:02.157 ******* 2025-04-09 10:20:44.337862 | orchestrator | changed: [testbed-manager] 2025-04-09 10:20:44.337877 | orchestrator | 2025-04-09 10:20:44.337892 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 10:20:44.337930 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:20:44.337947 | orchestrator | 2025-04-09 10:20:44.337961 | orchestrator | 2025-04-09 10:20:44.337975 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-09 10:20:44.337989 | orchestrator | Wednesday 09 April 2025 10:19:49 +0000 (0:00:00.448) 0:00:02.606 ******* 2025-04-09 10:20:44.338003 | orchestrator | =============================================================================== 2025-04-09 10:20:44.338066 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.17s 2025-04-09 10:20:44.338082 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.78s 2025-04-09 10:20:44.338197 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.45s 2025-04-09 10:20:44.338216 | orchestrator | 2025-04-09 10:20:44.338230 | orchestrator | 2025-04-09 10:20:44.338244 | orchestrator | PLAY [Prepare kubeconfig file] 
************************************************* 2025-04-09 10:20:44.338258 | orchestrator | 2025-04-09 10:20:44.338273 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-04-09 10:20:44.338313 | orchestrator | Wednesday 09 April 2025 10:19:46 +0000 (0:00:00.220) 0:00:00.220 ******* 2025-04-09 10:20:44.338328 | orchestrator | ok: [testbed-manager] 2025-04-09 10:20:44.338343 | orchestrator | 2025-04-09 10:20:44.338358 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-04-09 10:20:44.338387 | orchestrator | Wednesday 09 April 2025 10:19:46 +0000 (0:00:00.519) 0:00:00.739 ******* 2025-04-09 10:20:44.338401 | orchestrator | ok: [testbed-manager] 2025-04-09 10:20:44.338416 | orchestrator | 2025-04-09 10:20:44.338430 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-04-09 10:20:44.338444 | orchestrator | Wednesday 09 April 2025 10:19:47 +0000 (0:00:00.540) 0:00:01.280 ******* 2025-04-09 10:20:44.338458 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-04-09 10:20:44.338472 | orchestrator | 2025-04-09 10:20:44.338487 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-04-09 10:20:44.338501 | orchestrator | Wednesday 09 April 2025 10:19:48 +0000 (0:00:00.641) 0:00:01.921 ******* 2025-04-09 10:20:44.338515 | orchestrator | changed: [testbed-manager] 2025-04-09 10:20:44.338529 | orchestrator | 2025-04-09 10:20:44.338544 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-04-09 10:20:44.338557 | orchestrator | Wednesday 09 April 2025 10:19:49 +0000 (0:00:01.199) 0:00:03.121 ******* 2025-04-09 10:20:44.338572 | orchestrator | changed: [testbed-manager] 2025-04-09 10:20:44.338586 | orchestrator | 2025-04-09 10:20:44.338599 | orchestrator | TASK [Make kubeconfig available for use inside the manager 
service] ************ 2025-04-09 10:20:44.338613 | orchestrator | Wednesday 09 April 2025 10:19:49 +0000 (0:00:00.577) 0:00:03.699 ******* 2025-04-09 10:20:44.338627 | orchestrator | changed: [testbed-manager -> localhost] 2025-04-09 10:20:44.338641 | orchestrator | 2025-04-09 10:20:44.338655 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-04-09 10:20:44.338669 | orchestrator | Wednesday 09 April 2025 10:19:51 +0000 (0:00:01.493) 0:00:05.192 ******* 2025-04-09 10:20:44.338684 | orchestrator | changed: [testbed-manager -> localhost] 2025-04-09 10:20:44.338698 | orchestrator | 2025-04-09 10:20:44.338712 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-04-09 10:20:44.338726 | orchestrator | Wednesday 09 April 2025 10:19:52 +0000 (0:00:00.951) 0:00:06.143 ******* 2025-04-09 10:20:44.338740 | orchestrator | ok: [testbed-manager] 2025-04-09 10:20:44.338755 | orchestrator | 2025-04-09 10:20:44.338769 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-04-09 10:20:44.338783 | orchestrator | Wednesday 09 April 2025 10:19:52 +0000 (0:00:00.601) 0:00:06.745 ******* 2025-04-09 10:20:44.338798 | orchestrator | ok: [testbed-manager] 2025-04-09 10:20:44.338812 | orchestrator | 2025-04-09 10:20:44.338826 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 10:20:44.338851 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-09 10:20:44.338866 | orchestrator | 2025-04-09 10:20:44.338880 | orchestrator | 2025-04-09 10:20:44.338893 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-09 10:20:44.338907 | orchestrator | Wednesday 09 April 2025 10:19:53 +0000 (0:00:00.349) 0:00:07.095 ******* 2025-04-09 10:20:44.338921 | orchestrator | 
=============================================================================== 2025-04-09 10:20:44.338936 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.49s 2025-04-09 10:20:44.338950 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.20s 2025-04-09 10:20:44.338964 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.95s 2025-04-09 10:20:44.338990 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.64s 2025-04-09 10:20:44.339005 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.60s 2025-04-09 10:20:44.339019 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.58s 2025-04-09 10:20:44.339033 | orchestrator | Create .kube directory -------------------------------------------------- 0.54s 2025-04-09 10:20:44.339047 | orchestrator | Get home directory of operator user ------------------------------------- 0.52s 2025-04-09 10:20:44.339061 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.35s 2025-04-09 10:20:44.339075 | orchestrator | 2025-04-09 10:20:44.339089 | orchestrator | 2025-04-09 10:20:44.339103 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-04-09 10:20:44.339118 | orchestrator | 2025-04-09 10:20:44.339132 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-04-09 10:20:44.339146 | orchestrator | Wednesday 09 April 2025 10:18:12 +0000 (0:00:00.090) 0:00:00.090 ******* 2025-04-09 10:20:44.339160 | orchestrator | ok: [localhost] => { 2025-04-09 10:20:44.339175 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2025-04-09 10:20:44.339190 | orchestrator | } 2025-04-09 10:20:44.339204 | orchestrator | 2025-04-09 10:20:44.339218 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-04-09 10:20:44.339237 | orchestrator | Wednesday 09 April 2025 10:18:12 +0000 (0:00:00.116) 0:00:00.206 ******* 2025-04-09 10:20:44.339253 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-04-09 10:20:44.339269 | orchestrator | ...ignoring 2025-04-09 10:20:44.339301 | orchestrator | 2025-04-09 10:20:44.339316 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-04-09 10:20:44.339331 | orchestrator | Wednesday 09 April 2025 10:18:16 +0000 (0:00:03.328) 0:00:03.535 ******* 2025-04-09 10:20:44.339345 | orchestrator | skipping: [localhost] 2025-04-09 10:20:44.339359 | orchestrator | 2025-04-09 10:20:44.339373 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-04-09 10:20:44.339387 | orchestrator | Wednesday 09 April 2025 10:18:16 +0000 (0:00:00.107) 0:00:03.643 ******* 2025-04-09 10:20:44.339401 | orchestrator | ok: [localhost] 2025-04-09 10:20:44.339415 | orchestrator | 2025-04-09 10:20:44.339429 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-09 10:20:44.339443 | orchestrator | 2025-04-09 10:20:44.339457 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-09 10:20:44.339471 | orchestrator | Wednesday 09 April 2025 10:18:16 +0000 (0:00:00.431) 0:00:04.074 ******* 2025-04-09 10:20:44.339485 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:20:44.339499 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:20:44.339513 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:20:44.339527 | orchestrator | 2025-04-09 
10:20:44.339542 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-09 10:20:44.339556 | orchestrator | Wednesday 09 April 2025 10:18:17 +0000 (0:00:00.485) 0:00:04.559 ******* 2025-04-09 10:20:44.339577 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-04-09 10:20:44.339591 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-04-09 10:20:44.339606 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-04-09 10:20:44.339620 | orchestrator | 2025-04-09 10:20:44.339634 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-04-09 10:20:44.339648 | orchestrator | 2025-04-09 10:20:44.339662 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-09 10:20:44.339677 | orchestrator | Wednesday 09 April 2025 10:18:18 +0000 (0:00:00.892) 0:00:05.452 ******* 2025-04-09 10:20:44.339691 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:20:44.339705 | orchestrator | 2025-04-09 10:20:44.339720 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-04-09 10:20:44.339734 | orchestrator | Wednesday 09 April 2025 10:18:19 +0000 (0:00:01.755) 0:00:07.207 ******* 2025-04-09 10:20:44.339748 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:20:44.339762 | orchestrator | 2025-04-09 10:20:44.339776 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-04-09 10:20:44.339790 | orchestrator | Wednesday 09 April 2025 10:18:21 +0000 (0:00:02.051) 0:00:09.259 ******* 2025-04-09 10:20:44.339804 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:20:44.339819 | orchestrator | 2025-04-09 10:20:44.339833 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2025-04-09 10:20:44.339847 | orchestrator | Wednesday 09 April 2025 10:18:23 +0000 (0:00:01.285) 0:00:10.544 ******* 2025-04-09 10:20:44.339861 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:20:44.339875 | orchestrator | 2025-04-09 10:20:44.339889 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-04-09 10:20:44.339903 | orchestrator | Wednesday 09 April 2025 10:18:23 +0000 (0:00:00.550) 0:00:11.095 ******* 2025-04-09 10:20:44.340010 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:20:44.340102 | orchestrator | 2025-04-09 10:20:44.340120 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-04-09 10:20:44.340135 | orchestrator | Wednesday 09 April 2025 10:18:24 +0000 (0:00:00.400) 0:00:11.495 ******* 2025-04-09 10:20:44.340149 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:20:44.340163 | orchestrator | 2025-04-09 10:20:44.340177 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-09 10:20:44.340191 | orchestrator | Wednesday 09 April 2025 10:18:24 +0000 (0:00:00.471) 0:00:11.966 ******* 2025-04-09 10:20:44.340205 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:20:44.340220 | orchestrator | 2025-04-09 10:20:44.340234 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-04-09 10:20:44.340256 | orchestrator | Wednesday 09 April 2025 10:18:26 +0000 (0:00:01.770) 0:00:13.737 ******* 2025-04-09 10:20:44.340271 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:20:44.340307 | orchestrator | 2025-04-09 10:20:44.340322 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-04-09 10:20:44.340336 | orchestrator | Wednesday 09 April 2025 10:18:27 +0000 (0:00:00.982) 0:00:14.719 ******* 2025-04-09 
10:20:44.340350 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:20:44.340365 | orchestrator | 2025-04-09 10:20:44.340379 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-04-09 10:20:44.340393 | orchestrator | Wednesday 09 April 2025 10:18:28 +0000 (0:00:00.892) 0:00:15.612 ******* 2025-04-09 10:20:44.340407 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:20:44.340428 | orchestrator | 2025-04-09 10:20:44.340442 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-04-09 10:20:44.340461 | orchestrator | Wednesday 09 April 2025 10:18:29 +0000 (0:00:00.701) 0:00:16.313 ******* 2025-04-09 10:20:44.340480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-09 10:20:44.340509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-09 10:20:44.340525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-09 
10:20:44.340540 | orchestrator | 2025-04-09 10:20:44.340554 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-04-09 10:20:44.340568 | orchestrator | Wednesday 09 April 2025 10:18:30 +0000 (0:00:01.175) 0:00:17.488 ******* 2025-04-09 10:20:44.340593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-09 10:20:44.340616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-09 10:20:44.340631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-09 10:20:44.340646 | orchestrator | 2025-04-09 10:20:44.340661 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-04-09 10:20:44.340675 | orchestrator | Wednesday 09 April 2025 10:18:32 +0000 (0:00:01.916) 0:00:19.405 ******* 2025-04-09 10:20:44.340689 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-09 10:20:44.340703 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-09 10:20:44.340718 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-09 10:20:44.340733 | orchestrator | 2025-04-09 10:20:44.340749 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-04-09 10:20:44.340765 | orchestrator | Wednesday 09 April 2025 10:18:34 +0000 (0:00:01.917) 0:00:21.322 ******* 2025-04-09 10:20:44.340781 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-04-09 10:20:44.340797 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-04-09 10:20:44.340813 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-04-09 10:20:44.340828 | orchestrator | 2025-04-09 10:20:44.340844 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-04-09 10:20:44.340866 | orchestrator | Wednesday 09 April 2025 10:18:36 +0000 (0:00:02.094) 0:00:23.417 ******* 2025-04-09 10:20:44.340889 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-04-09 10:20:44.340905 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-04-09 10:20:44.340921 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-04-09 10:20:44.340937 | orchestrator | 2025-04-09 10:20:44.340952 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-04-09 10:20:44.340968 | orchestrator | Wednesday 09 April 2025 10:18:38 +0000 (0:00:01.941) 0:00:25.359 ******* 2025-04-09 10:20:44.340983 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-04-09 10:20:44.340999 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-04-09 10:20:44.341014 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-04-09 10:20:44.341030 | orchestrator | 2025-04-09 10:20:44.341047 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-04-09 10:20:44.341062 | orchestrator | Wednesday 09 April 2025 10:18:42 +0000 (0:00:04.188) 0:00:29.547 ******* 2025-04-09 10:20:44.341079 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-04-09 10:20:44.341094 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-04-09 10:20:44.341109 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-04-09 10:20:44.341123 | orchestrator | 2025-04-09 10:20:44.341137 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-04-09 10:20:44.341151 | orchestrator | Wednesday 09 April 2025 10:18:45 +0000 (0:00:03.455) 0:00:33.003 ******* 2025-04-09 10:20:44.341165 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-04-09 10:20:44.341180 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-04-09 10:20:44.341198 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-04-09 10:20:44.341213 | orchestrator | 2025-04-09 10:20:44.341227 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-09 10:20:44.341241 | orchestrator | Wednesday 09 April 2025 10:18:48 +0000 (0:00:02.408) 0:00:35.412 ******* 2025-04-09 
10:20:44.341255 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:20:44.341269 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:20:44.341313 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:20:44.341328 | orchestrator | 2025-04-09 10:20:44.341342 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-04-09 10:20:44.341356 | orchestrator | Wednesday 09 April 2025 10:18:48 +0000 (0:00:00.727) 0:00:36.139 ******* 2025-04-09 10:20:44.341372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-09 10:20:44.341403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-09 10:20:44.341419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-09 10:20:44.341434 | orchestrator | 2025-04-09 10:20:44.341448 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-04-09 10:20:44.341462 | orchestrator | Wednesday 09 April 2025 
10:18:50 +0000 (0:00:01.558) 0:00:37.698 ******* 2025-04-09 10:20:44.341476 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:20:44.341490 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:20:44.341504 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:20:44.341517 | orchestrator | 2025-04-09 10:20:44.341532 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-04-09 10:20:44.341546 | orchestrator | Wednesday 09 April 2025 10:18:51 +0000 (0:00:01.022) 0:00:38.720 ******* 2025-04-09 10:20:44.341560 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:20:44.341574 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:20:44.341588 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:20:44.341602 | orchestrator | 2025-04-09 10:20:44.341616 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-04-09 10:20:44.341630 | orchestrator | Wednesday 09 April 2025 10:18:59 +0000 (0:00:07.945) 0:00:46.666 ******* 2025-04-09 10:20:44.341644 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:20:44.341657 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:20:44.341672 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:20:44.341686 | orchestrator | 2025-04-09 10:20:44.341700 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-09 10:20:44.341714 | orchestrator | 2025-04-09 10:20:44.341728 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-09 10:20:44.341742 | orchestrator | Wednesday 09 April 2025 10:18:59 +0000 (0:00:00.525) 0:00:47.192 ******* 2025-04-09 10:20:44.341756 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:20:44.341770 | orchestrator | 2025-04-09 10:20:44.341784 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-09 10:20:44.341809 | orchestrator | Wednesday 09 
April 2025 10:19:00 +0000 (0:00:00.682) 0:00:47.874 ******* 2025-04-09 10:20:44.341824 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:20:44.341838 | orchestrator | 2025-04-09 10:20:44.341852 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-09 10:20:44.341866 | orchestrator | Wednesday 09 April 2025 10:19:01 +0000 (0:00:00.410) 0:00:48.285 ******* 2025-04-09 10:20:44.341880 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:20:44.341894 | orchestrator | 2025-04-09 10:20:44.341908 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-04-09 10:20:44.341922 | orchestrator | Wednesday 09 April 2025 10:19:03 +0000 (0:00:02.200) 0:00:50.485 ******* 2025-04-09 10:20:44.341936 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:20:44.341950 | orchestrator | 2025-04-09 10:20:44.341964 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-09 10:20:44.341978 | orchestrator | 2025-04-09 10:20:44.341992 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-09 10:20:44.342006 | orchestrator | Wednesday 09 April 2025 10:20:00 +0000 (0:00:57.073) 0:01:47.558 ******* 2025-04-09 10:20:44.342048 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:20:44.342065 | orchestrator | 2025-04-09 10:20:44.342080 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-09 10:20:44.342094 | orchestrator | Wednesday 09 April 2025 10:20:01 +0000 (0:00:00.841) 0:01:48.400 ******* 2025-04-09 10:20:44.342108 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:20:44.342122 | orchestrator | 2025-04-09 10:20:44.342136 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-09 10:20:44.342150 | orchestrator | Wednesday 09 April 2025 10:20:01 +0000 (0:00:00.241) 0:01:48.642 
******* 2025-04-09 10:20:44.342164 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:20:44.342178 | orchestrator | 2025-04-09 10:20:44.342192 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-04-09 10:20:44.342206 | orchestrator | Wednesday 09 April 2025 10:20:03 +0000 (0:00:01.706) 0:01:50.349 ******* 2025-04-09 10:20:44.342220 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:20:44.342234 | orchestrator | 2025-04-09 10:20:44.342249 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-09 10:20:44.342263 | orchestrator | 2025-04-09 10:20:44.342296 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-09 10:20:44.342312 | orchestrator | Wednesday 09 April 2025 10:20:20 +0000 (0:00:17.163) 0:02:07.512 ******* 2025-04-09 10:20:44.342327 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:20:44.342341 | orchestrator | 2025-04-09 10:20:44.342362 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-09 10:20:44.342377 | orchestrator | Wednesday 09 April 2025 10:20:20 +0000 (0:00:00.587) 0:02:08.100 ******* 2025-04-09 10:20:44.342391 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:20:44.342405 | orchestrator | 2025-04-09 10:20:44.342419 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-09 10:20:44.342433 | orchestrator | Wednesday 09 April 2025 10:20:21 +0000 (0:00:00.297) 0:02:08.398 ******* 2025-04-09 10:20:44.342447 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:20:44.342461 | orchestrator | 2025-04-09 10:20:44.342476 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-04-09 10:20:44.342490 | orchestrator | Wednesday 09 April 2025 10:20:23 +0000 (0:00:01.965) 0:02:10.364 ******* 2025-04-09 10:20:44.342504 | orchestrator | 
changed: [testbed-node-2] 2025-04-09 10:20:44.342518 | orchestrator | 2025-04-09 10:20:44.342532 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-04-09 10:20:44.342546 | orchestrator | 2025-04-09 10:20:44.342560 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-04-09 10:20:44.342574 | orchestrator | Wednesday 09 April 2025 10:20:38 +0000 (0:00:14.934) 0:02:25.298 ******* 2025-04-09 10:20:44.342588 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:20:44.342616 | orchestrator | 2025-04-09 10:20:44.342630 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-04-09 10:20:44.342644 | orchestrator | Wednesday 09 April 2025 10:20:38 +0000 (0:00:00.794) 0:02:26.093 ******* 2025-04-09 10:20:44.342659 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-04-09 10:20:44.342672 | orchestrator | enable_outward_rabbitmq_True 2025-04-09 10:20:44.342687 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-04-09 10:20:44.342701 | orchestrator | outward_rabbitmq_restart 2025-04-09 10:20:44.342715 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:20:44.342729 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:20:44.342743 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:20:44.342757 | orchestrator | 2025-04-09 10:20:44.342771 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-04-09 10:20:44.342785 | orchestrator | skipping: no hosts matched 2025-04-09 10:20:44.342799 | orchestrator | 2025-04-09 10:20:44.342813 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-04-09 10:20:44.342827 | orchestrator | skipping: no hosts matched 2025-04-09 10:20:44.342841 | orchestrator | 2025-04-09 10:20:44.342861 | orchestrator | PLAY 
[Apply rabbitmq (outward) post-configuration] ***************************** 2025-04-09 10:20:44.342875 | orchestrator | skipping: no hosts matched 2025-04-09 10:20:44.342890 | orchestrator | 2025-04-09 10:20:44.342904 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-09 10:20:44.342918 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-04-09 10:20:44.342933 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-09 10:20:44.342947 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-09 10:20:44.342961 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-09 10:20:44.342975 | orchestrator | 2025-04-09 10:20:44.342989 | orchestrator | 2025-04-09 10:20:44.343003 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-09 10:20:44.343017 | orchestrator | Wednesday 09 April 2025 10:20:41 +0000 (0:00:02.829) 0:02:28.923 ******* 2025-04-09 10:20:44.343031 | orchestrator | =============================================================================== 2025-04-09 10:20:44.343045 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 89.17s 2025-04-09 10:20:44.343059 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.95s 2025-04-09 10:20:44.343073 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.87s 2025-04-09 10:20:44.343087 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 4.20s 2025-04-09 10:20:44.343100 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 3.45s 2025-04-09 10:20:44.343114 | orchestrator | Check RabbitMQ service 
-------------------------------------------------- 3.33s 2025-04-09 10:20:44.343128 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.83s 2025-04-09 10:20:44.343143 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.41s 2025-04-09 10:20:44.343157 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.11s 2025-04-09 10:20:44.343171 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.09s 2025-04-09 10:20:44.343185 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.05s 2025-04-09 10:20:44.343199 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.94s 2025-04-09 10:20:44.343213 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.92s 2025-04-09 10:20:44.343234 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.92s 2025-04-09 10:20:44.343248 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.77s 2025-04-09 10:20:44.343262 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.76s 2025-04-09 10:20:44.343276 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.56s 2025-04-09 10:20:44.343547 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 1.29s 2025-04-09 10:20:47.375970 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.18s 2025-04-09 10:20:47.376073 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.02s 2025-04-09 10:20:47.376092 | orchestrator | 2025-04-09 10:20:44 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:47.376108 | orchestrator | 2025-04-09 10:20:44 | INFO 
 | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:47.376122 | orchestrator | 2025-04-09 10:20:44 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:47.376152 | orchestrator | 2025-04-09 10:20:47 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:47.376706 | orchestrator | 2025-04-09 10:20:47 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:47.378354 | orchestrator | 2025-04-09 10:20:47 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:50.427246 | orchestrator | 2025-04-09 10:20:47 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:50.427411 | orchestrator | 2025-04-09 10:20:50 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:50.429335 | orchestrator | 2025-04-09 10:20:50 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:53.473747 | orchestrator | 2025-04-09 10:20:50 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:53.473848 | orchestrator | 2025-04-09 10:20:50 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:53.473876 | orchestrator | 2025-04-09 10:20:53 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:53.476698 | orchestrator | 2025-04-09 10:20:53 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:53.479099 | orchestrator | 2025-04-09 10:20:53 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:53.481204 | orchestrator | 2025-04-09 10:20:53 | INFO  | Task 1b4409d1-2845-417e-b016-f292505857e7 is in state STARTED 2025-04-09 10:20:53.481590 | orchestrator | 2025-04-09 10:20:53 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:56.538229 | orchestrator | 2025-04-09 10:20:56 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in 
state STARTED 2025-04-09 10:20:56.539969 | orchestrator | 2025-04-09 10:20:56 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:56.540010 | orchestrator | 2025-04-09 10:20:56 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:56.540635 | orchestrator | 2025-04-09 10:20:56 | INFO  | Task 1b4409d1-2845-417e-b016-f292505857e7 is in state STARTED 2025-04-09 10:20:59.579141 | orchestrator | 2025-04-09 10:20:56 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:20:59.579437 | orchestrator | 2025-04-09 10:20:59 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:20:59.581351 | orchestrator | 2025-04-09 10:20:59 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:20:59.581421 | orchestrator | 2025-04-09 10:20:59 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:20:59.581865 | orchestrator | 2025-04-09 10:20:59 | INFO  | Task 1b4409d1-2845-417e-b016-f292505857e7 is in state STARTED 2025-04-09 10:20:59.581990 | orchestrator | 2025-04-09 10:20:59 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:21:02.622571 | orchestrator | 2025-04-09 10:21:02 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:21:05.664436 | orchestrator | 2025-04-09 10:21:02 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:21:05.664593 | orchestrator | 2025-04-09 10:21:02 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:21:05.664613 | orchestrator | 2025-04-09 10:21:02 | INFO  | Task 1b4409d1-2845-417e-b016-f292505857e7 is in state STARTED 2025-04-09 10:21:05.664629 | orchestrator | 2025-04-09 10:21:02 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:21:05.664664 | orchestrator | 2025-04-09 10:21:05 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 
10:21:05.665031 | orchestrator | 2025-04-09 10:21:05 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:21:05.665065 | orchestrator | 2025-04-09 10:21:05 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:21:05.665659 | orchestrator | 2025-04-09 10:21:05 | INFO  | Task 1b4409d1-2845-417e-b016-f292505857e7 is in state STARTED 2025-04-09 10:21:08.707565 | orchestrator | 2025-04-09 10:21:05 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:21:08.707752 | orchestrator | 2025-04-09 10:21:08 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:21:08.713795 | orchestrator | 2025-04-09 10:21:08 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:21:11.756403 | orchestrator | 2025-04-09 10:21:08 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:21:11.756638 | orchestrator | 2025-04-09 10:21:08 | INFO  | Task 1b4409d1-2845-417e-b016-f292505857e7 is in state SUCCESS 2025-04-09 10:21:11.756662 | orchestrator | 2025-04-09 10:21:08 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:21:11.756698 | orchestrator | 2025-04-09 10:21:11 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:21:11.757483 | orchestrator | 2025-04-09 10:21:11 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:21:11.757517 | orchestrator | 2025-04-09 10:21:11 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:21:14.810705 | orchestrator | 2025-04-09 10:21:11 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:21:14.810878 | orchestrator | 2025-04-09 10:21:14 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:21:14.813096 | orchestrator | 2025-04-09 10:21:14 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:21:14.815545 | orchestrator 
| 2025-04-09 10:21:14 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:21:14.815785 | orchestrator | 2025-04-09 10:21:14 | INFO  | Wait 1 second(s) until the next check [identical polling output for tasks c036bb65, af1d28cb and a679efa9 (all in state STARTED), repeated every ~3 s from 10:21:17 through 10:21:48, elided] 2025-04-09 10:21:51.467179 | orchestrator | 
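The "Task … is in state STARTED" / "Wait 1 second(s) until the next check" messages above come from the osism client polling the state of background (Celery-style) tasks until they reach a terminal state. A minimal sketch of such a poll loop is shown below; the function name, the simulated state source, and the injectable `sleep` hook are illustrative assumptions, not the actual OSISM client code:

```python
import itertools
import time


def wait_for_task(get_state, task_id, interval=1.0, timeout=300.0, sleep=time.sleep):
    """Poll a task's state until it is terminal, mirroring the log messages above.

    get_state: callable returning the task's current state string.
    sleep: injectable so tests can skip real waiting.
    """
    deadline = time.monotonic() + timeout
    while True:
        state = get_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state in ("SUCCESS", "FAILURE"):
            return state
        if time.monotonic() > deadline:
            raise TimeoutError(f"Task {task_id} did not finish within {timeout}s")
        print(f"Wait {int(interval)} second(s) until the next check")
        sleep(interval)


# Simulated state source: STARTED twice, then SUCCESS forever.
states = itertools.chain(["STARTED", "STARTED"], itertools.repeat("SUCCESS"))
result = wait_for_task(lambda _tid: next(states), "c036bb65", sleep=lambda _s: None)
```

In the log, three such tasks are polled concurrently and each eventually reports SUCCESS, at which point the client prints the next play's output.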
2025-04-09 10:21:51 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:21:54.530281 | orchestrator | 2025-04-09 10:21:51 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:21:54.530428 | orchestrator | 2025-04-09 10:21:51 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:21:54.530445 | orchestrator | 2025-04-09 10:21:51 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:21:54.530476 | orchestrator | 2025-04-09 10:21:54 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:21:54.532136 | orchestrator | 2025-04-09 10:21:54 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:21:54.534655 | orchestrator | 2025-04-09 10:21:54 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:21:57.586935 | orchestrator | 2025-04-09 10:21:54 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:21:57.587065 | orchestrator | 2025-04-09 10:21:57 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:21:57.588901 | orchestrator | 2025-04-09 10:21:57 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:21:57.591100 | orchestrator | 2025-04-09 10:21:57 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:21:57.591432 | orchestrator | 2025-04-09 10:21:57 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:22:00.626886 | orchestrator | 2025-04-09 10:22:00 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:22:00.627076 | orchestrator | 2025-04-09 10:22:00 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:22:00.627871 | orchestrator | 2025-04-09 10:22:00 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:22:03.663520 | orchestrator | 2025-04-09 10:22:00 | INFO  | 
Wait 1 second(s) until the next check 2025-04-09 10:22:03.663683 | orchestrator | 2025-04-09 10:22:03 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:22:03.664606 | orchestrator | 2025-04-09 10:22:03 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:22:03.664642 | orchestrator | 2025-04-09 10:22:03 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:22:06.710619 | orchestrator | 2025-04-09 10:22:03 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:22:06.710756 | orchestrator | 2025-04-09 10:22:06 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:22:09.757291 | orchestrator | 2025-04-09 10:22:06 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:22:09.757454 | orchestrator | 2025-04-09 10:22:06 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:22:09.757492 | orchestrator | 2025-04-09 10:22:06 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:22:09.757529 | orchestrator | 2025-04-09 10:22:09 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:22:09.759033 | orchestrator | 2025-04-09 10:22:09 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:22:09.760095 | orchestrator | 2025-04-09 10:22:09 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:22:12.801854 | orchestrator | 2025-04-09 10:22:09 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:22:12.802069 | orchestrator | 2025-04-09 10:22:12 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:22:12.803797 | orchestrator | 2025-04-09 10:22:12 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:22:12.803834 | orchestrator | 2025-04-09 10:22:12 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state 
STARTED 2025-04-09 10:22:15.847901 | orchestrator | 2025-04-09 10:22:12 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:22:15.848060 | orchestrator | 2025-04-09 10:22:15 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state STARTED 2025-04-09 10:22:15.848293 | orchestrator | 2025-04-09 10:22:15 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:22:15.849098 | orchestrator | 2025-04-09 10:22:15 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:22:18.894589 | orchestrator | 2025-04-09 10:22:15 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:22:18.894746 | orchestrator | 2025-04-09 10:22:18 | INFO  | Task c036bb65-c944-4fc9-92f5-46f084847b01 is in state SUCCESS 2025-04-09 10:22:18.895629 | orchestrator | 2025-04-09 10:22:18.895663 | orchestrator | None 2025-04-09 10:22:18.895677 | orchestrator | 2025-04-09 10:22:18.895690 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-09 10:22:18.895704 | orchestrator | 2025-04-09 10:22:18.895718 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-09 10:22:18.895731 | orchestrator | Wednesday 09 April 2025 10:19:30 +0000 (0:00:00.323) 0:00:00.323 ******* 2025-04-09 10:22:18.895745 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:22:18.895759 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:22:18.895772 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:22:18.895785 | orchestrator | ok: [testbed-node-3] 2025-04-09 10:22:18.895798 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:22:18.895811 | orchestrator | ok: [testbed-node-5] 2025-04-09 10:22:18.895824 | orchestrator | 2025-04-09 10:22:18.895838 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-09 10:22:18.895851 | orchestrator | Wednesday 09 April 2025 10:19:31 +0000 (0:00:01.140) 0:00:01.464 ******* 
2025-04-09 10:22:18.895892 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-04-09 10:22:18.895906 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-04-09 10:22:18.895919 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-04-09 10:22:18.895932 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-04-09 10:22:18.895945 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-04-09 10:22:18.895958 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-04-09 10:22:18.895971 | orchestrator | 2025-04-09 10:22:18.895984 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-04-09 10:22:18.895997 | orchestrator | 2025-04-09 10:22:18.896010 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-04-09 10:22:18.896040 | orchestrator | Wednesday 09 April 2025 10:19:33 +0000 (0:00:01.585) 0:00:03.050 ******* 2025-04-09 10:22:18.896056 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-09 10:22:18.896072 | orchestrator | 2025-04-09 10:22:18.896085 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-04-09 10:22:18.896098 | orchestrator | Wednesday 09 April 2025 10:19:34 +0000 (0:00:01.441) 0:00:04.492 ******* 2025-04-09 10:22:18.896113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.896135 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.896149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.896163 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.896607 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.896637 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.896664 | orchestrator | 2025-04-09 10:22:18.896686 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-04-09 10:22:18.896700 | orchestrator | Wednesday 09 April 2025 10:19:36 +0000 (0:00:02.060) 0:00:06.555 ******* 2025-04-09 10:22:18.896712 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.896732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.896745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.896758 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.896771 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.896784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.896796 | orchestrator | 2025-04-09 10:22:18.896809 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-04-09 10:22:18.896822 | orchestrator | Wednesday 09 April 2025 10:19:40 +0000 (0:00:03.761) 0:00:10.316 ******* 2025-04-09 10:22:18.896834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.896847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.896876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.896890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.896903 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.896921 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.896934 | orchestrator | 2025-04-09 10:22:18.896947 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-04-09 10:22:18.896960 | orchestrator | Wednesday 09 April 2025 10:19:42 +0000 (0:00:01.993) 0:00:12.310 ******* 2025-04-09 10:22:18.896972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.897005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.897020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.897041 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.897054 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.897074 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.897087 | orchestrator | 2025-04-09 10:22:18.897106 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-04-09 10:22:18.897119 | orchestrator | Wednesday 09 April 2025 10:19:46 +0000 (0:00:03.831) 0:00:16.141 ******* 2025-04-09 10:22:18.897137 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.897150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.897163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.897175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.897188 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.897201 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.897214 | orchestrator | 2025-04-09 10:22:18.897229 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-04-09 10:22:18.897243 | orchestrator | Wednesday 09 April 2025 10:19:48 +0000 (0:00:01.984) 0:00:18.126 ******* 2025-04-09 10:22:18.897258 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:22:18.897279 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:22:18.897387 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:22:18.897404 | orchestrator | changed: [testbed-node-3] 2025-04-09 10:22:18.897418 | orchestrator | changed: [testbed-node-5] 2025-04-09 10:22:18.897432 | orchestrator | changed: [testbed-node-4] 2025-04-09 10:22:18.897446 | orchestrator | 2025-04-09 10:22:18.897460 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-04-09 10:22:18.897474 | orchestrator | Wednesday 09 April 2025 10:19:51 +0000 (0:00:02.949) 0:00:21.075 ******* 2025-04-09 10:22:18.897489 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-04-09 10:22:18.897503 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-04-09 10:22:18.897517 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-04-09 
10:22:18.897532 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-04-09 10:22:18.897546 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-04-09 10:22:18.897561 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-04-09 10:22:18.897575 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-09 10:22:18.897590 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-09 10:22:18.897609 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-09 10:22:18.897622 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-09 10:22:18.897635 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-09 10:22:18.897647 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-09 10:22:18.897660 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-09 10:22:18.897675 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-09 10:22:18.897688 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-09 10:22:18.897701 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-09 10:22:18.897714 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-09 10:22:18.897726 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-09 10:22:18.897739 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-09 10:22:18.897753 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-09 10:22:18.897766 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-09 10:22:18.897778 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-09 10:22:18.897791 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-09 10:22:18.897803 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-09 10:22:18.897816 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-09 10:22:18.897828 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-09 10:22:18.897849 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-09 10:22:18.897862 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-09 10:22:18.897875 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-09 10:22:18.897888 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-09 10:22:18.897900 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': 
False}) 2025-04-09 10:22:18.897913 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-09 10:22:18.897926 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-09 10:22:18.897938 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-09 10:22:18.897951 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-09 10:22:18.897963 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-09 10:22:18.897976 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-09 10:22:18.897989 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-09 10:22:18.898001 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-09 10:22:18.898061 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-09 10:22:18.898078 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-09 10:22:18.898090 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-09 10:22:18.898103 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-04-09 10:22:18.898117 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-04-09 10:22:18.898137 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-04-09 10:22:18.898150 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-04-09 10:22:18.898162 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-04-09 10:22:18.898175 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-04-09 10:22:18.898187 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-09 10:22:18.898200 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-09 10:22:18.898213 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-09 10:22:18.898225 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-09 10:22:18.898238 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-09 10:22:18.898250 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-09 10:22:18.898270 | orchestrator | 2025-04-09 10:22:18.898283 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-09 10:22:18.898342 | orchestrator | Wednesday 09 April 2025 10:20:11 +0000 (0:00:20.244) 0:00:41.319 ******* 2025-04-09 10:22:18.898358 | orchestrator | 2025-04-09 10:22:18.898370 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2025-04-09 10:22:18.898383 | orchestrator | Wednesday 09 April 2025 10:20:11 +0000 (0:00:00.062) 0:00:41.382 ******* 2025-04-09 10:22:18.898396 | orchestrator | 2025-04-09 10:22:18.898408 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-09 10:22:18.898427 | orchestrator | Wednesday 09 April 2025 10:20:11 +0000 (0:00:00.240) 0:00:41.622 ******* 2025-04-09 10:22:18.898440 | orchestrator | 2025-04-09 10:22:18.898453 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-09 10:22:18.898466 | orchestrator | Wednesday 09 April 2025 10:20:11 +0000 (0:00:00.055) 0:00:41.678 ******* 2025-04-09 10:22:18.898478 | orchestrator | 2025-04-09 10:22:18.898491 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-09 10:22:18.898504 | orchestrator | Wednesday 09 April 2025 10:20:11 +0000 (0:00:00.051) 0:00:41.730 ******* 2025-04-09 10:22:18.898516 | orchestrator | 2025-04-09 10:22:18.898529 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-09 10:22:18.898541 | orchestrator | Wednesday 09 April 2025 10:20:11 +0000 (0:00:00.051) 0:00:41.782 ******* 2025-04-09 10:22:18.898554 | orchestrator | 2025-04-09 10:22:18.898567 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-04-09 10:22:18.898579 | orchestrator | Wednesday 09 April 2025 10:20:12 +0000 (0:00:00.266) 0:00:42.048 ******* 2025-04-09 10:22:18.898592 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:22:18.898605 | orchestrator | ok: [testbed-node-4] 2025-04-09 10:22:18.898618 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:22:18.898630 | orchestrator | ok: [testbed-node-3] 2025-04-09 10:22:18.898643 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:22:18.898656 | orchestrator | ok: [testbed-node-5] 2025-04-09 
10:22:18.898668 | orchestrator | 2025-04-09 10:22:18.898681 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-04-09 10:22:18.898694 | orchestrator | Wednesday 09 April 2025 10:20:14 +0000 (0:00:01.939) 0:00:43.987 ******* 2025-04-09 10:22:18.898706 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:22:18.898731 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:22:18.898744 | orchestrator | changed: [testbed-node-5] 2025-04-09 10:22:18.898757 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:22:18.898769 | orchestrator | changed: [testbed-node-3] 2025-04-09 10:22:18.898782 | orchestrator | changed: [testbed-node-4] 2025-04-09 10:22:18.898795 | orchestrator | 2025-04-09 10:22:18.898807 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-04-09 10:22:18.898820 | orchestrator | 2025-04-09 10:22:18.898832 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-09 10:22:18.898845 | orchestrator | Wednesday 09 April 2025 10:20:47 +0000 (0:00:33.340) 0:01:17.328 ******* 2025-04-09 10:22:18.898858 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:22:18.898871 | orchestrator | 2025-04-09 10:22:18.898883 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-09 10:22:18.898896 | orchestrator | Wednesday 09 April 2025 10:20:48 +0000 (0:00:00.600) 0:01:17.929 ******* 2025-04-09 10:22:18.898914 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:22:18.898927 | orchestrator | 2025-04-09 10:22:18.898939 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-04-09 10:22:18.898952 | orchestrator | Wednesday 09 April 2025 10:20:48 +0000 (0:00:00.842) 
0:01:18.771 ******* 2025-04-09 10:22:18.898965 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:22:18.898984 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:22:18.898997 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:22:18.899009 | orchestrator | 2025-04-09 10:22:18.899022 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-04-09 10:22:18.899035 | orchestrator | Wednesday 09 April 2025 10:20:49 +0000 (0:00:00.767) 0:01:19.538 ******* 2025-04-09 10:22:18.899047 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:22:18.899060 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:22:18.899079 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:22:18.899092 | orchestrator | 2025-04-09 10:22:18.899104 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-04-09 10:22:18.899117 | orchestrator | Wednesday 09 April 2025 10:20:50 +0000 (0:00:00.577) 0:01:20.116 ******* 2025-04-09 10:22:18.899130 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:22:18.899142 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:22:18.899155 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:22:18.899167 | orchestrator | 2025-04-09 10:22:18.899180 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-04-09 10:22:18.899192 | orchestrator | Wednesday 09 April 2025 10:20:50 +0000 (0:00:00.537) 0:01:20.654 ******* 2025-04-09 10:22:18.899205 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:22:18.899217 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:22:18.899230 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:22:18.899242 | orchestrator | 2025-04-09 10:22:18.899255 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-04-09 10:22:18.899273 | orchestrator | Wednesday 09 April 2025 10:20:51 +0000 (0:00:00.573) 0:01:21.227 ******* 2025-04-09 10:22:18.899286 | orchestrator | ok: 
[testbed-node-0] 2025-04-09 10:22:18.899340 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:22:18.899355 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:22:18.899368 | orchestrator | 2025-04-09 10:22:18.899380 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-04-09 10:22:18.899393 | orchestrator | Wednesday 09 April 2025 10:20:51 +0000 (0:00:00.441) 0:01:21.669 ******* 2025-04-09 10:22:18.899405 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:22:18.899418 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:22:18.899430 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.899443 | orchestrator | 2025-04-09 10:22:18.899455 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-04-09 10:22:18.899468 | orchestrator | Wednesday 09 April 2025 10:20:52 +0000 (0:00:00.534) 0:01:22.203 ******* 2025-04-09 10:22:18.899480 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:22:18.899493 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:22:18.899505 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.899518 | orchestrator | 2025-04-09 10:22:18.899530 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-04-09 10:22:18.899543 | orchestrator | Wednesday 09 April 2025 10:20:52 +0000 (0:00:00.530) 0:01:22.734 ******* 2025-04-09 10:22:18.899555 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:22:18.899568 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:22:18.899580 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.899593 | orchestrator | 2025-04-09 10:22:18.899605 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-04-09 10:22:18.899618 | orchestrator | Wednesday 09 April 2025 10:20:53 +0000 (0:00:00.289) 0:01:23.023 ******* 2025-04-09 10:22:18.899630 | orchestrator | skipping: 
[testbed-node-0] 2025-04-09 10:22:18.899643 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:22:18.899654 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.899664 | orchestrator | 2025-04-09 10:22:18.899674 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-04-09 10:22:18.899684 | orchestrator | Wednesday 09 April 2025 10:20:53 +0000 (0:00:00.452) 0:01:23.476 ******* 2025-04-09 10:22:18.899695 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:22:18.899705 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:22:18.899715 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.899731 | orchestrator | 2025-04-09 10:22:18.899742 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-04-09 10:22:18.899752 | orchestrator | Wednesday 09 April 2025 10:20:54 +0000 (0:00:00.758) 0:01:24.234 ******* 2025-04-09 10:22:18.899762 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:22:18.899772 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:22:18.899783 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.899793 | orchestrator | 2025-04-09 10:22:18.899803 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-04-09 10:22:18.899813 | orchestrator | Wednesday 09 April 2025 10:20:55 +0000 (0:00:00.936) 0:01:25.171 ******* 2025-04-09 10:22:18.899824 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:22:18.899834 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:22:18.899844 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.899854 | orchestrator | 2025-04-09 10:22:18.899865 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-04-09 10:22:18.899875 | orchestrator | Wednesday 09 April 2025 10:20:55 +0000 (0:00:00.648) 0:01:25.819 ******* 2025-04-09 10:22:18.899885 | orchestrator | skipping: 
[testbed-node-0] 2025-04-09 10:22:18.899895 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:22:18.899905 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.899915 | orchestrator | 2025-04-09 10:22:18.899926 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-04-09 10:22:18.899936 | orchestrator | Wednesday 09 April 2025 10:20:56 +0000 (0:00:00.861) 0:01:26.681 ******* 2025-04-09 10:22:18.899946 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:22:18.899956 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:22:18.899967 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.899977 | orchestrator | 2025-04-09 10:22:18.899987 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-04-09 10:22:18.899998 | orchestrator | Wednesday 09 April 2025 10:20:57 +0000 (0:00:01.054) 0:01:27.735 ******* 2025-04-09 10:22:18.900008 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:22:18.900018 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:22:18.900028 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.900038 | orchestrator | 2025-04-09 10:22:18.900049 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-04-09 10:22:18.900059 | orchestrator | Wednesday 09 April 2025 10:20:58 +0000 (0:00:00.459) 0:01:28.195 ******* 2025-04-09 10:22:18.900069 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:22:18.900080 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:22:18.900098 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.900109 | orchestrator | 2025-04-09 10:22:18.900120 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-04-09 10:22:18.900130 | orchestrator | Wednesday 09 April 2025 10:20:58 +0000 (0:00:00.355) 0:01:28.551 ******* 2025-04-09 10:22:18.900140 | orchestrator | skipping: 
[testbed-node-0] 2025-04-09 10:22:18.900151 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:22:18.900166 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.900177 | orchestrator | 2025-04-09 10:22:18.900188 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-09 10:22:18.900198 | orchestrator | Wednesday 09 April 2025 10:20:59 +0000 (0:00:00.335) 0:01:28.886 ******* 2025-04-09 10:22:18.900208 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:22:18.900219 | orchestrator | 2025-04-09 10:22:18.900229 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-04-09 10:22:18.900239 | orchestrator | Wednesday 09 April 2025 10:20:59 +0000 (0:00:00.586) 0:01:29.473 ******* 2025-04-09 10:22:18.900250 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:22:18.900260 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:22:18.900270 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:22:18.900281 | orchestrator | 2025-04-09 10:22:18.900291 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-04-09 10:22:18.900322 | orchestrator | Wednesday 09 April 2025 10:21:00 +0000 (0:00:00.500) 0:01:29.974 ******* 2025-04-09 10:22:18.900333 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:22:18.900343 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:22:18.900354 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:22:18.900364 | orchestrator | 2025-04-09 10:22:18.900374 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-04-09 10:22:18.900385 | orchestrator | Wednesday 09 April 2025 10:21:00 +0000 (0:00:00.792) 0:01:30.766 ******* 2025-04-09 10:22:18.900395 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:22:18.900405 | orchestrator | skipping: [testbed-node-1] 
2025-04-09 10:22:18.900415 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.900426 | orchestrator | 2025-04-09 10:22:18.900436 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-04-09 10:22:18.900446 | orchestrator | Wednesday 09 April 2025 10:21:01 +0000 (0:00:00.401) 0:01:31.168 ******* 2025-04-09 10:22:18.900456 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:22:18.900467 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:22:18.900477 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.900487 | orchestrator | 2025-04-09 10:22:18.900498 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-04-09 10:22:18.900508 | orchestrator | Wednesday 09 April 2025 10:21:01 +0000 (0:00:00.462) 0:01:31.630 ******* 2025-04-09 10:22:18.900518 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:22:18.900529 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:22:18.900539 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.900549 | orchestrator | 2025-04-09 10:22:18.900559 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-04-09 10:22:18.900574 | orchestrator | Wednesday 09 April 2025 10:21:02 +0000 (0:00:00.416) 0:01:32.047 ******* 2025-04-09 10:22:18.900585 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:22:18.900595 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:22:18.900605 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.900616 | orchestrator | 2025-04-09 10:22:18.900626 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-04-09 10:22:18.900636 | orchestrator | Wednesday 09 April 2025 10:21:02 +0000 (0:00:00.387) 0:01:32.435 ******* 2025-04-09 10:22:18.900646 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:22:18.900657 | orchestrator | skipping: 
[testbed-node-1] 2025-04-09 10:22:18.900667 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.900678 | orchestrator | 2025-04-09 10:22:18.900688 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-04-09 10:22:18.900698 | orchestrator | Wednesday 09 April 2025 10:21:02 +0000 (0:00:00.321) 0:01:32.756 ******* 2025-04-09 10:22:18.900708 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:22:18.900719 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:22:18.900851 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.900866 | orchestrator | 2025-04-09 10:22:18.900877 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-04-09 10:22:18.900888 | orchestrator | Wednesday 09 April 2025 10:21:03 +0000 (0:00:00.394) 0:01:33.151 ******* 2025-04-09 10:22:18.900900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.900914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.900932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.900950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.900968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.900979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.900990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.901001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 
'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.901012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.901024 | orchestrator | 2025-04-09 10:22:18.901035 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-04-09 10:22:18.901046 | orchestrator | Wednesday 09 April 2025 10:21:04 +0000 (0:00:01.576) 0:01:34.728 ******* 2025-04-09 10:22:18.901057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.901068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.901079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.902522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.902564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.902575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.902585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.902604 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.902614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.902625 | orchestrator | 2025-04-09 10:22:18.902636 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-04-09 10:22:18.902647 | orchestrator | Wednesday 09 April 2025 10:21:09 +0000 (0:00:04.290) 0:01:39.018 ******* 2025-04-09 10:22:18.902658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.902668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.902679 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.902697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.902707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.902722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.902732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.902743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.902753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.902764 | orchestrator | 2025-04-09 10:22:18.902774 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-09 10:22:18.902784 | orchestrator | Wednesday 09 April 2025 10:21:11 +0000 (0:00:02.301) 0:01:41.319 ******* 2025-04-09 10:22:18.902795 | orchestrator | 2025-04-09 10:22:18.902805 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-09 10:22:18.902815 | orchestrator | Wednesday 09 April 2025 10:21:11 +0000 (0:00:00.056) 0:01:41.376 ******* 2025-04-09 10:22:18.902825 | orchestrator | 2025-04-09 10:22:18.902836 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-09 10:22:18.902846 | orchestrator | Wednesday 09 April 2025 10:21:11 +0000 (0:00:00.155) 0:01:41.531 ******* 2025-04-09 10:22:18.902856 | orchestrator | 2025-04-09 10:22:18.902866 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 
2025-04-09 10:22:18.902875 | orchestrator | Wednesday 09 April 2025 10:21:11 +0000 (0:00:00.051) 0:01:41.583 ******* 2025-04-09 10:22:18.902884 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:22:18.902893 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:22:18.902902 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:22:18.902916 | orchestrator | 2025-04-09 10:22:18.902925 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-04-09 10:22:18.902934 | orchestrator | Wednesday 09 April 2025 10:21:19 +0000 (0:00:07.758) 0:01:49.341 ******* 2025-04-09 10:22:18.902942 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:22:18.902951 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:22:18.902959 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:22:18.902968 | orchestrator | 2025-04-09 10:22:18.902977 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-04-09 10:22:18.902986 | orchestrator | Wednesday 09 April 2025 10:21:27 +0000 (0:00:07.668) 0:01:57.009 ******* 2025-04-09 10:22:18.902994 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:22:18.903003 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:22:18.903012 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:22:18.903020 | orchestrator | 2025-04-09 10:22:18.903029 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-04-09 10:22:18.903038 | orchestrator | Wednesday 09 April 2025 10:21:34 +0000 (0:00:06.956) 0:02:03.966 ******* 2025-04-09 10:22:18.903046 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:22:18.903055 | orchestrator | 2025-04-09 10:22:18.903063 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-04-09 10:22:18.903072 | orchestrator | Wednesday 09 April 2025 10:21:34 +0000 (0:00:00.145) 0:02:04.111 ******* 2025-04-09 10:22:18.903081 | 
orchestrator | ok: [testbed-node-1] 2025-04-09 10:22:18.903089 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:22:18.903098 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:22:18.903107 | orchestrator | 2025-04-09 10:22:18.903115 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-04-09 10:22:18.903124 | orchestrator | Wednesday 09 April 2025 10:21:35 +0000 (0:00:00.967) 0:02:05.079 ******* 2025-04-09 10:22:18.903133 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:22:18.903141 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.903150 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:22:18.903158 | orchestrator | 2025-04-09 10:22:18.903167 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-04-09 10:22:18.903176 | orchestrator | Wednesday 09 April 2025 10:21:36 +0000 (0:00:00.843) 0:02:05.923 ******* 2025-04-09 10:22:18.903184 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:22:18.903193 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:22:18.903202 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:22:18.903210 | orchestrator | 2025-04-09 10:22:18.903219 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-04-09 10:22:18.903231 | orchestrator | Wednesday 09 April 2025 10:21:36 +0000 (0:00:00.727) 0:02:06.650 ******* 2025-04-09 10:22:18.903240 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:22:18.903249 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:22:18.903257 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:22:18.903274 | orchestrator | 2025-04-09 10:22:18.903283 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-04-09 10:22:18.903292 | orchestrator | Wednesday 09 April 2025 10:21:37 +0000 (0:00:00.676) 0:02:07.327 ******* 2025-04-09 10:22:18.903315 | orchestrator | ok: [testbed-node-0] 
2025-04-09 10:22:18.903325 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:22:18.903337 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:22:18.903346 | orchestrator | 2025-04-09 10:22:18.903355 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-04-09 10:22:18.903364 | orchestrator | Wednesday 09 April 2025 10:21:38 +0000 (0:00:01.112) 0:02:08.439 ******* 2025-04-09 10:22:18.903372 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:22:18.903381 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:22:18.903389 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:22:18.903398 | orchestrator | 2025-04-09 10:22:18.903407 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-04-09 10:22:18.903415 | orchestrator | Wednesday 09 April 2025 10:21:39 +0000 (0:00:01.187) 0:02:09.627 ******* 2025-04-09 10:22:18.903429 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:22:18.903438 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:22:18.903447 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:22:18.903455 | orchestrator | 2025-04-09 10:22:18.903464 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-04-09 10:22:18.903473 | orchestrator | Wednesday 09 April 2025 10:21:40 +0000 (0:00:00.354) 0:02:09.981 ******* 2025-04-09 10:22:18.903482 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.903495 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.903504 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.903514 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.903527 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.903536 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.903545 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 
'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.903553 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.903567 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.903582 | orchestrator | 2025-04-09 10:22:18.903590 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-04-09 10:22:18.903599 | orchestrator | Wednesday 09 April 2025 10:21:41 +0000 (0:00:01.656) 0:02:11.637 ******* 2025-04-09 10:22:18.903608 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.903617 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.903626 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.903634 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.903643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-09 10:22:18.903652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-04-09 10:22:18.903660 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:22:18.903673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:22:18.903682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:22:18.903695 | orchestrator |
2025-04-09 10:22:18.903704 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-04-09 10:22:18.903713 | orchestrator | Wednesday 09 April 2025 10:21:47 +0000 (0:00:05.377) 0:02:17.015 *******
2025-04-09 10:22:18.903727 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:22:18.903736 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:22:18.903745 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:22:18.903754 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:22:18.903762 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:22:18.903771 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:22:18.903780 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:22:18.903789 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:22:18.903798 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:22:18.903807 | orchestrator |
2025-04-09 10:22:18.903815 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-04-09 10:22:18.903829 | orchestrator | Wednesday 09 April 2025 10:21:50 +0000 (0:00:03.805) 0:02:20.821 *******
2025-04-09 10:22:18.903838 | orchestrator |
2025-04-09 10:22:18.903846 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-04-09 10:22:18.903855 | orchestrator | Wednesday 09 April 2025 10:21:51 +0000 (0:00:00.284) 0:02:21.105 *******
2025-04-09 10:22:18.903864 | orchestrator |
2025-04-09 10:22:18.903872 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-04-09 10:22:18.903881 | orchestrator | Wednesday 09 April 2025 10:21:51 +0000 (0:00:00.083) 0:02:21.188 *******
2025-04-09 10:22:18.903890 | orchestrator |
2025-04-09 10:22:18.903898 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-04-09 10:22:18.903907 | orchestrator | Wednesday 09 April 2025 10:21:51 +0000 (0:00:00.058) 0:02:21.246 *******
2025-04-09 10:22:18.903915 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:22:18.903924 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:22:18.903933 | orchestrator |
2025-04-09 10:22:18.903945 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-04-09 10:22:18.903954 | orchestrator | Wednesday 09 April 2025 10:21:57 +0000 (0:00:06.581) 0:02:27.828 *******
2025-04-09 10:22:18.903963 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:22:18.903971 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:22:18.903980 | orchestrator |
2025-04-09 10:22:18.903989 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-04-09 10:22:18.903997 | orchestrator | Wednesday 09 April 2025 10:22:04 +0000 (0:00:06.357) 0:02:34.185 *******
2025-04-09 10:22:18.904006 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:22:18.904015 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:22:18.904023 | orchestrator |
2025-04-09 10:22:18.904032 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-04-09 10:22:18.904041 | orchestrator | Wednesday 09 April 2025 10:22:11 +0000 (0:00:06.915) 0:02:41.101 *******
2025-04-09 10:22:18.904049 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:22:18.904058 | orchestrator |
2025-04-09 10:22:18.904067 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-04-09 10:22:18.904075 | orchestrator | Wednesday 09 April 2025 10:22:11 +0000 (0:00:00.189) 0:02:41.290 *******
2025-04-09 10:22:18.904084 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:22:18.904093 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:22:18.904101 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:22:18.904110 | orchestrator |
2025-04-09 10:22:18.904119 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-04-09 10:22:18.904127 | orchestrator | Wednesday 09 April 2025 10:22:12 +0000 (0:00:00.940) 0:02:42.231 *******
2025-04-09 10:22:18.904136 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:22:18.904145 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:22:18.904153 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:22:18.904162 | orchestrator |
2025-04-09 10:22:18.904171 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-04-09 10:22:18.904183 | orchestrator | Wednesday 09 April 2025 10:22:13 +0000 (0:00:00.709) 0:02:42.941 *******
2025-04-09 10:22:18.904192 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:22:18.904201 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:22:18.904209 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:22:18.904218 | orchestrator |
2025-04-09 10:22:18.904227 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-04-09 10:22:18.904235 | orchestrator | Wednesday 09 April 2025 10:22:14 +0000 (0:00:01.023) 0:02:43.965 *******
2025-04-09 10:22:18.904244 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:22:18.904253 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:22:18.904261 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:22:18.904270 | orchestrator |
2025-04-09 10:22:18.904278 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-04-09 10:22:18.904287 | orchestrator | Wednesday 09 April 2025 10:22:14 +0000 (0:00:00.788) 0:02:44.753 *******
2025-04-09 10:22:18.904311 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:22:18.904321 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:22:18.904330 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:22:18.904339 | orchestrator |
2025-04-09 10:22:18.904348 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-04-09 10:22:18.904356 | orchestrator | Wednesday 09 April 2025 10:22:15 +0000 (0:00:00.732) 0:02:45.486 *******
2025-04-09 10:22:18.904365 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:22:18.904374 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:22:18.904382 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:22:18.904391 | orchestrator |
2025-04-09 10:22:18.904400 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 10:22:18.904408 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-04-09 10:22:18.904418 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-04-09 10:22:18.904427 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-04-09 10:22:18.904435 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 10:22:18.904444 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 10:22:18.904453 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-09 10:22:18.904462 | orchestrator |
2025-04-09 10:22:18.904471 | orchestrator |
2025-04-09 10:22:18.904479 | orchestrator | TASKS RECAP ********************************************************************
2025-04-09 10:22:18.904488 | orchestrator | Wednesday 09 April 2025 10:22:16 +0000 (0:00:01.295) 0:02:46.782 *******
2025-04-09 10:22:18.904497 | orchestrator | ===============================================================================
2025-04-09 10:22:18.904505 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 33.34s
2025-04-09 10:22:18.904514 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.24s
2025-04-09 10:22:18.904522 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.34s
2025-04-09 10:22:18.904531 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.03s
2025-04-09 10:22:18.904540 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.87s
2025-04-09 10:22:18.904548 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.38s
2025-04-09 10:22:18.904557 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.29s
2025-04-09 10:22:18.904570 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.83s
2025-04-09 10:22:21.949633 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.81s
2025-04-09 10:22:21.949772 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 3.76s
2025-04-09 10:22:21.949790 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.95s
2025-04-09 10:22:21.949803 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.30s
2025-04-09 10:22:21.949816 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.06s
2025-04-09 10:22:21.949829 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.99s
2025-04-09 10:22:21.949841 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.98s
2025-04-09 10:22:21.949854 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.94s
2025-04-09 10:22:21.949867 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.66s
2025-04-09 10:22:21.949910 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.59s
2025-04-09 10:22:21.949923 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.58s
2025-04-09 10:22:21.949936 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.44s
2025-04-09 10:22:21.949949 | orchestrator | 2025-04-09 10:22:18 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:22:21.949963 | orchestrator | 2025-04-09 10:22:18 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:22:21.949976 | orchestrator | 2025-04-09 10:22:18 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:22:21.950007 | orchestrator | 2025-04-09 10:22:21 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:22:21.951634 | orchestrator | 2025-04-09 10:22:21 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:22:25.010539 | orchestrator | 2025-04-09 10:22:21 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:22:25.010708 | orchestrator | 2025-04-09 10:22:25 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:22:25.012894 | orchestrator | 2025-04-09 10:22:25 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:22:25.013215 | orchestrator | 2025-04-09 10:22:25 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:22:28.075131 | orchestrator | 2025-04-09
10:22:28 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:22:28.077386 | orchestrator | 2025-04-09 10:22:28 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:22:31.125846 | orchestrator | 2025-04-09 10:22:28 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:22:31.126011 | orchestrator | 2025-04-09 10:22:31 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:22:31.126753 | orchestrator | 2025-04-09 10:22:31 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:22:34.170848 | orchestrator | 2025-04-09 10:22:31 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:22:34.171010 | orchestrator | 2025-04-09 10:22:34 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:22:34.171499 | orchestrator | 2025-04-09 10:22:34 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:22:37.216695 | orchestrator | 2025-04-09 10:22:34 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:22:37.216878 | orchestrator | 2025-04-09 10:22:37 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:22:37.217655 | orchestrator | 2025-04-09 10:22:37 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:22:40.259932 | orchestrator | 2025-04-09 10:22:37 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:22:40.260064 | orchestrator | 2025-04-09 10:22:40 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:22:40.263387 | orchestrator | 2025-04-09 10:22:40 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:22:43.319913 | orchestrator | 2025-04-09 10:22:40 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:22:43.320058 | orchestrator | 2025-04-09 10:22:43 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state 
STARTED 2025-04-09 10:22:43.320645 | orchestrator | 2025-04-09 10:22:43 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:22:43.321153 | orchestrator | 2025-04-09 10:22:43 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:22:46.372717 | orchestrator | 2025-04-09 10:22:46 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:22:46.373593 | orchestrator | 2025-04-09 10:22:46 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:22:49.428417 | orchestrator | 2025-04-09 10:22:46 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:22:49.428543 | orchestrator | 2025-04-09 10:22:49 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:22:49.430169 | orchestrator | 2025-04-09 10:22:49 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:22:52.489565 | orchestrator | 2025-04-09 10:22:49 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:22:52.489702 | orchestrator | 2025-04-09 10:22:52 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:22:52.490684 | orchestrator | 2025-04-09 10:22:52 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:22:52.491551 | orchestrator | 2025-04-09 10:22:52 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:22:55.550552 | orchestrator | 2025-04-09 10:22:55 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:22:55.551695 | orchestrator | 2025-04-09 10:22:55 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:22:58.598136 | orchestrator | 2025-04-09 10:22:55 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:22:58.598267 | orchestrator | 2025-04-09 10:22:58 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:01.653947 | orchestrator | 2025-04-09 10:22:58 | INFO  
| Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:01.654119 | orchestrator | 2025-04-09 10:22:58 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:01.654159 | orchestrator | 2025-04-09 10:23:01 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:01.655481 | orchestrator | 2025-04-09 10:23:01 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:04.704291 | orchestrator | 2025-04-09 10:23:01 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:04.704452 | orchestrator | 2025-04-09 10:23:04 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:04.705125 | orchestrator | 2025-04-09 10:23:04 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:04.705238 | orchestrator | 2025-04-09 10:23:04 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:07.769852 | orchestrator | 2025-04-09 10:23:07 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:07.776007 | orchestrator | 2025-04-09 10:23:07 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:10.839407 | orchestrator | 2025-04-09 10:23:07 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:10.839530 | orchestrator | 2025-04-09 10:23:10 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:10.840507 | orchestrator | 2025-04-09 10:23:10 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:13.887745 | orchestrator | 2025-04-09 10:23:10 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:13.887883 | orchestrator | 2025-04-09 10:23:13 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:16.936563 | orchestrator | 2025-04-09 10:23:13 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 
10:23:16.936712 | orchestrator | 2025-04-09 10:23:13 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:16.936750 | orchestrator | 2025-04-09 10:23:16 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:16.936893 | orchestrator | 2025-04-09 10:23:16 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:19.983659 | orchestrator | 2025-04-09 10:23:16 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:19.983789 | orchestrator | 2025-04-09 10:23:19 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:19.985879 | orchestrator | 2025-04-09 10:23:19 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:19.986481 | orchestrator | 2025-04-09 10:23:19 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:23.042953 | orchestrator | 2025-04-09 10:23:23 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:26.089185 | orchestrator | 2025-04-09 10:23:23 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:26.089301 | orchestrator | 2025-04-09 10:23:23 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:26.089390 | orchestrator | 2025-04-09 10:23:26 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:26.090566 | orchestrator | 2025-04-09 10:23:26 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:29.136060 | orchestrator | 2025-04-09 10:23:26 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:29.136190 | orchestrator | 2025-04-09 10:23:29 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:29.136377 | orchestrator | 2025-04-09 10:23:29 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:32.186782 | orchestrator | 2025-04-09 10:23:29 | INFO  | Wait 1 second(s) 
until the next check 2025-04-09 10:23:32.186994 | orchestrator | 2025-04-09 10:23:32 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:35.243748 | orchestrator | 2025-04-09 10:23:32 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:35.243860 | orchestrator | 2025-04-09 10:23:32 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:35.243894 | orchestrator | 2025-04-09 10:23:35 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:35.248789 | orchestrator | 2025-04-09 10:23:35 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:35.248970 | orchestrator | 2025-04-09 10:23:35 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:38.299238 | orchestrator | 2025-04-09 10:23:38 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:38.303184 | orchestrator | 2025-04-09 10:23:38 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:41.341861 | orchestrator | 2025-04-09 10:23:38 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:41.342104 | orchestrator | 2025-04-09 10:23:41 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:44.376290 | orchestrator | 2025-04-09 10:23:41 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:44.376431 | orchestrator | 2025-04-09 10:23:41 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:44.376463 | orchestrator | 2025-04-09 10:23:44 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:44.377079 | orchestrator | 2025-04-09 10:23:44 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:44.377170 | orchestrator | 2025-04-09 10:23:44 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:47.431187 | orchestrator | 2025-04-09 
10:23:47 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:47.434222 | orchestrator | 2025-04-09 10:23:47 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:50.491763 | orchestrator | 2025-04-09 10:23:47 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:50.491899 | orchestrator | 2025-04-09 10:23:50 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:50.493138 | orchestrator | 2025-04-09 10:23:50 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:50.493479 | orchestrator | 2025-04-09 10:23:50 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:53.550493 | orchestrator | 2025-04-09 10:23:53 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:53.551708 | orchestrator | 2025-04-09 10:23:53 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:53.552191 | orchestrator | 2025-04-09 10:23:53 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:56.608207 | orchestrator | 2025-04-09 10:23:56 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:56.610166 | orchestrator | 2025-04-09 10:23:56 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:23:56.610644 | orchestrator | 2025-04-09 10:23:56 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:23:59.665836 | orchestrator | 2025-04-09 10:23:59 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:23:59.668266 | orchestrator | 2025-04-09 10:23:59 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:24:02.733021 | orchestrator | 2025-04-09 10:23:59 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:24:02.733152 | orchestrator | 2025-04-09 10:24:02 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state 
STARTED 2025-04-09 10:24:02.734289 | orchestrator | 2025-04-09 10:24:02 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:24:02.734669 | orchestrator | 2025-04-09 10:24:02 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:24:05.789447 | orchestrator | 2025-04-09 10:24:05 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:24:05.790489 | orchestrator | 2025-04-09 10:24:05 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:24:08.841110 | orchestrator | 2025-04-09 10:24:05 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:24:08.841239 | orchestrator | 2025-04-09 10:24:08 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:24:08.844122 | orchestrator | 2025-04-09 10:24:08 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:24:08.844466 | orchestrator | 2025-04-09 10:24:08 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:24:11.884559 | orchestrator | 2025-04-09 10:24:11 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:24:11.888293 | orchestrator | 2025-04-09 10:24:11 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:24:11.888565 | orchestrator | 2025-04-09 10:24:11 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:24:14.945788 | orchestrator | 2025-04-09 10:24:14 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:24:14.948281 | orchestrator | 2025-04-09 10:24:14 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:24:18.005216 | orchestrator | 2025-04-09 10:24:14 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:24:18.005382 | orchestrator | 2025-04-09 10:24:18 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:24:18.006684 | orchestrator | 2025-04-09 10:24:18 | INFO  
| Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:24:21.061620 | orchestrator | 2025-04-09 10:24:18 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:24:21.061787 | orchestrator | 2025-04-09 10:24:21 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:24:21.066294 | orchestrator | 2025-04-09 10:24:21 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:24:24.114590 | orchestrator | 2025-04-09 10:24:21 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:24:24.114716 | orchestrator | 2025-04-09 10:24:24 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:24:24.115536 | orchestrator | 2025-04-09 10:24:24 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:24:27.154291 | orchestrator | 2025-04-09 10:24:24 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:24:27.154465 | orchestrator | 2025-04-09 10:24:27 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:24:27.155148 | orchestrator | 2025-04-09 10:24:27 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:24:30.202925 | orchestrator | 2025-04-09 10:24:27 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:24:30.203056 | orchestrator | 2025-04-09 10:24:30 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:24:30.203236 | orchestrator | 2025-04-09 10:24:30 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:24:33.263649 | orchestrator | 2025-04-09 10:24:30 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:24:33.263778 | orchestrator | 2025-04-09 10:24:33 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:24:33.265230 | orchestrator | 2025-04-09 10:24:33 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 
10:24:33.265957 | orchestrator | 2025-04-09 10:24:33 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:24:36.322639 | orchestrator | 2025-04-09 10:24:36 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:24:36.326660 | orchestrator | 2025-04-09 10:24:36 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:24:39.388470 | orchestrator | 2025-04-09 10:24:36 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:24:39.388608 | orchestrator | 2025-04-09 10:24:39 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:24:39.393165 | orchestrator | 2025-04-09 10:24:39 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:24:39.393807 | orchestrator | 2025-04-09 10:24:39 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:24:42.459833 | orchestrator | 2025-04-09 10:24:42 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:24:42.461509 | orchestrator | 2025-04-09 10:24:42 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:24:42.462100 | orchestrator | 2025-04-09 10:24:42 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:24:45.525239 | orchestrator | 2025-04-09 10:24:45 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:24:45.526088 | orchestrator | 2025-04-09 10:24:45 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:24:45.526361 | orchestrator | 2025-04-09 10:24:45 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:24:48.594969 | orchestrator | 2025-04-09 10:24:48 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:24:48.596106 | orchestrator | 2025-04-09 10:24:48 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED 2025-04-09 10:24:51.652461 | orchestrator | 2025-04-09 10:24:48 | INFO  | Wait 1 second(s) 
until the next check
2025-04-09 10:24:51.652595 | orchestrator | 2025-04-09 10:24:51 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:24:51.655712 | orchestrator | 2025-04-09 10:24:51 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:24:54.711666 | orchestrator | 2025-04-09 10:24:51 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:25:40.468905 | orchestrator | 2025-04-09 10:25:37 | INFO 
| Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:25:40.469026 | orchestrator | 2025-04-09 10:25:37 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:25:40.469063 | orchestrator | 2025-04-09 10:25:40 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:25:43.527867 | orchestrator | 2025-04-09 10:25:40 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:25:43.527971 | orchestrator | 2025-04-09 10:25:40 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:25:43.528031 | orchestrator | 2025-04-09 10:25:43 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:25:43.531726 | orchestrator | 2025-04-09 10:25:43 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state STARTED
2025-04-09 10:25:46.582682 | orchestrator | 2025-04-09 10:25:43 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:25:46.582814 | orchestrator | 2025-04-09 10:25:46 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:25:46.587391 | orchestrator | 2025-04-09 10:25:46 | INFO  | Task a679efa9-250d-48dc-ab75-6b36cc399adf is in state SUCCESS
2025-04-09 10:25:46.588842 | orchestrator |
2025-04-09 10:25:46.588883 | orchestrator |
2025-04-09 10:25:46.588900 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-09 10:25:46.588916 | orchestrator |
2025-04-09 10:25:46.588948 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-09 10:25:46.588963 | orchestrator | Wednesday 09 April 2025 10:17:48 +0000 (0:00:00.548) 0:00:00.548 *******
2025-04-09 10:25:46.589043 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:25:46.589067 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:25:46.589083 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:25:46.589099 | orchestrator |
2025-04-09 10:25:46.589419 | orchestrator | 
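The log above shows the osism client polling two background task IDs every few seconds until they leave the STARTED state, then resuming once both report SUCCESS. A minimal sketch of that kind of poll loop (`get_task_state` is a hypothetical stand-in for the real client call; the actual osism API may differ):

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600.0):
    """Poll each task until it leaves STARTED; return the final states.

    get_task_state(task_id) -> str is a hypothetical callback standing in
    for whatever client call fetches the Celery-style task state.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states
```

With a fake state source that returns STARTED twice and then SUCCESS, the loop produces the same alternating "is in state" / "Wait 1 second(s)" rhythm seen in the log.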
TASK [Group hosts based on enabled services] ***********************************
2025-04-09 10:25:46.589436 | orchestrator | Wednesday 09 April 2025 10:17:49 +0000 (0:00:01.403) 0:00:01.951 *******
2025-04-09 10:25:46.589453 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-04-09 10:25:46.589469 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-04-09 10:25:46.589494 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-04-09 10:25:46.589510 | orchestrator |
2025-04-09 10:25:46.589525 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-04-09 10:25:46.589540 | orchestrator |
2025-04-09 10:25:46.589555 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-04-09 10:25:46.589728 | orchestrator | Wednesday 09 April 2025 10:17:51 +0000 (0:00:01.633) 0:00:03.585 *******
2025-04-09 10:25:46.591482 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-09 10:25:46.591526 | orchestrator |
2025-04-09 10:25:46.591542 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-04-09 10:25:46.591557 | orchestrator | Wednesday 09 April 2025 10:17:52 +0000 (0:00:02.271) 0:00:05.049 *******
2025-04-09 10:25:46.591572 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:25:46.591588 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:25:46.591603 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:25:46.591618 | orchestrator |
2025-04-09 10:25:46.591633 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-04-09 10:25:46.591648 | orchestrator | Wednesday 09 April 2025 10:17:54 +0000 (0:00:02.271) 0:00:07.320 *******
2025-04-09 10:25:46.591664 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-09 10:25:46.591679 | orchestrator |
2025-04-09 10:25:46.591693 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-04-09 10:25:46.591709 | orchestrator | Wednesday 09 April 2025 10:17:56 +0000 (0:00:01.231) 0:00:08.551 *******
2025-04-09 10:25:46.591723 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:25:46.591738 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:25:46.592152 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:25:46.592170 | orchestrator |
2025-04-09 10:25:46.592184 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-04-09 10:25:46.592198 | orchestrator | Wednesday 09 April 2025 10:17:58 +0000 (0:00:01.832) 0:00:10.384 *******
2025-04-09 10:25:46.592213 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-04-09 10:25:46.592227 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-04-09 10:25:46.592262 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-04-09 10:25:46.592277 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-04-09 10:25:46.592291 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-04-09 10:25:46.592305 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-04-09 10:25:46.592319 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-04-09 10:25:46.592335 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-04-09 10:25:46.592390 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-04-09 10:25:46.592406 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-04-09 10:25:46.592420 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-04-09 10:25:46.592434 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-04-09 10:25:46.592449 | orchestrator |
2025-04-09 10:25:46.592516 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-04-09 10:25:46.592532 | orchestrator | Wednesday 09 April 2025 10:18:01 +0000 (0:00:03.667) 0:00:14.052 *******
2025-04-09 10:25:46.592610 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-04-09 10:25:46.592658 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-04-09 10:25:46.592673 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-04-09 10:25:46.592687 | orchestrator |
2025-04-09 10:25:46.592701 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-04-09 10:25:46.593284 | orchestrator | Wednesday 09 April 2025 10:18:03 +0000 (0:00:01.538) 0:00:15.590 *******
2025-04-09 10:25:46.593306 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-04-09 10:25:46.593321 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-04-09 10:25:46.593335 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-04-09 10:25:46.593374 | orchestrator |
2025-04-09 10:25:46.593389 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-04-09 10:25:46.593403 | orchestrator | Wednesday 09 April 2025 10:18:05 +0000 (0:00:01.923) 0:00:17.514 *******
2025-04-09 10:25:46.593418 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-04-09 10:25:46.593495 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.593839 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-04-09 10:25:46.593866 | orchestrator
| skipping: [testbed-node-1] 2025-04-09 10:25:46.593882 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-04-09 10:25:46.593896 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.593911 | orchestrator | 2025-04-09 10:25:46.593926 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-04-09 10:25:46.593941 | orchestrator | Wednesday 09 April 2025 10:18:06 +0000 (0:00:01.182) 0:00:18.696 ******* 2025-04-09 10:25:46.593957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-09 10:25:46.593980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-09 10:25:46.594047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-09 10:25:46.594066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-09 10:25:46.594082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-09 10:25:46.594179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-09 10:25:46.594203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-09 10:25:46.594220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-09 10:25:46.596205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-09 10:25:46.596237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-09 10:25:46.596271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-09 10:25:46.596287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-09 10:25:46.596300 | orchestrator | 2025-04-09 10:25:46.596314 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-04-09 10:25:46.596328 | orchestrator | Wednesday 09 April 2025 10:18:10 +0000 (0:00:03.724) 0:00:22.420 ******* 2025-04-09 10:25:46.596479 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.596496 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.596556 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.596571 | orchestrator | 2025-04-09 10:25:46.596583 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-04-09 10:25:46.596596 | orchestrator | Wednesday 09 April 2025 10:18:12 +0000 (0:00:02.219) 0:00:24.639 ******* 2025-04-09 10:25:46.596619 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-04-09 10:25:46.596632 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-04-09 10:25:46.596645 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-04-09 10:25:46.596658 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-04-09 10:25:46.596670 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-04-09 10:25:46.596683 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-04-09 10:25:46.596695 | orchestrator | 2025-04-09 10:25:46.596719 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-04-09 10:25:46.596732 | orchestrator | Wednesday 09 April 2025 10:18:15 +0000 (0:00:02.982) 0:00:27.622 ******* 2025-04-09 10:25:46.596745 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.596757 | orchestrator | 
changed: [testbed-node-1] 2025-04-09 10:25:46.596777 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.596790 | orchestrator | 2025-04-09 10:25:46.596804 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-04-09 10:25:46.596818 | orchestrator | Wednesday 09 April 2025 10:18:17 +0000 (0:00:02.066) 0:00:29.688 ******* 2025-04-09 10:25:46.596832 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:25:46.596847 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:25:46.596860 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:25:46.596874 | orchestrator | 2025-04-09 10:25:46.596897 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-04-09 10:25:46.596911 | orchestrator | Wednesday 09 April 2025 10:18:20 +0000 (0:00:03.555) 0:00:33.243 ******* 2025-04-09 10:25:46.596925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-09 10:25:46.596938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-09 10:25:46.596951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-09 10:25:46.596963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-09 10:25:46.596982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-09 10:25:46.597001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-09 10:25:46.597014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-09 10:25:46.597026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  
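The container definitions above attach healthchecks such as `healthcheck_listen proxysql 6032` and `healthcheck_curl` against the node's API address, which pass when the service is listening on the port or answering HTTP. A rough stand-in for the listen-style check using only a TCP connect (unlike the real Kolla helper, it does not verify which process owns the socket):

```python
import socket

def port_is_listening(host, port, timeout=1.0):
    """Crude healthcheck: True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Inside the containers the helpers also encode retries, intervals, and start periods; those correspond to the 'healthcheck' dimensions (interval 30, retries 3, start_period 5, timeout 30) shown in each item above.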
2025-04-09 10:25:46.597082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-09 10:25:46.597094 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.597130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-09 10:25:46.597143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-09 10:25:46.597160 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.597227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-09 10:25:46.597241 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.597251 | orchestrator | 2025-04-09 10:25:46.597295 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-04-09 10:25:46.597306 | orchestrator | Wednesday 09 April 2025 10:18:24 +0000 (0:00:03.592) 0:00:36.835 ******* 2025-04-09 10:25:46.597317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-09 10:25:46.597328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-09 10:25:46.597353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-09 10:25:46.597365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-09 10:25:46.597376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-09 10:25:46.597426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-09 10:25:46.597438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-09 10:25:46.597449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-09 10:25:46.597460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-09 10:25:46.597471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-09 10:25:46.597482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-09 10:25:46.597493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-09 10:25:46.597509 | orchestrator | 2025-04-09 10:25:46.597520 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-04-09 10:25:46.597531 | orchestrator | Wednesday 09 April 2025 10:18:30 +0000 (0:00:06.368) 0:00:43.204 ******* 2025-04-09 10:25:46.597547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-09 10:25:46.597559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-09 10:25:46.597574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-09 10:25:46.597585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-09 10:25:46.597596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-09 10:25:46.597613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-09 10:25:46.597629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-09 10:25:46.597640 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-09 10:25:46.597654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-09 10:25:46.597665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-09 10:25:46.597676 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-09 10:25:46.597687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-09 10:25:46.597703 | orchestrator | 2025-04-09 10:25:46.597714 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-04-09 10:25:46.597724 | orchestrator | Wednesday 09 April 2025 10:18:34 +0000 (0:00:03.945) 0:00:47.149 ******* 2025-04-09 10:25:46.597735 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-09 10:25:46.597746 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-09 10:25:46.597805 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-09 10:25:46.597818 | orchestrator | 2025-04-09 
10:25:46.597828 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-04-09 10:25:46.597839 | orchestrator | Wednesday 09 April 2025 10:18:37 +0000 (0:00:02.402) 0:00:49.552 ******* 2025-04-09 10:25:46.597849 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-09 10:25:46.597913 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-09 10:25:46.597929 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-09 10:25:46.597940 | orchestrator | 2025-04-09 10:25:46.597950 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-04-09 10:25:46.597961 | orchestrator | Wednesday 09 April 2025 10:18:42 +0000 (0:00:05.698) 0:00:55.250 ******* 2025-04-09 10:25:46.597971 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.597981 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.597992 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.598002 | orchestrator | 2025-04-09 10:25:46.598012 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-04-09 10:25:46.598062 | orchestrator | Wednesday 09 April 2025 10:18:45 +0000 (0:00:02.911) 0:00:58.162 ******* 2025-04-09 10:25:46.598073 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-09 10:25:46.598085 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-09 10:25:46.598096 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-09 10:25:46.598107 | orchestrator | 2025-04-09 
10:25:46.598117 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-04-09 10:25:46.598127 | orchestrator | Wednesday 09 April 2025 10:18:49 +0000 (0:00:03.571) 0:01:01.733 ******* 2025-04-09 10:25:46.598138 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-09 10:25:46.598149 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-09 10:25:46.598160 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-09 10:25:46.598170 | orchestrator | 2025-04-09 10:25:46.598180 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-04-09 10:25:46.598190 | orchestrator | Wednesday 09 April 2025 10:18:52 +0000 (0:00:03.562) 0:01:05.296 ******* 2025-04-09 10:25:46.598201 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-04-09 10:25:46.598211 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-04-09 10:25:46.598228 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-04-09 10:25:46.598238 | orchestrator | 2025-04-09 10:25:46.598249 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-04-09 10:25:46.598259 | orchestrator | Wednesday 09 April 2025 10:18:55 +0000 (0:00:02.243) 0:01:07.539 ******* 2025-04-09 10:25:46.598269 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-04-09 10:25:46.598279 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-04-09 10:25:46.598321 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-04-09 10:25:46.598331 | orchestrator | 2025-04-09 10:25:46.598355 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 
2025-04-09 10:25:46.598366 | orchestrator | Wednesday 09 April 2025 10:18:57 +0000 (0:00:02.379) 0:01:09.918 ******* 2025-04-09 10:25:46.598432 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.598444 | orchestrator | 2025-04-09 10:25:46.598454 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-04-09 10:25:46.598465 | orchestrator | Wednesday 09 April 2025 10:18:58 +0000 (0:00:00.934) 0:01:10.853 ******* 2025-04-09 10:25:46.598476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-09 10:25:46.598487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-09 10:25:46.598510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-09 10:25:46.598522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-09 10:25:46.598562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-09 10:25:46.598581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-09 10:25:46.598592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-09 10:25:46.598603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-09 10:25:46.598613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-09 10:25:46.598624 | orchestrator | 2025-04-09 10:25:46.598634 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-04-09 10:25:46.598645 | orchestrator | Wednesday 09 April 2025 10:19:01 +0000 (0:00:03.278) 0:01:14.132 ******* 2025-04-09 10:25:46.598666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-09 10:25:46.598677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-09 10:25:46.598693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-09 10:25:46.598704 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.598714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-09 10:25:46.598725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-09 10:25:46.598736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-09 10:25:46.598747 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.598769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-09 10:25:46.598780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-09 10:25:46.598796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-09 10:25:46.598807 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.598817 | orchestrator | 2025-04-09 10:25:46.598827 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-04-09 10:25:46.598838 | orchestrator | Wednesday 09 April 2025 10:19:02 +0000 (0:00:00.844) 0:01:14.976 ******* 2025-04-09 10:25:46.598848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-09 10:25:46.598859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2025-04-09 10:25:46.598870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-09 10:25:46.598880 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.598894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-04-09 10:25:46.598912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-04-09 10:25:46.598929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-09 10:25:46.598940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-09 10:25:46.598951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-09 10:25:46.598961 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.598972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-09 10:25:46.598982 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.599022 | orchestrator |
2025-04-09 10:25:46.599034 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-04-09 10:25:46.599045 | orchestrator | Wednesday 09 April 2025 10:19:04 +0000 (0:00:01.452) 0:01:16.428 *******
2025-04-09 10:25:46.599055 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-04-09 10:25:46.599066 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-04-09 10:25:46.599076 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-04-09 10:25:46.599086 | orchestrator |
2025-04-09 10:25:46.599097 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-04-09 10:25:46.599107 | orchestrator | Wednesday 09 April 2025 10:19:06 +0000 (0:00:02.259) 0:01:18.687 *******
2025-04-09 10:25:46.599117 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-04-09 10:25:46.599127 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-04-09 10:25:46.599138 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-04-09 10:25:46.599148 | orchestrator |
2025-04-09 10:25:46.599163 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-04-09 10:25:46.599178 | orchestrator | Wednesday 09 April 2025 10:19:09 +0000 (0:00:03.267) 0:01:21.955 *******
2025-04-09 10:25:46.599189 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-04-09 10:25:46.599207 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-04-09 10:25:46.599219 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-04-09 10:25:46.599229 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-04-09 10:25:46.599239 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.599250 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-04-09 10:25:46.599260 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.599270 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-04-09 10:25:46.599280 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.599291 | orchestrator |
2025-04-09 10:25:46.599301 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-04-09 10:25:46.599312 | orchestrator | Wednesday 09 April 2025 10:19:15 +0000 (0:00:06.255) 0:01:28.211 *******
2025-04-09 10:25:46.599322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-04-09 10:25:46.599334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-04-09 10:25:46.599443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-04-09 10:25:46.599456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-09 10:25:46.599467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-09 10:25:46.599499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-09 10:25:46.599511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-09 10:25:46.599540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-04-09 10:25:46.599582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-09 10:25:46.599595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-04-09 10:25:46.599606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-09 10:25:46.599623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720', '__omit_place_holder__e0ebc61774108e28afc0bbcbd9e6b582b0b2f720'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-04-09 10:25:46.599634 | orchestrator |
2025-04-09 10:25:46.599649 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-04-09 10:25:46.599660 | orchestrator | Wednesday 09 April 2025 10:19:18 +0000 (0:00:02.758) 0:01:30.970 *******
2025-04-09 10:25:46.599670 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-09 10:25:46.599681 | orchestrator |
2025-04-09 10:25:46.599691 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-04-09 10:25:46.599702 | orchestrator | Wednesday 09 April 2025 10:19:19 +0000 (0:00:00.891) 0:01:31.861 *******
2025-04-09 10:25:46.599713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-04-09 10:25:46.599734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-04-09 10:25:46.599745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.599757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-04-09 10:25:46.599774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.599789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-04-09 10:25:46.599800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.599811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.599829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-04-09 10:25:46.599840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-04-09 10:25:46.599857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.599868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.599879 | orchestrator |
2025-04-09 10:25:46.599889 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-04-09 10:25:46.599900 | orchestrator | Wednesday 09 April 2025 10:19:26 +0000 (0:00:06.917) 0:01:38.778 *******
2025-04-09 10:25:46.599920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-04-09 10:25:46.599930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-04-09 10:25:46.599939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.599948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.599964 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.599973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-04-09 10:25:46.599987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-04-09 10:25:46.599997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-04-09 10:25:46.600012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-04-09 10:25:46.600021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.600030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.600044 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.600053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.600062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.600071 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.600080 | orchestrator |
2025-04-09 10:25:46.600088 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-04-09 10:25:46.600097 | orchestrator | Wednesday 09 April 2025 10:19:28 +0000 (0:00:02.125) 0:01:40.904 *******
2025-04-09 10:25:46.600106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-04-09 10:25:46.600119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-04-09 10:25:46.600129 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.600138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-04-09 10:25:46.600147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-04-09 10:25:46.600156 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.600164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-04-09 10:25:46.600173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-04-09 10:25:46.600182 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.600191 | orchestrator |
2025-04-09 10:25:46.600199 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-04-09 10:25:46.600208 | orchestrator | Wednesday 09 April 2025 10:19:30 +0000 (0:00:02.118) 0:01:43.023 *******
2025-04-09 10:25:46.600217 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:25:46.600225 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:25:46.600234 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:25:46.600243 | orchestrator |
2025-04-09 10:25:46.600251 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-04-09 10:25:46.600260 | orchestrator | Wednesday 09 April 2025 10:19:32 +0000 (0:00:01.728) 0:01:44.751 *******
2025-04-09 10:25:46.600273 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:25:46.600282 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:25:46.600290 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:25:46.600299 | orchestrator |
2025-04-09 10:25:46.600308 | orchestrator | TASK [include_role : barbican] *************************************************
2025-04-09 10:25:46.600317 | orchestrator | Wednesday 09 April 2025 10:19:35 +0000 (0:00:03.154) 0:01:47.905 *******
2025-04-09 10:25:46.600325 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-09 10:25:46.600334 | orchestrator |
2025-04-09 10:25:46.600357 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-04-09 10:25:46.600367 | orchestrator | Wednesday 09 April 2025 10:19:36 +0000 (0:00:01.289) 0:01:49.195 *******
2025-04-09 10:25:46.600376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-09 10:25:46.600393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.600407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.600417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-04-09 10:25:46.600436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.600446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.600455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.600465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.600479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.600488 | orchestrator | 2025-04-09 10:25:46.600497 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-04-09 10:25:46.600506 | orchestrator | Wednesday 09 April 2025 10:19:44 +0000 (0:00:07.428) 0:01:56.624 ******* 2025-04-09 10:25:46.600524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.600534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.600543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.600552 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.600561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.600574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.600594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.600603 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.600612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.600621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.600630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.600639 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.600648 | orchestrator | 2025-04-09 10:25:46.600657 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-04-09 10:25:46.600666 | orchestrator | Wednesday 09 April 2025 10:19:45 +0000 (0:00:01.342) 0:01:57.966 ******* 2025-04-09 10:25:46.600675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-09 10:25:46.600688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-09 10:25:46.600697 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.600706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-09 10:25:46.600724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-09 10:25:46.600734 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.600742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-09 10:25:46.600751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-09 10:25:46.600760 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.600769 | orchestrator | 2025-04-09 10:25:46.600777 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-04-09 10:25:46.600786 | orchestrator | Wednesday 09 April 2025 10:19:47 +0000 (0:00:02.043) 0:02:00.009 ******* 2025-04-09 10:25:46.600795 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.600804 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.600812 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.600821 | orchestrator | 2025-04-09 10:25:46.600830 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-04-09 10:25:46.600839 | orchestrator | 
Wednesday 09 April 2025 10:19:49 +0000 (0:00:01.491) 0:02:01.501 ******* 2025-04-09 10:25:46.600847 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.600856 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.600865 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.600874 | orchestrator | 2025-04-09 10:25:46.600882 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-04-09 10:25:46.600891 | orchestrator | Wednesday 09 April 2025 10:19:51 +0000 (0:00:02.418) 0:02:03.920 ******* 2025-04-09 10:25:46.600900 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.600908 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.600917 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.600926 | orchestrator | 2025-04-09 10:25:46.600934 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-04-09 10:25:46.600946 | orchestrator | Wednesday 09 April 2025 10:19:52 +0000 (0:00:01.106) 0:02:05.026 ******* 2025-04-09 10:25:46.600955 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.600964 | orchestrator | 2025-04-09 10:25:46.600972 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-04-09 10:25:46.600981 | orchestrator | Wednesday 09 April 2025 10:19:54 +0000 (0:00:01.551) 0:02:06.577 ******* 2025-04-09 10:25:46.600990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-09 10:25:46.601005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-09 10:25:46.601026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-09 10:25:46.601036 | orchestrator | 2025-04-09 10:25:46.601044 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-04-09 10:25:46.601053 | orchestrator | Wednesday 09 April 2025 10:19:57 +0000 (0:00:03.313) 0:02:09.891 ******* 2025-04-09 10:25:46.601062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-09 10:25:46.601071 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.601086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-09 10:25:46.601095 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.601104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-09 10:25:46.601118 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.601127 | orchestrator | 2025-04-09 10:25:46.601136 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-04-09 10:25:46.601144 | orchestrator | Wednesday 09 April 2025 10:19:59 +0000 (0:00:01.912) 0:02:11.803 ******* 2025-04-09 10:25:46.601159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-09 10:25:46.601168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-09 10:25:46.601179 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.601187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-09 10:25:46.601196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-09 10:25:46.601205 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.601218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-09 10:25:46.601227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-09 10:25:46.601236 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.601245 | orchestrator | 2025-04-09 10:25:46.601253 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-04-09 10:25:46.601262 | orchestrator | Wednesday 09 April 2025 10:20:01 +0000 (0:00:02.454) 0:02:14.258 ******* 2025-04-09 10:25:46.601271 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.601279 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.601288 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.601297 | orchestrator | 2025-04-09 10:25:46.601310 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-04-09 10:25:46.601319 | orchestrator | Wednesday 09 April 2025 10:20:02 +0000 (0:00:00.700) 0:02:14.958 ******* 2025-04-09 10:25:46.601327 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.601336 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.601357 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.601366 | orchestrator | 2025-04-09 10:25:46.601375 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-04-09 10:25:46.601384 | orchestrator | Wednesday 09 April 2025 10:20:04 +0000 (0:00:01.637) 0:02:16.596 ******* 2025-04-09 10:25:46.601392 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.601401 | orchestrator | 2025-04-09 10:25:46.601410 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-04-09 
10:25:46.601418 | orchestrator | Wednesday 09 April 2025 10:20:05 +0000 (0:00:00.847) 0:02:17.443 ******* 2025-04-09 10:25:46.601431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.601442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.601451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.601467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.601484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.601494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.601507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.601516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.601525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.601545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 
 2025-04-09 10:25:46.601554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.601564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.601572 | orchestrator | 2025-04-09 10:25:46.601585 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-04-09 10:25:46.601594 | orchestrator | Wednesday 09 April 2025 10:20:09 +0000 (0:00:04.803) 0:02:22.246 ******* 2025-04-09 10:25:46.601603 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.601612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.601627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.601642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.601651 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.601663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.601673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.601682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.601697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.601712 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.601721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.601730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.601744 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602094 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.602103 | orchestrator | 2025-04-09 10:25:46.602111 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-04-09 10:25:46.602120 | orchestrator | Wednesday 09 April 2025 10:20:10 +0000 (0:00:01.073) 0:02:23.320 ******* 2025-04-09 10:25:46.602135 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-09 10:25:46.602144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-09 10:25:46.602153 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.602161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-09 10:25:46.602170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-09 10:25:46.602179 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.602187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-09 10:25:46.602196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-09 10:25:46.602204 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.602213 | orchestrator | 2025-04-09 10:25:46.602221 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-04-09 10:25:46.602229 | orchestrator | Wednesday 09 April 2025 10:20:12 +0000 (0:00:01.243) 0:02:24.563 ******* 2025-04-09 10:25:46.602238 
| orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.602246 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.602254 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.602262 | orchestrator | 2025-04-09 10:25:46.602271 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-04-09 10:25:46.602279 | orchestrator | Wednesday 09 April 2025 10:20:13 +0000 (0:00:01.744) 0:02:26.308 ******* 2025-04-09 10:25:46.602287 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.602296 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.602304 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.602313 | orchestrator | 2025-04-09 10:25:46.602321 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-04-09 10:25:46.602329 | orchestrator | Wednesday 09 April 2025 10:20:16 +0000 (0:00:02.921) 0:02:29.229 ******* 2025-04-09 10:25:46.602338 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.602364 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.602373 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.602381 | orchestrator | 2025-04-09 10:25:46.602390 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-04-09 10:25:46.602398 | orchestrator | Wednesday 09 April 2025 10:20:17 +0000 (0:00:00.559) 0:02:29.789 ******* 2025-04-09 10:25:46.602406 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.602415 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.602423 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.602431 | orchestrator | 2025-04-09 10:25:46.602440 | orchestrator | TASK [include_role : designate] ************************************************ 2025-04-09 10:25:46.602448 | orchestrator | Wednesday 09 April 2025 10:20:18 +0000 (0:00:00.899) 0:02:30.689 ******* 2025-04-09 10:25:46.602468 | 
orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.602477 | orchestrator | 2025-04-09 10:25:46.602485 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-04-09 10:25:46.602494 | orchestrator | Wednesday 09 April 2025 10:20:20 +0000 (0:00:02.004) 0:02:32.693 ******* 2025-04-09 10:25:46.602512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-09 10:25:46.602521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-09 10:25:46.602531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-09 10:25:46.602539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-09 10:25:46.602572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 
'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-09 10:25:46.602700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-09 10:25:46.602708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  
2025-04-09 10:25:46.602717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 
'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602766 | orchestrator | 2025-04-09 10:25:46.602775 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-04-09 10:25:46.602783 | orchestrator | Wednesday 09 April 2025 10:20:25 +0000 (0:00:04.779) 0:02:37.472 ******* 2025-04-09 10:25:46.602791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-09 10:25:46.602800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-09 10:25:46.602808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-09 10:25:46.602868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-09 10:25:46.602877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602890 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.602905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.602955 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.602964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-09 10:25:46.602982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-09 10:25:46.602995 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.603004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.603013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.603022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.603030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.603039 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.603048 | orchestrator | 2025-04-09 10:25:46.603057 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-04-09 10:25:46.603066 | orchestrator | Wednesday 09 April 2025 10:20:26 +0000 (0:00:01.255) 0:02:38.728 ******* 2025-04-09 10:25:46.603080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-04-09 10:25:46.603089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-04-09 10:25:46.603098 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.603107 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-04-09 10:25:46.603116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-04-09 10:25:46.603125 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.603134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-04-09 10:25:46.603146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-04-09 10:25:46.603155 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.603163 | orchestrator | 2025-04-09 10:25:46.603172 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-04-09 10:25:46.603184 | orchestrator | Wednesday 09 April 2025 10:20:27 +0000 (0:00:01.453) 0:02:40.182 ******* 2025-04-09 10:25:46.603193 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.603202 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.603211 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.603219 | orchestrator | 2025-04-09 10:25:46.603228 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-04-09 10:25:46.603237 | orchestrator | Wednesday 09 April 2025 10:20:28 +0000 (0:00:01.136) 0:02:41.318 ******* 2025-04-09 10:25:46.603246 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.603254 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.603263 | orchestrator | changed: [testbed-node-2] 
2025-04-09 10:25:46.603272 | orchestrator | 2025-04-09 10:25:46.603280 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-04-09 10:25:46.603289 | orchestrator | Wednesday 09 April 2025 10:20:31 +0000 (0:00:02.553) 0:02:43.872 ******* 2025-04-09 10:25:46.603298 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.603307 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.603315 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.603324 | orchestrator | 2025-04-09 10:25:46.603332 | orchestrator | TASK [include_role : glance] *************************************************** 2025-04-09 10:25:46.603357 | orchestrator | Wednesday 09 April 2025 10:20:32 +0000 (0:00:00.553) 0:02:44.426 ******* 2025-04-09 10:25:46.603366 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.603375 | orchestrator | 2025-04-09 10:25:46.603384 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-04-09 10:25:46.603393 | orchestrator | Wednesday 09 April 2025 10:20:33 +0000 (0:00:01.192) 0:02:45.618 ******* 2025-04-09 10:25:46.603410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 
'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-09 10:25:46.603439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-09 10:25:46.603450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-09 10:25:46.603472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-09 10:25:46.603483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-09 10:25:46.603496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-09 10:25:46.603511 | orchestrator | 2025-04-09 10:25:46.603520 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-04-09 10:25:46.603528 | orchestrator | Wednesday 09 April 2025 10:20:38 +0000 (0:00:05.395) 0:02:51.014 ******* 2025-04-09 10:25:46.603542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-09 10:25:46.603561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-09 10:25:46.603571 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.603585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-09 10:25:46.603600 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-09 10:25:46.603619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-09 10:25:46.603634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-09 10:25:46.603648 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.603656 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.603665 | orchestrator | 2025-04-09 10:25:46.603673 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-04-09 10:25:46.603682 | orchestrator | Wednesday 09 April 2025 10:20:43 +0000 (0:00:04.890) 0:02:55.905 ******* 2025-04-09 
10:25:46.603692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-09 10:25:46.603701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-09 10:25:46.603710 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.603723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-09 10:25:46.603732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-09 10:25:46.603741 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.603750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-09 10:25:46.603763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-09 10:25:46.603772 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.603780 | orchestrator | 2025-04-09 10:25:46.603789 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-04-09 10:25:46.603797 | orchestrator | Wednesday 09 April 2025 10:20:47 +0000 (0:00:04.015) 0:02:59.921 ******* 2025-04-09 10:25:46.603806 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.603814 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.603823 | orchestrator | changed: 
[testbed-node-2] 2025-04-09 10:25:46.603831 | orchestrator | 2025-04-09 10:25:46.603840 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-04-09 10:25:46.603848 | orchestrator | Wednesday 09 April 2025 10:20:49 +0000 (0:00:01.596) 0:03:01.517 ******* 2025-04-09 10:25:46.603857 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.603865 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.603874 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.603884 | orchestrator | 2025-04-09 10:25:46.603893 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-04-09 10:25:46.603901 | orchestrator | Wednesday 09 April 2025 10:20:51 +0000 (0:00:02.572) 0:03:04.090 ******* 2025-04-09 10:25:46.603910 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.603920 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.603929 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.603937 | orchestrator | 2025-04-09 10:25:46.603946 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-04-09 10:25:46.603954 | orchestrator | Wednesday 09 April 2025 10:20:52 +0000 (0:00:00.778) 0:03:04.869 ******* 2025-04-09 10:25:46.603963 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.603971 | orchestrator | 2025-04-09 10:25:46.603980 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-04-09 10:25:46.603988 | orchestrator | Wednesday 09 April 2025 10:20:53 +0000 (0:00:01.075) 0:03:05.945 ******* 2025-04-09 10:25:46.603997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-09 10:25:46.604011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-09 10:25:46.604027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-09 10:25:46.604036 | orchestrator | 2025-04-09 10:25:46.604044 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 
2025-04-09 10:25:46.604053 | orchestrator | Wednesday 09 April 2025 10:20:59 +0000 (0:00:05.757) 0:03:11.703 ******* 2025-04-09 10:25:46.604067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-09 10:25:46.604077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-09 10:25:46.604086 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.604094 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.604103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-09 10:25:46.604112 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.604120 | orchestrator | 2025-04-09 10:25:46.604132 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-04-09 10:25:46.604141 | orchestrator | Wednesday 09 April 2025 10:21:00 +0000 (0:00:00.776) 0:03:12.480 ******* 2025-04-09 10:25:46.604150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-04-09 10:25:46.604158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-04-09 10:25:46.604171 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.604180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-04-09 10:25:46.604192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-04-09 10:25:46.604308 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.604320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}})  2025-04-09 10:25:46.604328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-04-09 10:25:46.604336 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.604363 | orchestrator | 2025-04-09 10:25:46.604372 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-04-09 10:25:46.604380 | orchestrator | Wednesday 09 April 2025 10:21:00 +0000 (0:00:00.880) 0:03:13.361 ******* 2025-04-09 10:25:46.604388 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.604396 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.604404 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.604412 | orchestrator | 2025-04-09 10:25:46.604420 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-04-09 10:25:46.604429 | orchestrator | Wednesday 09 April 2025 10:21:02 +0000 (0:00:01.416) 0:03:14.777 ******* 2025-04-09 10:25:46.604437 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.604445 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.604453 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.604461 | orchestrator | 2025-04-09 10:25:46.604469 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-04-09 10:25:46.604477 | orchestrator | Wednesday 09 April 2025 10:21:04 +0000 (0:00:01.990) 0:03:16.767 ******* 2025-04-09 10:25:46.604485 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.604493 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.604502 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.604510 | orchestrator | 2025-04-09 10:25:46.604518 | orchestrator | TASK [include_role : horizon] 
************************************************** 2025-04-09 10:25:46.604526 | orchestrator | Wednesday 09 April 2025 10:21:04 +0000 (0:00:00.263) 0:03:17.031 ******* 2025-04-09 10:25:46.604534 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.604542 | orchestrator | 2025-04-09 10:25:46.604550 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-04-09 10:25:46.604558 | orchestrator | Wednesday 09 April 2025 10:21:05 +0000 (0:00:01.092) 0:03:18.123 ******* 2025-04-09 10:25:46.604567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-09 10:25:46.604642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-09 10:25:46.604663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-09 10:25:46.604684 | orchestrator | 2025-04-09 10:25:46.604692 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-04-09 10:25:46.604701 | orchestrator | Wednesday 09 April 2025 10:21:10 +0000 (0:00:04.526) 0:03:22.650 ******* 2025-04-09 10:25:46.604709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-09 10:25:46.604718 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.604731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-09 10:25:46.604750 | orchestrator | skipping: [testbed-node-1] 
2025-04-09 10:25:46.604759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-09 10:25:46.604772 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.604780 | orchestrator | 2025-04-09 10:25:46.604788 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-04-09 10:25:46.604796 | orchestrator | Wednesday 09 April 2025 10:21:11 +0000 (0:00:00.806) 0:03:23.457 ******* 2025-04-09 10:25:46.604804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-09 10:25:46.604814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-09 10:25:46.604823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-09 10:25:46.604836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-09 10:25:46.604844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-09 10:25:46.604853 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.604864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-09 10:25:46.604873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-09 10:25:46.604882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-09 10:25:46.604890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-09 10:25:46.604898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-09 10:25:46.604910 
| orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.604919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-09 10:25:46.604934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-09 10:25:46.604944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-09 10:25:46.604953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-09 10:25:46.604963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-09 10:25:46.604972 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.604981 | orchestrator | 2025-04-09 10:25:46.604991 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-04-09 10:25:46.605000 | orchestrator | 
Wednesday 09 April 2025 10:21:12 +0000 (0:00:01.069) 0:03:24.526 ******* 2025-04-09 10:25:46.605009 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.605018 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.605027 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.605036 | orchestrator | 2025-04-09 10:25:46.605045 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-04-09 10:25:46.605054 | orchestrator | Wednesday 09 April 2025 10:21:13 +0000 (0:00:01.448) 0:03:25.975 ******* 2025-04-09 10:25:46.605063 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.605072 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.605081 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.605090 | orchestrator | 2025-04-09 10:25:46.605102 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-04-09 10:25:46.605112 | orchestrator | Wednesday 09 April 2025 10:21:16 +0000 (0:00:02.768) 0:03:28.745 ******* 2025-04-09 10:25:46.605121 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.605130 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.605139 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.605148 | orchestrator | 2025-04-09 10:25:46.605158 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-04-09 10:25:46.605167 | orchestrator | Wednesday 09 April 2025 10:21:16 +0000 (0:00:00.394) 0:03:29.139 ******* 2025-04-09 10:25:46.605176 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.605185 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.605194 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.605204 | orchestrator | 2025-04-09 10:25:46.605213 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-04-09 10:25:46.605222 | orchestrator | Wednesday 
09 April 2025 10:21:17 +0000 (0:00:00.638) 0:03:29.777 ******* 2025-04-09 10:25:46.605231 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.605240 | orchestrator | 2025-04-09 10:25:46.605249 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-04-09 10:25:46.605259 | orchestrator | Wednesday 09 April 2025 10:21:18 +0000 (0:00:01.311) 0:03:31.088 ******* 2025-04-09 10:25:46.605273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-09 10:25:46.605283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-09 10:25:46.605292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-09 10:25:46.605305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-09 10:25:46.605315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-09 10:25:46.605331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-04-09 10:25:46.605382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-09 10:25:46.605392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-09 10:25:46.605400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-09 10:25:46.605409 | orchestrator | 2025-04-09 10:25:46.605417 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-04-09 10:25:46.605425 | orchestrator | Wednesday 09 April 2025 10:21:22 +0000 (0:00:04.061) 0:03:35.149 ******* 2025-04-09 10:25:46.605441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-04-09 10:25:46.605462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-09 10:25:46.605470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-09 10:25:46.605478 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.605487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-04-09 10:25:46.605496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-09 10:25:46.605507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-09 10:25:46.605516 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.605530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-04-09 10:25:46.605543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-09 10:25:46.605551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-09 10:25:46.605559 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.605567 | orchestrator | 2025-04-09 10:25:46.605575 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-04-09 10:25:46.605584 | orchestrator | Wednesday 09 April 2025 10:21:23 +0000 (0:00:00.868) 0:03:36.018 ******* 2025-04-09 10:25:46.605592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-04-09 10:25:46.605604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-04-09 10:25:46.605612 | orchestrator | skipping: [testbed-node-0] 
2025-04-09 10:25:46.605621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-04-09 10:25:46.605629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-04-09 10:25:46.605637 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.605645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-04-09 10:25:46.605657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-04-09 10:25:46.605669 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.605678 | orchestrator | 2025-04-09 10:25:46.605686 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-04-09 10:25:46.605694 | orchestrator | Wednesday 09 April 2025 10:21:24 +0000 (0:00:01.029) 0:03:37.048 ******* 2025-04-09 10:25:46.605702 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.605710 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.605718 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.605726 | orchestrator | 2025-04-09 10:25:46.605734 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] 
*********** 2025-04-09 10:25:46.605742 | orchestrator | Wednesday 09 April 2025 10:21:26 +0000 (0:00:01.435) 0:03:38.483 ******* 2025-04-09 10:25:46.605750 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.605758 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.605767 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.605775 | orchestrator | 2025-04-09 10:25:46.605783 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-04-09 10:25:46.605791 | orchestrator | Wednesday 09 April 2025 10:21:28 +0000 (0:00:02.338) 0:03:40.822 ******* 2025-04-09 10:25:46.605799 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.605807 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.605818 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.605826 | orchestrator | 2025-04-09 10:25:46.605835 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-04-09 10:25:46.605843 | orchestrator | Wednesday 09 April 2025 10:21:28 +0000 (0:00:00.503) 0:03:41.325 ******* 2025-04-09 10:25:46.605851 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.605923 | orchestrator | 2025-04-09 10:25:46.605930 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-04-09 10:25:46.605938 | orchestrator | Wednesday 09 April 2025 10:21:30 +0000 (0:00:01.089) 0:03:42.414 ******* 2025-04-09 10:25:46.605945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-09 10:25:46.605954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.605962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-04-09 10:25:46.605977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-04-09 10:25:46.605985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.605992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606000 | orchestrator |
2025-04-09 10:25:46.606007 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-04-09 10:25:46.606036 | orchestrator | Wednesday 09 April 2025 10:21:34 +0000 (0:00:04.226) 0:03:46.641 *******
2025-04-09 10:25:46.606045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-04-09 10:25:46.606057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606064 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.606076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-04-09 10:25:46.606084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606092 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.606099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-04-09 10:25:46.606107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606119 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.606127 | orchestrator |
2025-04-09 10:25:46.606134 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-04-09 10:25:46.606141 | orchestrator | Wednesday 09 April 2025 10:21:35 +0000 (0:00:00.975) 0:03:47.616 *******
2025-04-09 10:25:46.606162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-04-09 10:25:46.606170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-04-09 10:25:46.606177 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.606185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-04-09 10:25:46.606204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-04-09 10:25:46.606211 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.606219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-04-09 10:25:46.606227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-04-09 10:25:46.606234 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.606241 | orchestrator |
2025-04-09 10:25:46.606249 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-04-09 10:25:46.606256 | orchestrator | Wednesday 09 April 2025 10:21:36 +0000 (0:00:01.271) 0:03:48.888 *******
2025-04-09 10:25:46.606263 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:25:46.606270 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:25:46.606277 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:25:46.606284 | orchestrator |
2025-04-09 10:25:46.606292 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-04-09 10:25:46.606299 | orchestrator | Wednesday 09 April 2025 10:21:38 +0000 (0:00:01.510) 0:03:50.399 *******
2025-04-09 10:25:46.606306 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:25:46.606313 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:25:46.606320 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:25:46.606327 | orchestrator |
2025-04-09 10:25:46.606334 | orchestrator | TASK [include_role : manila] ***************************************************
2025-04-09 10:25:46.606352 | orchestrator | Wednesday 09 April 2025 10:21:40 +0000 (0:00:02.629) 0:03:53.029 *******
2025-04-09 10:25:46.606360 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-09 10:25:46.606367 | orchestrator |
2025-04-09 10:25:46.606374 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-04-09 10:25:46.606382 | orchestrator | Wednesday 09 April 2025 10:21:42 +0000 (0:00:01.534) 0:03:54.563 *******
2025-04-09 10:25:46.606389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-04-09 10:25:46.606405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-04-09 10:25:46.606437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-04-09 10:25:46.606484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606507 | orchestrator |
2025-04-09 10:25:46.606514 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2025-04-09 10:25:46.606525 | orchestrator | Wednesday 09 April 2025 10:21:48 +0000 (0:00:05.859) 0:04:00.423 *******
2025-04-09 10:25:46.606532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-04-09 10:25:46.606539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606565 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.606572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-04-09 10:25:46.606580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606606 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.606614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-04-09 10:25:46.606624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-04-09 10:25:46.606668 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.606675 | orchestrator |
2025-04-09 10:25:46.606682 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2025-04-09 10:25:46.606689 | orchestrator | Wednesday 09 April 2025 10:21:49 +0000 (0:00:01.301) 0:04:01.725 *******
2025-04-09 10:25:46.606697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-04-09 10:25:46.606704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-04-09 10:25:46.606711 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.606718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-04-09 10:25:46.606726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-04-09 10:25:46.606733 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.606740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-04-09 10:25:46.606747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-04-09 10:25:46.606754 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.606761 | orchestrator |
2025-04-09 10:25:46.606768 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-04-09 10:25:46.606775 | orchestrator | Wednesday 09 April 2025 10:21:50 +0000 (0:00:01.312) 0:04:03.037 *******
2025-04-09 10:25:46.606783 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:25:46.606790 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:25:46.606797 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:25:46.606804 | orchestrator |
2025-04-09 10:25:46.606811 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-04-09 10:25:46.606818 | orchestrator | Wednesday 09 April 2025 10:21:52 +0000 (0:00:01.454) 0:04:04.492 *******
2025-04-09 10:25:46.606825 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:25:46.606832 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:25:46.606839 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:25:46.606846 | orchestrator |
2025-04-09 10:25:46.606856 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-04-09 10:25:46.606864 | orchestrator | Wednesday 09 April 2025 10:21:54 +0000 (0:00:02.376) 0:04:06.869 *******
2025-04-09 10:25:46.606871 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-09 10:25:46.606878 | orchestrator |
2025-04-09 10:25:46.606885 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-04-09 10:25:46.606892 | orchestrator | Wednesday 09 April 2025 10:21:55 +0000 (0:00:01.397) 0:04:08.266 *******
2025-04-09 10:25:46.606899 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-04-09 10:25:46.606906 | orchestrator |
2025-04-09 10:25:46.606914 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2025-04-09 10:25:46.606924 | orchestrator | Wednesday 09 April 2025 10:21:59 +0000 (0:00:03.197) 0:04:11.464 *******
2025-04-09 10:25:46.606936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-04-09 10:25:46.606944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-04-09 10:25:46.606952 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.606963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-04-09 10:25:46.606975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-04-09 10:25:46.606983 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.606990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-04-09 10:25:46.606998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-04-09 10:25:46.607006 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.607013 | orchestrator |
2025-04-09 10:25:46.607020 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-04-09 10:25:46.607027 | orchestrator | Wednesday 09 April 2025 10:22:02 +0000 (0:00:03.237) 0:04:14.701 *******
2025-04-09 10:25:46.607039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD':
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-09 10:25:46.607051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-09 10:25:46.607058 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.607066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-09 10:25:46.607080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-09 10:25:46.607088 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.607095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-09 10:25:46.607103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-09 10:25:46.607111 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.607118 | orchestrator | 2025-04-09 10:25:46.607125 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-04-09 10:25:46.607132 | orchestrator | Wednesday 09 April 2025 10:22:05 +0000 (0:00:03.392) 0:04:18.093 ******* 2025-04-09 10:25:46.607140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-09 10:25:46.607151 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-09 10:25:46.607158 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.607170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-09 10:25:46.607177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-09 10:25:46.607185 | orchestrator | skipping: [testbed-node-1] 
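The `custom_member_list` entries in the mariadb HAProxy items above follow a fixed pattern: the first Galera node is the active backend and every other node is marked `backup`, so only one writer receives traffic at a time. A minimal sketch of how such member lines could be generated (kolla-ansible actually renders them from Jinja2 templates; the function name and structure here are purely illustrative):

```python
# Illustrative sketch only: reproduces the custom_member_list strings seen
# in the log above. kolla-ansible builds these via its haproxy-config role
# templates; this helper is a hypothetical stand-in, not the real code.

def mariadb_member_lines(nodes, port=3306, inter=2000, rise=2, fall=5):
    """Build HAProxy 'server' lines: first node active, the rest backup."""
    lines = []
    for i, (name, addr) in enumerate(nodes):
        line = (f" server {name} {addr}:{port} check port {port} "
                f"inter {inter} rise {rise} fall {fall}")
        if i > 0:
            line += " backup"  # Galera: route writes to a single node only
        lines.append(line)
    return lines

nodes = [("testbed-node-0", "192.168.16.10"),
         ("testbed-node-1", "192.168.16.11"),
         ("testbed-node-2", "192.168.16.12")]

for line in mariadb_member_lines(nodes):
    print(line)
```

The `backup` keyword makes HAProxy use those servers only when the active member fails its `check`, which matches the single-writer topology the log shows for `mariadb_shard_0`.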
2025-04-09 10:25:46.607192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-09 10:25:46.607200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-09 10:25:46.607207 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.607214 | orchestrator | 2025-04-09 10:25:46.607221 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-04-09 10:25:46.607228 | orchestrator | Wednesday 09 April 2025 10:22:09 +0000 (0:00:04.085) 0:04:22.179 ******* 2025-04-09 10:25:46.607236 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.607243 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.607250 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.607257 | orchestrator | 2025-04-09 10:25:46.607264 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-04-09 
10:25:46.607271 | orchestrator | Wednesday 09 April 2025 10:22:12 +0000 (0:00:02.806) 0:04:24.986 ******* 2025-04-09 10:25:46.607281 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.607289 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.607296 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.607303 | orchestrator | 2025-04-09 10:25:46.607310 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-04-09 10:25:46.607317 | orchestrator | Wednesday 09 April 2025 10:22:14 +0000 (0:00:01.850) 0:04:26.836 ******* 2025-04-09 10:25:46.607324 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.607331 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.607349 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.607357 | orchestrator | 2025-04-09 10:25:46.607364 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-04-09 10:25:46.607372 | orchestrator | Wednesday 09 April 2025 10:22:15 +0000 (0:00:00.609) 0:04:27.446 ******* 2025-04-09 10:25:46.607379 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.607386 | orchestrator | 2025-04-09 10:25:46.607393 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-04-09 10:25:46.607400 | orchestrator | Wednesday 09 April 2025 10:22:16 +0000 (0:00:01.600) 0:04:29.047 ******* 2025-04-09 10:25:46.607412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-04-09 10:25:46.607420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-04-09 10:25:46.607428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-04-09 10:25:46.607435 | orchestrator | 2025-04-09 10:25:46.607442 | orchestrator | TASK [haproxy-config : Add configuration for memcached 
when using single external frontend] *** 2025-04-09 10:25:46.607449 | orchestrator | Wednesday 09 April 2025 10:22:18 +0000 (0:00:01.920) 0:04:30.967 ******* 2025-04-09 10:25:46.607457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-04-09 10:25:46.607468 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.607475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-04-09 10:25:46.607483 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.607493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 
'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-04-09 10:25:46.607501 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.607508 | orchestrator | 2025-04-09 10:25:46.607515 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-04-09 10:25:46.607522 | orchestrator | Wednesday 09 April 2025 10:22:18 +0000 (0:00:00.401) 0:04:31.368 ******* 2025-04-09 10:25:46.607530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-04-09 10:25:46.607537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-04-09 10:25:46.607544 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.607551 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.607559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-04-09 10:25:46.607566 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.607573 | orchestrator | 2025-04-09 10:25:46.607580 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-04-09 10:25:46.607587 | orchestrator | Wednesday 09 April 2025 10:22:20 +0000 (0:00:01.101) 0:04:32.470 ******* 2025-04-09 10:25:46.607594 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.607601 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.607615 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.607622 | orchestrator | 2025-04-09 10:25:46.607630 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-04-09 10:25:46.607637 | orchestrator | Wednesday 09 April 2025 10:22:20 +0000 (0:00:00.446) 0:04:32.916 ******* 2025-04-09 10:25:46.607644 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.607651 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.607658 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.607665 | orchestrator | 2025-04-09 10:25:46.607672 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-04-09 10:25:46.607679 | orchestrator | Wednesday 09 April 2025 10:22:22 +0000 (0:00:01.690) 0:04:34.607 ******* 2025-04-09 10:25:46.607686 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.607693 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.607701 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.607708 | orchestrator | 2025-04-09 10:25:46.607715 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-04-09 10:25:46.607724 | orchestrator | Wednesday 09 April 2025 10:22:22 +0000 (0:00:00.585) 0:04:35.193 ******* 2025-04-09 10:25:46.607732 
| orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.607739 | orchestrator | 2025-04-09 10:25:46.607746 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-04-09 10:25:46.607753 | orchestrator | Wednesday 09 April 2025 10:22:24 +0000 (0:00:01.586) 0:04:36.779 ******* 2025-04-09 10:25:46.607760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-09 10:25:46.607772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.607780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.607787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.607800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-09 10:25:46.607808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.607823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-09 10:25:46.607831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-09 10:25:46.607843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.607850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-09 10:25:46.607862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.607869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:25:46.607876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-09 10:25:46.607884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.607900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-09 10:25:46.607908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-09 10:25:46.607919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.607927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-09 10:25:46.607934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.607945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-09 10:25:46.607958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-09 
10:25:46.607970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.607977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-09 10:25:46.607985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.607992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-09 10:25:46.608120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-09 10:25:46.608127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-09 10:25:46.608196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-09 10:25:46.608213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 
'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:25:46.608237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-09 10:25:46.608245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-09 10:25:46.608253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-09 10:25:46.608305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-09 10:25:46.608338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:25:46.608401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-09 10:25:46.608451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-09 10:25:46.608467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-09 10:25:46.608493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-09 10:25:46.608501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-09 10:25:46.608509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 
'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608569 | orchestrator | 2025-04-09 10:25:46.608594 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-04-09 10:25:46.608602 | orchestrator | Wednesday 09 April 2025 10:22:29 +0000 (0:00:05.266) 0:04:42.046 ******* 2025-04-09 10:25:46.608610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2025-04-09 10:25:46.608618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-09 10:25:46.608702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-09 10:25:46.608719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-09 10:25:46.608727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-09 10:25:46.608735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': 
False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-09 10:25:46.608833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-09 10:25:46.608859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:25:46.608906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-09 10:25:46.608926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-09 10:25:46.608933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.608941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-09 10:25:46.608953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-09 10:25:46.609010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.609021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-09 10:25:46.609028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-09 10:25:46.609036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.609043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.609059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:25:46.609067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-09 10:25:46.609074 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.609114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.609124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})  2025-04-09 10:25:46.609131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-09 10:25:46.609144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.609157 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.609165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-09 10:25:46.609214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.609225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.609233 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.609240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-09 10:25:46.609251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.609259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-09 10:25:46.609305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-09 10:25:46.609316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.609323 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-09 10:25:46.609334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.609359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:25:46.609366 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-09 10:25:46.609372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.609404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-09 10:25:46.609412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-09 10:25:46.609419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.609430 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.609437 | orchestrator | 2025-04-09 10:25:46.609443 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-04-09 10:25:46.609450 | orchestrator | Wednesday 09 April 2025 10:22:31 +0000 (0:00:01.944) 0:04:43.991 ******* 2025-04-09 10:25:46.609456 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-04-09 10:25:46.609464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-04-09 10:25:46.609470 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.609479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-04-09 10:25:46.609486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-04-09 10:25:46.609492 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.609498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-04-09 10:25:46.609505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-04-09 10:25:46.609511 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.609517 | orchestrator | 2025-04-09 10:25:46.609524 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-04-09 10:25:46.609530 | orchestrator | Wednesday 09 April 2025 10:22:33 +0000 (0:00:02.346) 0:04:46.337 ******* 2025-04-09 10:25:46.609536 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.609543 | orchestrator | changed: [testbed-node-1] 2025-04-09 
10:25:46.609549 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.609555 | orchestrator | 2025-04-09 10:25:46.609562 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-04-09 10:25:46.609574 | orchestrator | Wednesday 09 April 2025 10:22:35 +0000 (0:00:01.509) 0:04:47.847 ******* 2025-04-09 10:25:46.609582 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.609588 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.609609 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.609617 | orchestrator | 2025-04-09 10:25:46.609624 | orchestrator | TASK [include_role : placement] ************************************************ 2025-04-09 10:25:46.609631 | orchestrator | Wednesday 09 April 2025 10:22:37 +0000 (0:00:02.179) 0:04:50.026 ******* 2025-04-09 10:25:46.609638 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.609645 | orchestrator | 2025-04-09 10:25:46.609652 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-04-09 10:25:46.609659 | orchestrator | Wednesday 09 April 2025 10:22:39 +0000 (0:00:01.656) 0:04:51.683 ******* 2025-04-09 10:25:46.609671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.609683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.609691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.609698 | orchestrator | 2025-04-09 10:25:46.609704 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-04-09 10:25:46.609711 | orchestrator | Wednesday 09 April 2025 10:22:43 +0000 (0:00:04.239) 0:04:55.923 ******* 2025-04-09 10:25:46.609732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.609741 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.609752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.609765 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.609772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.609779 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.609786 | orchestrator | 2025-04-09 10:25:46.609793 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-04-09 10:25:46.609800 | orchestrator | Wednesday 09 April 2025 10:22:44 +0000 (0:00:01.033) 0:04:56.957 ******* 2025-04-09 10:25:46.609806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-09 10:25:46.609814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-09 10:25:46.609821 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.609827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-09 10:25:46.609834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-09 10:25:46.609841 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.609848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-09 10:25:46.609855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-09 10:25:46.609862 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.609869 | orchestrator | 2025-04-09 10:25:46.609875 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-04-09 10:25:46.609882 | orchestrator | Wednesday 09 April 2025 10:22:45 +0000 (0:00:00.953) 0:04:57.910 ******* 2025-04-09 10:25:46.609889 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.609896 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.609903 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.609909 | orchestrator | 2025-04-09 10:25:46.609916 | 
orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-04-09 10:25:46.609923 | orchestrator | Wednesday 09 April 2025 10:22:46 +0000 (0:00:01.397) 0:04:59.307 ******* 2025-04-09 10:25:46.609944 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.609953 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.609960 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.609971 | orchestrator | 2025-04-09 10:25:46.609979 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-04-09 10:25:46.609986 | orchestrator | Wednesday 09 April 2025 10:22:49 +0000 (0:00:02.391) 0:05:01.699 ******* 2025-04-09 10:25:46.609993 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.610001 | orchestrator | 2025-04-09 10:25:46.610008 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-04-09 10:25:46.610037 | orchestrator | Wednesday 09 April 2025 10:22:50 +0000 (0:00:01.414) 0:05:03.114 ******* 2025-04-09 10:25:46.610045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.610053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.610061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.610090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.610103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.610114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.610122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.610130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.610140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.610147 | orchestrator | 2025-04-09 10:25:46.610155 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-04-09 10:25:46.610166 | orchestrator | Wednesday 09 April 2025 10:22:56 +0000 (0:00:05.371) 0:05:08.485 ******* 2025-04-09 10:25:46.610189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.610198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.610206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.610213 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.610228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.610236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.610272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.610280 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.610288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.610301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.610308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.610315 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.610322 | orchestrator | 2025-04-09 10:25:46.610329 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-04-09 10:25:46.610374 | orchestrator | Wednesday 09 April 2025 10:22:57 +0000 (0:00:01.000) 0:05:09.486 ******* 2025-04-09 10:25:46.610386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-09 10:25:46.610397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-09 10:25:46.610403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-09 10:25:46.610410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-09 10:25:46.610416 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.610438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-09 10:25:46.610446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-09 10:25:46.610453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-09 10:25:46.610462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-09 10:25:46.610469 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.610475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-09 10:25:46.610481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-09 10:25:46.610488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-09 10:25:46.610494 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-09 10:25:46.610500 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.610507 | orchestrator | 2025-04-09 10:25:46.610513 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-04-09 10:25:46.610519 | orchestrator | Wednesday 09 April 2025 10:22:58 +0000 (0:00:01.394) 0:05:10.881 ******* 2025-04-09 10:25:46.610526 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.610532 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.610538 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.610544 | orchestrator | 2025-04-09 10:25:46.610551 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-04-09 10:25:46.610557 | orchestrator | Wednesday 09 April 2025 10:23:00 +0000 (0:00:01.569) 0:05:12.450 ******* 2025-04-09 10:25:46.610563 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.610569 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.610576 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.610582 | orchestrator | 2025-04-09 10:25:46.610589 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-04-09 10:25:46.610598 | orchestrator | Wednesday 09 April 2025 10:23:02 +0000 (0:00:02.519) 0:05:14.970 ******* 2025-04-09 10:25:46.610605 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.610611 | orchestrator | 2025-04-09 10:25:46.610617 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-04-09 10:25:46.610623 | orchestrator | Wednesday 09 April 2025 10:23:04 +0000 (0:00:01.782) 0:05:16.753 ******* 2025-04-09 
10:25:46.610630 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-04-09 10:25:46.610637 | orchestrator | 2025-04-09 10:25:46.610643 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-04-09 10:25:46.610649 | orchestrator | Wednesday 09 April 2025 10:23:05 +0000 (0:00:01.336) 0:05:18.089 ******* 2025-04-09 10:25:46.610656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-09 10:25:46.610663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-09 10:25:46.610683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-09 10:25:46.610691 | orchestrator | 2025-04-09 10:25:46.610697 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-04-09 10:25:46.610704 | orchestrator | Wednesday 09 April 2025 10:23:10 +0000 (0:00:05.185) 0:05:23.275 ******* 2025-04-09 10:25:46.610710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-09 10:25:46.610717 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.610723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-09 10:25:46.610730 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.610740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-09 10:25:46.610747 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.610753 | orchestrator | 2025-04-09 10:25:46.610759 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-04-09 10:25:46.610766 | orchestrator | Wednesday 09 April 2025 10:23:12 +0000 (0:00:01.931) 0:05:25.207 ******* 2025-04-09 10:25:46.610772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-09 10:25:46.610779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-09 10:25:46.610785 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.610791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-09 10:25:46.610800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-09 10:25:46.610807 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.610813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-09 10:25:46.610819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-09 10:25:46.610826 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.610832 | orchestrator | 2025-04-09 10:25:46.610851 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-09 10:25:46.610858 | orchestrator | Wednesday 09 April 2025 10:23:14 +0000 (0:00:01.982) 0:05:27.189 ******* 2025-04-09 10:25:46.610865 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.610871 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.610900 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.610907 | orchestrator | 2025-04-09 10:25:46.610913 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-09 10:25:46.610919 | orchestrator | Wednesday 09 April 2025 10:23:17 +0000 (0:00:03.137) 0:05:30.326 ******* 2025-04-09 10:25:46.610925 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.610931 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.610937 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.610942 | orchestrator | 2025-04-09 10:25:46.610948 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-04-09 10:25:46.610954 | orchestrator | Wednesday 09 April 2025 10:23:21 +0000 (0:00:03.580) 0:05:33.907 ******* 2025-04-09 10:25:46.610960 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-04-09 10:25:46.610967 | orchestrator | 2025-04-09 10:25:46.610973 | 
orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-04-09 10:25:46.610983 | orchestrator | Wednesday 09 April 2025 10:23:22 +0000 (0:00:01.371) 0:05:35.279 ******* 2025-04-09 10:25:46.610989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-09 10:25:46.610995 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.611001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-09 10:25:46.611008 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.611019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': 
'6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-09 10:25:46.611025 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.611031 | orchestrator | 2025-04-09 10:25:46.611037 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-04-09 10:25:46.611043 | orchestrator | Wednesday 09 April 2025 10:23:24 +0000 (0:00:01.851) 0:05:37.130 ******* 2025-04-09 10:25:46.611049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-09 10:25:46.611055 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.611062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-09 10:25:46.611068 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.611089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-09 10:25:46.611100 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.611106 | orchestrator | 2025-04-09 10:25:46.611112 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-04-09 10:25:46.611121 | orchestrator | Wednesday 09 April 2025 10:23:26 +0000 (0:00:01.986) 0:05:39.117 ******* 2025-04-09 10:25:46.611127 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.611133 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.611139 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.611145 | orchestrator | 2025-04-09 10:25:46.611150 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-09 10:25:46.611156 | orchestrator | Wednesday 09 April 2025 10:23:28 +0000 (0:00:01.998) 0:05:41.115 ******* 2025-04-09 10:25:46.611162 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:25:46.611168 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:25:46.611174 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:25:46.611180 | orchestrator | 2025-04-09 10:25:46.611186 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-09 10:25:46.611192 | orchestrator | Wednesday 09 April 2025 10:23:31 +0000 (0:00:02.932) 0:05:44.048 ******* 2025-04-09 10:25:46.611198 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:25:46.611204 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:25:46.611209 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:25:46.611215 | orchestrator | 2025-04-09 10:25:46.611221 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 
2025-04-09 10:25:46.611227 | orchestrator | Wednesday 09 April 2025 10:23:35 +0000 (0:00:03.805) 0:05:47.854 ******* 2025-04-09 10:25:46.611233 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-04-09 10:25:46.611240 | orchestrator | 2025-04-09 10:25:46.611248 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-04-09 10:25:46.611254 | orchestrator | Wednesday 09 April 2025 10:23:36 +0000 (0:00:01.309) 0:05:49.163 ******* 2025-04-09 10:25:46.611260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-09 10:25:46.611266 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.611272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-09 10:25:46.611278 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.611291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 
'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-09 10:25:46.611297 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.611303 | orchestrator | 2025-04-09 10:25:46.611312 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-04-09 10:25:46.611318 | orchestrator | Wednesday 09 April 2025 10:23:38 +0000 (0:00:01.988) 0:05:51.151 ******* 2025-04-09 10:25:46.611349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-09 10:25:46.611357 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.611363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-09 10:25:46.611370 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.611376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-09 10:25:46.611382 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.611388 | orchestrator | 2025-04-09 10:25:46.611394 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-04-09 10:25:46.611400 | orchestrator | Wednesday 09 April 2025 10:23:40 +0000 (0:00:02.065) 0:05:53.217 ******* 2025-04-09 10:25:46.611406 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.611412 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.611418 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.611423 | orchestrator | 2025-04-09 10:25:46.611429 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-09 10:25:46.611435 | orchestrator | Wednesday 09 April 2025 10:23:43 +0000 (0:00:02.333) 0:05:55.550 ******* 2025-04-09 10:25:46.611441 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:25:46.611447 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:25:46.611453 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:25:46.611459 | orchestrator | 2025-04-09 10:25:46.611465 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-09 10:25:46.611471 | orchestrator | Wednesday 09 April 2025 10:23:46 +0000 
(0:00:03.234) 0:05:58.785 ******* 2025-04-09 10:25:46.611477 | orchestrator | ok: [testbed-node-0] 2025-04-09 10:25:46.611483 | orchestrator | ok: [testbed-node-1] 2025-04-09 10:25:46.611489 | orchestrator | ok: [testbed-node-2] 2025-04-09 10:25:46.611495 | orchestrator | 2025-04-09 10:25:46.611501 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-04-09 10:25:46.611507 | orchestrator | Wednesday 09 April 2025 10:23:50 +0000 (0:00:04.022) 0:06:02.808 ******* 2025-04-09 10:25:46.611513 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.611519 | orchestrator | 2025-04-09 10:25:46.611525 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-04-09 10:25:46.611531 | orchestrator | Wednesday 09 April 2025 10:23:52 +0000 (0:00:01.761) 0:06:04.570 ******* 2025-04-09 10:25:46.611538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.611566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.611574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-09 10:25:46.611581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-09 10:25:46.611587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-09 10:25:46.611593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-09 10:25:46.611603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-09 10:25:46.611614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-09 10:25:46.611633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.611640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.611646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.611653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-09 10:25:46.611668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-09 10:25:46.611675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-09 10:25:46.611681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.611687 | orchestrator | 2025-04-09 10:25:46.611706 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-04-09 10:25:46.611713 | orchestrator | Wednesday 09 April 2025 10:23:56 +0000 (0:00:04.426) 0:06:08.996 ******* 2025-04-09 10:25:46.611720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.611726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-09 10:25:46.611732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-09 10:25:46.611742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-09 10:25:46.611753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.611760 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.611779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.611786 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-09 10:25:46.611792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-09 10:25:46.611799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-09 10:25:46.611809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.611820 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.611826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.611832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-09 10:25:46.611852 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-09 10:25:46.611859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-09 10:25:46.611866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-09 10:25:46.611876 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.611882 | orchestrator | 
2025-04-09 10:25:46.611888 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-04-09 10:25:46.611894 | orchestrator | Wednesday 09 April 2025 10:23:57 +0000 (0:00:01.097) 0:06:10.094 ******* 2025-04-09 10:25:46.611900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-09 10:25:46.611906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-09 10:25:46.611913 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.611919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-09 10:25:46.611925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-09 10:25:46.611931 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.611937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-09 10:25:46.611943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-09 10:25:46.611949 | orchestrator | skipping: [testbed-node-2] 2025-04-09 
10:25:46.611955 | orchestrator | 2025-04-09 10:25:46.611961 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-04-09 10:25:46.611967 | orchestrator | Wednesday 09 April 2025 10:23:59 +0000 (0:00:01.503) 0:06:11.598 ******* 2025-04-09 10:25:46.611973 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.611978 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.611984 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.611990 | orchestrator | 2025-04-09 10:25:46.611996 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-04-09 10:25:46.612002 | orchestrator | Wednesday 09 April 2025 10:24:00 +0000 (0:00:01.316) 0:06:12.915 ******* 2025-04-09 10:25:46.612008 | orchestrator | changed: [testbed-node-0] 2025-04-09 10:25:46.612014 | orchestrator | changed: [testbed-node-1] 2025-04-09 10:25:46.612020 | orchestrator | changed: [testbed-node-2] 2025-04-09 10:25:46.612026 | orchestrator | 2025-04-09 10:25:46.612032 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-04-09 10:25:46.612038 | orchestrator | Wednesday 09 April 2025 10:24:03 +0000 (0:00:02.737) 0:06:15.652 ******* 2025-04-09 10:25:46.612056 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.612063 | orchestrator | 2025-04-09 10:25:46.612069 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-04-09 10:25:46.612075 | orchestrator | Wednesday 09 April 2025 10:24:05 +0000 (0:00:01.848) 0:06:17.500 ******* 2025-04-09 10:25:46.612086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-09 10:25:46.612096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-09 10:25:46.612103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-09 10:25:46.612109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-09 10:25:46.612135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-09 10:25:46.612146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-09 10:25:46.612152 | orchestrator | 2025-04-09 10:25:46.612158 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-04-09 10:25:46.612165 | orchestrator | Wednesday 09 April 2025 10:24:12 +0000 (0:00:07.434) 0:06:24.934 ******* 2025-04-09 10:25:46.612171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-09 10:25:46.612192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-09 10:25:46.612200 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.612221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-09 10:25:46.612232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-09 10:25:46.612238 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.612244 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-09 10:25:46.612255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-09 10:25:46.612261 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.612267 | orchestrator | 
2025-04-09 10:25:46.612274 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-04-09 10:25:46.612280 | orchestrator | Wednesday 09 April 2025 10:24:13 +0000 (0:00:00.918) 0:06:25.853 ******* 2025-04-09 10:25:46.612286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-09 10:25:46.612304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-09 10:25:46.612315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-09 10:25:46.612321 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.612327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-09 10:25:46.612333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-09 10:25:46.612352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-09 10:25:46.612358 | orchestrator | skipping: [testbed-node-1] 2025-04-09 
10:25:46.612365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-09 10:25:46.612371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-09 10:25:46.612377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-09 10:25:46.612383 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.612390 | orchestrator | 2025-04-09 10:25:46.612396 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-04-09 10:25:46.612402 | orchestrator | Wednesday 09 April 2025 10:24:14 +0000 (0:00:01.494) 0:06:27.348 ******* 2025-04-09 10:25:46.612408 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.612415 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.612421 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.612427 | orchestrator | 2025-04-09 10:25:46.612436 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-04-09 10:25:46.612442 | orchestrator | Wednesday 09 April 2025 10:24:15 +0000 (0:00:01.020) 0:06:28.368 ******* 2025-04-09 10:25:46.612448 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.612454 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.612459 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.612465 | orchestrator | 2025-04-09 10:25:46.612471 | orchestrator | TASK [include_role : prometheus] 
*********************************************** 2025-04-09 10:25:46.612477 | orchestrator | Wednesday 09 April 2025 10:24:17 +0000 (0:00:01.501) 0:06:29.870 ******* 2025-04-09 10:25:46.612483 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.612489 | orchestrator | 2025-04-09 10:25:46.612495 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-04-09 10:25:46.612501 | orchestrator | Wednesday 09 April 2025 10:24:19 +0000 (0:00:02.017) 0:06:31.887 ******* 2025-04-09 10:25:46.612507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-09 10:25:46.612530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-09 10:25:46.612537 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-09 10:25:46.612544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:25:46.612550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-09 10:25:46.612556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-09 10:25:46.612563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-09 10:25:46.612584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-09 10:25:46.612610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-09 10:25:46.612617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-09 10:25:46.612626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-09 10:25:46.612666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-09 10:25:46.612674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-09 10:25:46.612680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-04-09 10:25:46.612687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-04-09 10:25:46.612696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-09 10:25:46.612735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-09 10:25:46.612741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-09 10:25:46.612773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-04-09 10:25:46.612780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-09 10:25:46.612803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612813 | orchestrator |
2025-04-09 10:25:46.612819 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-04-09 10:25:46.612825 | orchestrator | Wednesday 09 April 2025 10:24:24 +0000 (0:00:05.157) 0:06:37.045 *******
2025-04-09 10:25:46.612831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-09 10:25:46.612837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-09 10:25:46.612849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-09 10:25:46.612868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-09 10:25:46.612878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-04-09 10:25:46.612884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-09 10:25:46.612910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612916 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.612923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-09 10:25:46.612932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-09 10:25:46.612938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.612957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-09 10:25:46.612964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-04-09 10:25:46.612970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-09 10:25:46.612991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-04-09 10:25:46.612997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-04-09 10:25:46.613004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.613013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.613023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.613030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.613036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-04-09 10:25:46.613045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-09 10:25:46.613052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-04-09 10:25:46.613062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.613068 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.613077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-04-09 10:25:46.613083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.613090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.613099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-04-09 10:25:46.613106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/prometheus-msteams:2024.1', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-09 10:25:46.613112 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.613118 | orchestrator |
2025-04-09 10:25:46.613124 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-04-09 10:25:46.613130 | orchestrator | Wednesday 09 April 2025 10:24:26 +0000 (0:00:01.540) 0:06:38.586 *******
2025-04-09 10:25:46.613136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-04-09 10:25:46.613142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-04-09 10:25:46.613148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-04-09 10:25:46.613155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-04-09 10:25:46.613161 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.613167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-04-09 10:25:46.613175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-04-09 10:25:46.613182 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-09 10:25:46.613188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-09 10:25:46.613194 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.613205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-09 10:25:46.613211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-04-09 10:25:46.613218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-09 10:25:46.613224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-09 10:25:46.613230 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.613236 | orchestrator | 2025-04-09 
10:25:46.613242 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-04-09 10:25:46.613248 | orchestrator | Wednesday 09 April 2025 10:24:27 +0000 (0:00:01.740) 0:06:40.327 ******* 2025-04-09 10:25:46.613254 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.613260 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.613266 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.613272 | orchestrator | 2025-04-09 10:25:46.613278 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-04-09 10:25:46.613284 | orchestrator | Wednesday 09 April 2025 10:24:28 +0000 (0:00:00.798) 0:06:41.125 ******* 2025-04-09 10:25:46.613290 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.613296 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.613302 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.613307 | orchestrator | 2025-04-09 10:25:46.613313 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-04-09 10:25:46.613319 | orchestrator | Wednesday 09 April 2025 10:24:30 +0000 (0:00:02.110) 0:06:43.235 ******* 2025-04-09 10:25:46.613325 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.613331 | orchestrator | 2025-04-09 10:25:46.613337 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-04-09 10:25:46.613377 | orchestrator | Wednesday 09 April 2025 10:24:32 +0000 (0:00:01.583) 0:06:44.819 ******* 2025-04-09 10:25:46.613384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-09 10:25:46.613402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-09 10:25:46.613414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-09 10:25:46.613420 | orchestrator | 2025-04-09 10:25:46.613426 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-04-09 10:25:46.613432 | orchestrator | Wednesday 09 April 2025 10:24:35 +0000 (0:00:03.007) 0:06:47.827 ******* 2025-04-09 10:25:46.613438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-09 10:25:46.613445 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-09 10:25:46.613456 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.613462 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.613474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-09 10:25:46.613480 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.613485 | orchestrator | 2025-04-09 10:25:46.613491 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-04-09 10:25:46.613496 | orchestrator | Wednesday 09 April 2025 10:24:36 +0000 (0:00:00.738) 0:06:48.566 ******* 2025-04-09 10:25:46.613501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-09 10:25:46.613507 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.613513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-09 10:25:46.613518 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.613524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-09 10:25:46.613530 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.613535 | orchestrator | 2025-04-09 10:25:46.613540 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-04-09 10:25:46.613546 | orchestrator | Wednesday 09 April 2025 10:24:37 +0000 (0:00:00.913) 0:06:49.479 ******* 2025-04-09 10:25:46.613551 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.613556 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.613562 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.613567 | orchestrator | 2025-04-09 10:25:46.613573 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-04-09 
10:25:46.613578 | orchestrator | Wednesday 09 April 2025 10:24:37 +0000 (0:00:00.796) 0:06:50.276 ******* 2025-04-09 10:25:46.613583 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.613589 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.613594 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.613599 | orchestrator | 2025-04-09 10:25:46.613605 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-04-09 10:25:46.613610 | orchestrator | Wednesday 09 April 2025 10:24:39 +0000 (0:00:01.908) 0:06:52.184 ******* 2025-04-09 10:25:46.613615 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-09 10:25:46.613621 | orchestrator | 2025-04-09 10:25:46.613626 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-04-09 10:25:46.613632 | orchestrator | Wednesday 09 April 2025 10:24:41 +0000 (0:00:01.919) 0:06:54.104 ******* 2025-04-09 10:25:46.613637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.613649 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.613655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.613660 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.613680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.613689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 
'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-09 10:25:46.613694 | orchestrator | 2025-04-09 10:25:46.613700 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-04-09 10:25:46.613708 | orchestrator | Wednesday 09 April 2025 10:24:49 +0000 (0:00:07.994) 0:07:02.099 ******* 2025-04-09 10:25:46.613713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': 
'9998', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.613719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.613725 | orchestrator | skipping: [testbed-node-0] 2025-04-09 10:25:46.613734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no'}}}})  2025-04-09 10:25:46.613743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.613748 | orchestrator | skipping: [testbed-node-1] 2025-04-09 10:25:46.613756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no'}}}})  2025-04-09 10:25:46.613762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-09 10:25:46.613767 | orchestrator | skipping: [testbed-node-2] 2025-04-09 10:25:46.613773 | orchestrator | 2025-04-09 10:25:46.613778 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-04-09 10:25:46.613784 | orchestrator | Wednesday 09 April 2025 10:24:50 +0000 (0:00:01.042) 0:07:03.141 ******* 2025-04-09 10:25:46.613789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-09 10:25:46.613795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-09 10:25:46.613801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-04-09 10:25:46.613809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-04-09 10:25:46.613815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-04-09 10:25:46.613820 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.613826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-04-09 10:25:46.613831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-04-09 10:25:46.613836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-04-09 10:25:46.613842 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.613847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-04-09 10:25:46.613855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-04-09 10:25:46.613863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-04-09 10:25:46.613871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-04-09 10:25:46.613876 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.613882 | orchestrator |
2025-04-09 10:25:46.613887 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-04-09 10:25:46.613892 | orchestrator | Wednesday 09 April 2025 10:24:52 +0000 (0:00:01.611) 0:07:04.753 *******
2025-04-09 10:25:46.613898 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:25:46.613903 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:25:46.613909 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:25:46.613914 | orchestrator |
2025-04-09 10:25:46.613919 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-04-09 10:25:46.613925 | orchestrator | Wednesday 09 April 2025 10:24:53 +0000 (0:00:01.229) 0:07:05.983 *******
2025-04-09 10:25:46.613930 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:25:46.613935 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:25:46.613941 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:25:46.613946 | orchestrator |
2025-04-09 10:25:46.613952 | orchestrator | TASK [include_role : swift] ****************************************************
2025-04-09 10:25:46.613957 | orchestrator | Wednesday 09 April 2025 10:24:56 +0000 (0:00:02.722) 0:07:08.706 *******
2025-04-09 10:25:46.613962 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.613968 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.613973 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.613978 | orchestrator |
2025-04-09 10:25:46.613984 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-04-09 10:25:46.613992 | orchestrator | Wednesday 09 April 2025 10:24:56 +0000 (0:00:00.643) 0:07:09.349 *******
2025-04-09 10:25:46.613998 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.614003 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.614008 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.614032 | orchestrator |
2025-04-09 10:25:46.614039 | orchestrator | TASK [include_role : trove] ****************************************************
2025-04-09 10:25:46.614044 | orchestrator | Wednesday 09 April 2025 10:24:57 +0000 (0:00:00.587) 0:07:09.937 *******
2025-04-09 10:25:46.614049 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.614055 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.614063 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.614068 | orchestrator |
2025-04-09 10:25:46.614074 | orchestrator | TASK [include_role : venus] ****************************************************
2025-04-09 10:25:46.614079 | orchestrator | Wednesday 09 April 2025 10:24:57 +0000 (0:00:00.306) 0:07:10.244 *******
2025-04-09 10:25:46.614084 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.614090 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.614095 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.614101 | orchestrator |
2025-04-09 10:25:46.614106 | orchestrator | TASK [include_role : watcher] **************************************************
2025-04-09 10:25:46.614111 | orchestrator | Wednesday 09 April 2025 10:24:58 +0000 (0:00:00.624) 0:07:10.868 *******
2025-04-09 10:25:46.614117 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.614122 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.614183 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.614189 | orchestrator |
2025-04-09 10:25:46.614195 | orchestrator | TASK [include_role : zun] ******************************************************
2025-04-09 10:25:46.614200 | orchestrator | Wednesday 09 April 2025 10:24:59 +0000 (0:00:00.594) 0:07:11.463 *******
2025-04-09 10:25:46.614205 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.614211 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.614216 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.614222 | orchestrator |
2025-04-09 10:25:46.614227 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-04-09 10:25:46.614232 | orchestrator | Wednesday 09 April 2025 10:24:59 +0000 (0:00:00.782) 0:07:12.245 *******
2025-04-09 10:25:46.614238 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:25:46.614243 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:25:46.614248 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:25:46.614254 | orchestrator |
2025-04-09 10:25:46.614259 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-04-09 10:25:46.614265 | orchestrator | Wednesday 09 April 2025 10:25:01 +0000 (0:00:01.136) 0:07:13.382 *******
2025-04-09 10:25:46.614270 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:25:46.614276 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:25:46.614281 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:25:46.614286 | orchestrator |
2025-04-09 10:25:46.614292 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-04-09 10:25:46.614297 | orchestrator | Wednesday 09 April 2025 10:25:01 +0000 (0:00:00.640) 0:07:14.022 *******
2025-04-09 10:25:46.614302 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:25:46.614308 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:25:46.614313 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:25:46.614318 | orchestrator |
2025-04-09 10:25:46.614324 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-04-09 10:25:46.614329 | orchestrator | Wednesday 09 April 2025 10:25:02 +0000 (0:00:01.039) 0:07:15.061 *******
2025-04-09 10:25:46.614335 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:25:46.614351 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:25:46.614357 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:25:46.614362 | orchestrator |
2025-04-09 10:25:46.614368 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-04-09 10:25:46.614373 | orchestrator | Wednesday 09 April 2025 10:25:04 +0000 (0:00:01.362) 0:07:16.423 *******
2025-04-09 10:25:46.614382 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:25:46.614387 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:25:46.614393 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:25:46.614398 | orchestrator |
2025-04-09 10:25:46.614406 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-04-09 10:25:46.614414 | orchestrator | Wednesday 09 April 2025 10:25:05 +0000 (0:00:01.302) 0:07:17.726 *******
2025-04-09 10:25:46.614419 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:25:46.614425 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:25:46.614430 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:25:46.614435 | orchestrator |
2025-04-09 10:25:46.614441 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-04-09 10:25:46.614446 | orchestrator | Wednesday 09 April 2025 10:25:14 +0000 (0:00:08.887) 0:07:26.614 *******
2025-04-09 10:25:46.614451 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:25:46.614457 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:25:46.614462 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:25:46.614467 | orchestrator |
2025-04-09 10:25:46.614473 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-04-09 10:25:46.614478 | orchestrator | Wednesday 09 April 2025 10:25:15 +0000 (0:00:01.111) 0:07:27.726 *******
2025-04-09 10:25:46.614484 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:25:46.614489 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:25:46.614494 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:25:46.614500 | orchestrator |
2025-04-09 10:25:46.614505 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-04-09 10:25:46.614510 | orchestrator | Wednesday 09 April 2025 10:25:24 +0000 (0:00:08.657) 0:07:36.383 *******
2025-04-09 10:25:46.614515 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:25:46.614521 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:25:46.614526 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:25:46.614531 | orchestrator |
2025-04-09 10:25:46.614537 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-04-09 10:25:46.614542 | orchestrator | Wednesday 09 April 2025 10:25:28 +0000 (0:00:04.966) 0:07:41.349 *******
2025-04-09 10:25:46.614548 | orchestrator | changed: [testbed-node-0]
2025-04-09 10:25:46.614553 | orchestrator | changed: [testbed-node-2]
2025-04-09 10:25:46.614558 | orchestrator | changed: [testbed-node-1]
2025-04-09 10:25:46.614564 | orchestrator |
2025-04-09 10:25:46.614569 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-04-09 10:25:46.614574 | orchestrator | Wednesday 09 April 2025 10:25:33 +0000 (0:00:04.816) 0:07:46.166 *******
2025-04-09 10:25:46.614580 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.614585 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.614590 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.614596 | orchestrator |
2025-04-09 10:25:46.614601 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-04-09 10:25:46.614607 | orchestrator | Wednesday 09 April 2025 10:25:34 +0000 (0:00:00.618) 0:07:46.784 *******
2025-04-09 10:25:46.614612 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.614617 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.614622 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.614628 | orchestrator |
2025-04-09 10:25:46.614633 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-04-09 10:25:46.614639 | orchestrator | Wednesday 09 April 2025 10:25:35 +0000 (0:00:00.622) 0:07:47.407 *******
2025-04-09 10:25:46.614644 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.614649 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.614655 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.614660 | orchestrator |
2025-04-09 10:25:46.614665 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-04-09 10:25:46.614671 | orchestrator | Wednesday 09 April 2025 10:25:35 +0000 (0:00:00.342) 0:07:47.750 *******
2025-04-09 10:25:46.614676 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.614684 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.614692 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.614697 | orchestrator |
2025-04-09 10:25:46.614703 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-04-09 10:25:46.614708 | orchestrator | Wednesday 09 April 2025 10:25:36 +0000 (0:00:00.746) 0:07:48.496 *******
2025-04-09 10:25:46.614713 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.614719 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.614724 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.614729 | orchestrator |
2025-04-09 10:25:46.614735 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-04-09 10:25:46.614740 | orchestrator | Wednesday 09 April 2025 10:25:36 +0000 (0:00:00.625) 0:07:49.121 *******
2025-04-09 10:25:46.614746 | orchestrator | skipping: [testbed-node-0]
2025-04-09 10:25:46.614751 | orchestrator | skipping: [testbed-node-1]
2025-04-09 10:25:46.614756 | orchestrator | skipping: [testbed-node-2]
2025-04-09 10:25:46.614762 | orchestrator |
2025-04-09 10:25:46.614767 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-04-09 10:25:46.614772 | orchestrator | Wednesday 09 April 2025 10:25:37 +0000 (0:00:00.638) 0:07:49.760 *******
2025-04-09 10:25:46.614778 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:25:46.614783 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:25:46.614788 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:25:46.614794 | orchestrator |
2025-04-09 10:25:46.614799 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-04-09 10:25:46.614804 | orchestrator | Wednesday 09 April 2025 10:25:42 +0000 (0:00:04.852) 0:07:54.612 *******
2025-04-09 10:25:46.614810 | orchestrator | ok: [testbed-node-0]
2025-04-09 10:25:46.614815 | orchestrator | ok: [testbed-node-1]
2025-04-09 10:25:46.614820 | orchestrator | ok: [testbed-node-2]
2025-04-09 10:25:46.614826 | orchestrator |
2025-04-09 10:25:46.614831 | orchestrator | PLAY RECAP *********************************************************************
2025-04-09 10:25:46.614836 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=91  rescued=0 ignored=0
2025-04-09 10:25:46.614842 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=91  rescued=0 ignored=0
2025-04-09 10:25:46.614848 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=91  rescued=0 ignored=0
2025-04-09 10:25:46.614853 | orchestrator |
2025-04-09 10:25:46.614858 | orchestrator |
2025-04-09 10:25:46.614864 | orchestrator | TASKS RECAP ********************************************************************
2025-04-09 10:25:46.614871 | orchestrator | Wednesday 09 April 2025 10:25:43 +0000 (0:00:01.123) 0:07:55.736 *******
2025-04-09 10:25:49.653140 | orchestrator | ===============================================================================
2025-04-09 10:25:49.653252 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.89s
2025-04-09 10:25:49.653271 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.66s
2025-04-09 10:25:49.653286 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.99s
2025-04-09 10:25:49.653319 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.43s
2025-04-09 10:25:49.653334 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 7.43s
2025-04-09 10:25:49.653395 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.92s
2025-04-09 10:25:49.653410 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 6.37s
2025-04-09 10:25:49.653425 | orchestrator | loadbalancer : Copying files for haproxy-ssh ---------------------------- 6.26s
2025-04-09 10:25:49.653439 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 5.86s
2025-04-09 10:25:49.653453 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 5.76s
2025-04-09 10:25:49.653489 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.70s
2025-04-09 10:25:49.653503 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.40s
2025-04-09 10:25:49.653518 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.37s
2025-04-09 10:25:49.653532 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.27s
2025-04-09 10:25:49.653546 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.19s
2025-04-09 10:25:49.653560 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.16s
2025-04-09 10:25:49.653574 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.97s
2025-04-09 10:25:49.653588 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.89s
2025-04-09 10:25:49.653604 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.85s
2025-04-09 10:25:49.653618 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.82s
2025-04-09 10:25:49.653633 | orchestrator | 2025-04-09 10:25:46 | INFO  | Task a670aeed-a0e3-48f7-97ca-5b2dea4f48e2 is in state STARTED
2025-04-09 10:25:49.653648 | orchestrator | 2025-04-09 10:25:46 | INFO  | Task 8175e80f-c715-48a6-a1e5-1304559415b5 is in state STARTED
2025-04-09 10:25:49.653662 | orchestrator | 2025-04-09 10:25:46 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:25:49.653696 | orchestrator | 2025-04-09 10:25:49 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:25:49.654928 | orchestrator | 2025-04-09 10:25:49 | INFO  | Task a670aeed-a0e3-48f7-97ca-5b2dea4f48e2 is in state STARTED
2025-04-09 10:25:49.656875 | orchestrator | 2025-04-09 10:25:49 | INFO  | Task 8175e80f-c715-48a6-a1e5-1304559415b5 is in state STARTED
2025-04-09 10:25:49.657076 | orchestrator | 2025-04-09 10:25:49 | INFO  | Wait 1 second(s) until the next check
2025-04-09 10:25:52.700539 | orchestrator | 2025-04-09
10:25:52 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED
2025-04-09 10:28:00.957065 | orchestrator | 2025-04-09 10:28:00 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state
STARTED 2025-04-09 10:28:00.959482 | orchestrator | 2025-04-09 10:28:00 | INFO  | Task a670aeed-a0e3-48f7-97ca-5b2dea4f48e2 is in state STARTED 2025-04-09 10:28:00.962003 | orchestrator | 2025-04-09 10:28:00 | INFO  | Task 8175e80f-c715-48a6-a1e5-1304559415b5 is in state STARTED 2025-04-09 10:28:00.962671 | orchestrator | 2025-04-09 10:28:00 | INFO  | Wait 1 second(s) until the next check 2025-04-09 10:28:04.028762 | orchestrator | 2025-04-09 10:28:04 | INFO  | Task af1d28cb-f977-4973-a564-f31f8450abd9 is in state STARTED 2025-04-09 10:28:04.030510 | orchestrator | 2025-04-09 10:28:04 | INFO  | Task a670aeed-a0e3-48f7-97ca-5b2dea4f48e2 is in state STARTED 2025-04-09 10:28:04.034131 | orchestrator | 2025-04-09 10:28:04 | INFO  | Task 8175e80f-c715-48a6-a1e5-1304559415b5 is in state STARTED 2025-04-09 10:28:05.015424 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-04-09 10:28:05.021946 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-04-09 10:28:05.714227 | 2025-04-09 10:28:05.714393 | PLAY [Post output play] 2025-04-09 10:28:05.744193 | 2025-04-09 10:28:05.744317 | LOOP [stage-output : Register sources] 2025-04-09 10:28:05.824998 | 2025-04-09 10:28:05.825235 | TASK [stage-output : Check sudo] 2025-04-09 10:28:06.541750 | orchestrator | sudo: a password is required 2025-04-09 10:28:06.867083 | orchestrator | ok: Runtime: 0:00:00.015518 2025-04-09 10:28:06.875795 | 2025-04-09 10:28:06.875906 | LOOP [stage-output : Set source and destination for files and folders] 2025-04-09 10:28:06.920655 | 2025-04-09 10:28:06.920872 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-04-09 10:28:07.018834 | orchestrator | ok 2025-04-09 10:28:07.028659 | 2025-04-09 10:28:07.028775 | LOOP [stage-output : Ensure target folders exist] 2025-04-09 10:28:07.484146 | orchestrator | ok: "docs" 2025-04-09 10:28:07.484556 | 2025-04-09 10:28:07.720360 | orchestrator | ok: "artifacts" 
2025-04-09 10:28:07.939763 | orchestrator | ok: "logs"
2025-04-09 10:28:07.965551 |
2025-04-09 10:28:07.965719 | LOOP [stage-output : Copy files and folders to staging folder]
2025-04-09 10:28:08.010641 |
2025-04-09 10:28:08.011191 | TASK [stage-output : Make all log files readable]
2025-04-09 10:28:08.280126 | orchestrator | ok
2025-04-09 10:28:08.288049 |
2025-04-09 10:28:08.288164 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-04-09 10:28:08.344448 | orchestrator | skipping: Conditional result was False
2025-04-09 10:28:08.359378 |
2025-04-09 10:28:08.359515 | TASK [stage-output : Discover log files for compression]
2025-04-09 10:28:08.384561 | orchestrator | skipping: Conditional result was False
2025-04-09 10:28:08.400216 |
2025-04-09 10:28:08.400356 | LOOP [stage-output : Archive everything from logs]
2025-04-09 10:28:08.489558 |
2025-04-09 10:28:08.489709 | PLAY [Post cleanup play]
2025-04-09 10:28:08.513142 |
2025-04-09 10:28:08.513241 | TASK [Set cloud fact (Zuul deployment)]
2025-04-09 10:28:08.580229 | orchestrator | ok
2025-04-09 10:28:08.591687 |
2025-04-09 10:28:08.591793 | TASK [Set cloud fact (local deployment)]
2025-04-09 10:28:08.626060 | orchestrator | skipping: Conditional result was False
2025-04-09 10:28:08.637442 |
2025-04-09 10:28:08.637561 | TASK [Clean the cloud environment]
2025-04-09 10:28:09.274736 | orchestrator | 2025-04-09 10:28:09 - clean up servers
2025-04-09 10:28:10.113777 | orchestrator | 2025-04-09 10:28:10 - testbed-manager
2025-04-09 10:28:10.196069 | orchestrator | 2025-04-09 10:28:10 - testbed-node-4
2025-04-09 10:28:10.308040 | orchestrator | 2025-04-09 10:28:10 - testbed-node-1
2025-04-09 10:28:10.408340 | orchestrator | 2025-04-09 10:28:10 - testbed-node-2
2025-04-09 10:28:10.504712 | orchestrator | 2025-04-09 10:28:10 - testbed-node-0
2025-04-09 10:28:10.607680 | orchestrator | 2025-04-09 10:28:10 - testbed-node-5
2025-04-09 10:28:10.710486 | orchestrator | 2025-04-09 10:28:10 - testbed-node-3
2025-04-09 10:28:10.797538 | orchestrator | 2025-04-09 10:28:10 - clean up keypairs
2025-04-09 10:28:10.813029 | orchestrator | 2025-04-09 10:28:10 - testbed
2025-04-09 10:28:10.840921 | orchestrator | 2025-04-09 10:28:10 - wait for servers to be gone
2025-04-09 10:28:22.029957 | orchestrator | 2025-04-09 10:28:22 - clean up ports
2025-04-09 10:28:22.241072 | orchestrator | 2025-04-09 10:28:22 - 10c5e59c-962c-4bd3-a683-38844b2ca53c
2025-04-09 10:28:22.435467 | orchestrator | 2025-04-09 10:28:22 - 12d3a9b4-a968-4e0f-8cf5-355c19b25367
2025-04-09 10:28:22.873505 | orchestrator | 2025-04-09 10:28:22 - 1e2828a1-47ad-427f-8127-4fd2fdccbbda
2025-04-09 10:28:23.097766 | orchestrator | 2025-04-09 10:28:23 - 250162e4-45ec-42b4-9c5e-00b2748cf7d6
2025-04-09 10:28:23.284921 | orchestrator | 2025-04-09 10:28:23 - 3b982359-a3c9-43ea-abda-9a370441abb1
2025-04-09 10:28:23.480295 | orchestrator | 2025-04-09 10:28:23 - 91ebb0c5-02bd-49da-9dde-7f78a3d3ef27
2025-04-09 10:28:23.659106 | orchestrator | 2025-04-09 10:28:23 - d8b85722-36e3-418e-9ec2-210980647918
2025-04-09 10:28:23.850615 | orchestrator | 2025-04-09 10:28:23 - clean up volumes
2025-04-09 10:28:23.996152 | orchestrator | 2025-04-09 10:28:23 - testbed-volume-3-node-base
2025-04-09 10:28:24.036967 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-2-node-base
2025-04-09 10:28:24.082923 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-manager-base
2025-04-09 10:28:24.121408 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-0-node-base
2025-04-09 10:28:24.160745 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-1-node-base
2025-04-09 10:28:24.205880 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-5-node-base
2025-04-09 10:28:24.243665 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-7-node-1
2025-04-09 10:28:24.283593 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-12-node-0
2025-04-09 10:28:24.323835 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-3-node-3
2025-04-09 10:28:24.364077 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-11-node-5
2025-04-09 10:28:24.403194 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-15-node-3
2025-04-09 10:28:24.441485 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-14-node-2
2025-04-09 10:28:24.478990 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-9-node-3
2025-04-09 10:28:24.515913 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-4-node-base
2025-04-09 10:28:24.553107 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-16-node-4
2025-04-09 10:28:24.588284 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-1-node-1
2025-04-09 10:28:24.623074 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-6-node-0
2025-04-09 10:28:24.663033 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-17-node-5
2025-04-09 10:28:24.703842 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-4-node-4
2025-04-09 10:28:24.746834 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-2-node-2
2025-04-09 10:28:24.787154 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-8-node-2
2025-04-09 10:28:24.828822 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-0-node-0
2025-04-09 10:28:24.869346 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-5-node-5
2025-04-09 10:28:24.912756 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-13-node-1
2025-04-09 10:28:24.951013 | orchestrator | 2025-04-09 10:28:24 - testbed-volume-10-node-4
2025-04-09 10:28:24.992077 | orchestrator | 2025-04-09 10:28:24 - disconnect routers
2025-04-09 10:28:25.097088 | orchestrator | 2025-04-09 10:28:25 - testbed
2025-04-09 10:28:26.462648 | orchestrator | 2025-04-09 10:28:26 - clean up subnets
2025-04-09 10:28:26.501899 | orchestrator | 2025-04-09 10:28:26 - subnet-testbed-management
2025-04-09 10:28:26.640505 | orchestrator | 2025-04-09 10:28:26 - clean up networks
2025-04-09 10:28:26.827328 | orchestrator | 2025-04-09 10:28:26 - net-testbed-management
2025-04-09 10:28:27.087476 | orchestrator | 2025-04-09 10:28:27 - clean up security groups
2025-04-09 10:28:27.120194 | orchestrator | 2025-04-09 10:28:27 - testbed-management
2025-04-09 10:28:27.212640 | orchestrator | 2025-04-09 10:28:27 - testbed-node
2025-04-09 10:28:27.312140 | orchestrator | 2025-04-09 10:28:27 - clean up floating ips
2025-04-09 10:28:27.343301 | orchestrator | 2025-04-09 10:28:27 - 81.163.193.169
2025-04-09 10:28:27.764821 | orchestrator | 2025-04-09 10:28:27 - clean up routers
2025-04-09 10:28:27.819452 | orchestrator | 2025-04-09 10:28:27 - testbed
2025-04-09 10:28:28.692471 | orchestrator | changed
2025-04-09 10:28:28.734741 |
2025-04-09 10:28:28.734827 | PLAY RECAP
2025-04-09 10:28:28.734881 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-04-09 10:28:28.734905 |
2025-04-09 10:28:28.853340 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-04-09 10:28:28.856301 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-04-09 10:28:29.572899 |
2025-04-09 10:28:29.573083 | PLAY [Base post-fetch]
2025-04-09 10:28:29.603127 |
2025-04-09 10:28:29.603265 | TASK [fetch-output : Set log path for multiple nodes]
2025-04-09 10:28:29.671847 | orchestrator | skipping: Conditional result was False
2025-04-09 10:28:29.686712 |
2025-04-09 10:28:29.686877 | TASK [fetch-output : Set log path for single node]
2025-04-09 10:28:29.740069 | orchestrator | ok
2025-04-09 10:28:29.749576 |
2025-04-09 10:28:29.749701 | LOOP [fetch-output : Ensure local output dirs]
2025-04-09 10:28:30.208215 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/0c6e366059cb4f5fa89c10fa6f1e315d/work/logs"
2025-04-09 10:28:30.499855 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0c6e366059cb4f5fa89c10fa6f1e315d/work/artifacts"
2025-04-09 10:28:30.780863 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0c6e366059cb4f5fa89c10fa6f1e315d/work/docs"
2025-04-09 10:28:30.807331 |
2025-04-09 10:28:30.807525 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-04-09 10:28:31.637617 | orchestrator | changed: .d..t...... ./
2025-04-09 10:28:31.638073 | orchestrator | changed: All items complete
2025-04-09 10:28:31.638168 |
2025-04-09 10:28:32.235028 | orchestrator | changed: .d..t...... ./
2025-04-09 10:28:32.815130 | orchestrator | changed: .d..t...... ./
2025-04-09 10:28:32.853330 |
2025-04-09 10:28:32.853469 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-04-09 10:28:32.896883 | orchestrator | skipping: Conditional result was False
2025-04-09 10:28:32.903703 | orchestrator | skipping: Conditional result was False
2025-04-09 10:28:32.953700 |
2025-04-09 10:28:32.953792 | PLAY RECAP
2025-04-09 10:28:32.953845 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-04-09 10:28:32.953873 |
2025-04-09 10:28:33.073773 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-04-09 10:28:33.081357 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-04-09 10:28:33.760139 |
2025-04-09 10:28:33.760286 | PLAY [Base post]
2025-04-09 10:28:33.788582 |
2025-04-09 10:28:33.788704 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-04-09 10:28:34.564025 | orchestrator | changed
2025-04-09 10:28:34.601659 |
2025-04-09 10:28:34.601772 | PLAY RECAP
2025-04-09 10:28:34.601836 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-04-09 10:28:34.601899 |
2025-04-09 10:28:34.707261 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-04-09 10:28:34.715019 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-04-09 10:28:35.435732 |
2025-04-09 10:28:35.435877 | PLAY [Base post-logs]
2025-04-09 10:28:35.452053 |
2025-04-09 10:28:35.452184 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-04-09 10:28:35.913576 | localhost | changed
2025-04-09 10:28:35.920306 |
2025-04-09 10:28:35.920523 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-04-09 10:28:35.953487 | localhost | ok
2025-04-09 10:28:35.963705 |
2025-04-09 10:28:35.963851 | TASK [Set zuul-log-path fact]
2025-04-09 10:28:35.984037 | localhost | ok
2025-04-09 10:28:35.996265 |
2025-04-09 10:28:35.996374 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-04-09 10:28:36.037856 | localhost | ok
2025-04-09 10:28:36.047934 |
2025-04-09 10:28:36.048104 | TASK [upload-logs : Create log directories]
2025-04-09 10:28:36.553185 | localhost | changed
2025-04-09 10:28:36.561156 |
2025-04-09 10:28:36.561304 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-04-09 10:28:37.088453 | localhost -> localhost | ok: Runtime: 0:00:00.007125
2025-04-09 10:28:37.097490 |
2025-04-09 10:28:37.097653 | TASK [upload-logs : Upload logs to log server]
2025-04-09 10:28:37.673352 | localhost | Output suppressed because no_log was given
2025-04-09 10:28:37.676508 |
2025-04-09 10:28:37.676630 | LOOP [upload-logs : Compress console log and json output]
2025-04-09 10:28:37.748003 | localhost | skipping: Conditional result was False
2025-04-09 10:28:37.764799 | localhost | skipping: Conditional result was False
2025-04-09 10:28:37.781511 |
2025-04-09 10:28:37.781694 | LOOP [upload-logs : Upload compressed console log and json output]
2025-04-09 10:28:37.843627 | localhost | skipping: Conditional result was False
2025-04-09 10:28:37.843947 |
2025-04-09 10:28:37.856638 | localhost | skipping: Conditional result was False
2025-04-09 10:28:37.865049 |
2025-04-09 10:28:37.865159 | LOOP [upload-logs : Upload console log and json output]