2025-07-28 00:00:07.455375 | Job console starting
2025-07-28 00:00:07.477767 | Updating git repos
2025-07-28 00:00:07.568120 | Cloning repos into workspace
2025-07-28 00:00:07.859435 | Restoring repo states
2025-07-28 00:00:07.882510 | Merging changes
2025-07-28 00:00:07.882531 | Checking out repos
2025-07-28 00:00:08.447166 | Preparing playbooks
2025-07-28 00:00:09.197676 | Running Ansible setup
2025-07-28 00:00:14.223154 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-07-28 00:00:15.322650 |
2025-07-28 00:00:15.322761 | PLAY [Base pre]
2025-07-28 00:00:15.356063 |
2025-07-28 00:00:15.356178 | TASK [Setup log path fact]
2025-07-28 00:00:15.387161 | orchestrator | ok
2025-07-28 00:00:15.408843 |
2025-07-28 00:00:15.408955 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-28 00:00:15.446176 | orchestrator | ok
2025-07-28 00:00:15.457549 |
2025-07-28 00:00:15.457641 | TASK [emit-job-header : Print job information]
2025-07-28 00:00:15.527776 | # Job Information
2025-07-28 00:00:15.527905 | Ansible Version: 2.16.14
2025-07-28 00:00:15.527933 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-07-28 00:00:15.527961 | Pipeline: periodic-midnight
2025-07-28 00:00:15.527980 | Executor: 521e9411259a
2025-07-28 00:00:15.527997 | Triggered by: https://github.com/osism/testbed
2025-07-28 00:00:15.528015 | Event ID: 992724d138f84735b5f7c4c42ac9fee7
2025-07-28 00:00:15.533386 |
2025-07-28 00:00:15.533464 | LOOP [emit-job-header : Print node information]
2025-07-28 00:00:15.707533 | orchestrator | ok:
2025-07-28 00:00:15.707659 | orchestrator | # Node Information
2025-07-28 00:00:15.707687 | orchestrator | Inventory Hostname: orchestrator
2025-07-28 00:00:15.707708 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-07-28 00:00:15.707727 | orchestrator | Username: zuul-testbed01
2025-07-28 00:00:15.707744 | orchestrator | Distro: Debian 12.11
2025-07-28 00:00:15.707763 | orchestrator | Provider: static-testbed
2025-07-28 00:00:15.707780 | orchestrator | Region:
2025-07-28 00:00:15.707798 | orchestrator | Label: testbed-orchestrator
2025-07-28 00:00:15.707814 | orchestrator | Product Name: OpenStack Nova
2025-07-28 00:00:15.707830 | orchestrator | Interface IP: 81.163.193.140
2025-07-28 00:00:15.723548 |
2025-07-28 00:00:15.723638 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-07-28 00:00:16.751042 | orchestrator -> localhost | changed
2025-07-28 00:00:16.757240 |
2025-07-28 00:00:16.757321 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-07-28 00:00:19.256549 | orchestrator -> localhost | changed
2025-07-28 00:00:19.267641 |
2025-07-28 00:00:19.267733 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-07-28 00:00:20.255564 | orchestrator -> localhost | ok
2025-07-28 00:00:20.261115 |
2025-07-28 00:00:20.261204 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-07-28 00:00:20.298321 | orchestrator | ok
2025-07-28 00:00:20.332882 | orchestrator | included: /var/lib/zuul/builds/469a9d5ea7f2474388e6b39baa9a81ff/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-07-28 00:00:20.351806 |
2025-07-28 00:00:20.351904 | TASK [add-build-sshkey : Create Temp SSH key]
2025-07-28 00:00:21.992351 | orchestrator -> localhost | Generating public/private rsa key pair.
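The key creation the `add-build-sshkey` role performs above can be reproduced with a plain `ssh-keygen` call. This is a minimal sketch, not the role's actual task: the key type, size, and comment are taken from the log output (`RSA 3072`, `zuul-build-sshkey`); the temp directory and the exact flags are assumptions.

```shell
# Sketch of the per-build key generation seen in the log.
WORK_DIR="$(mktemp -d)"               # stands in for .../builds/<uuid>/work
BUILD_UUID="469a9d5ea7f2474388e6b39baa9a81ff"
KEY_FILE="${WORK_DIR}/${BUILD_UUID}_id_rsa"

# -N '' creates the key without a passphrase (a CI key must work
# non-interactively); -C sets the comment shown in the fingerprint line.
ssh-keygen -t rsa -b 3072 -N '' -C zuul-build-sshkey -f "${KEY_FILE}"

ls "${KEY_FILE}" "${KEY_FILE}.pub"
```

The role then installs the public half into `authorized_keys` on every node and loads the private half into the executor's ssh-agent, which is what the later "Enable access via build key" and "Add back temp key" tasks show.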
2025-07-28 00:00:21.993495 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/469a9d5ea7f2474388e6b39baa9a81ff/work/469a9d5ea7f2474388e6b39baa9a81ff_id_rsa
2025-07-28 00:00:21.993566 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/469a9d5ea7f2474388e6b39baa9a81ff/work/469a9d5ea7f2474388e6b39baa9a81ff_id_rsa.pub
2025-07-28 00:00:21.993598 | orchestrator -> localhost | The key fingerprint is:
2025-07-28 00:00:21.993630 | orchestrator -> localhost | SHA256:1fNtiYqfk1MWU0JxafurHOgR851grFi//5zyL95MkeM zuul-build-sshkey
2025-07-28 00:00:21.993655 | orchestrator -> localhost | The key's randomart image is:
2025-07-28 00:00:21.993687 | orchestrator -> localhost | +---[RSA 3072]----+
2025-07-28 00:00:21.993710 | orchestrator -> localhost | | .o...|
2025-07-28 00:00:21.993733 | orchestrator -> localhost | | . ..+ |
2025-07-28 00:00:21.993753 | orchestrator -> localhost | | . o + .|
2025-07-28 00:00:21.993772 | orchestrator -> localhost | | . .=.oo|
2025-07-28 00:00:21.993793 | orchestrator -> localhost | | S + =+=+|
2025-07-28 00:00:21.993818 | orchestrator -> localhost | | + Xo+.=|
2025-07-28 00:00:21.993840 | orchestrator -> localhost | | o =++ Eo|
2025-07-28 00:00:21.993860 | orchestrator -> localhost | | o++.+*.|
2025-07-28 00:00:21.993882 | orchestrator -> localhost | | +o=*=O|
2025-07-28 00:00:21.993903 | orchestrator -> localhost | +----[SHA256]-----+
2025-07-28 00:00:21.993954 | orchestrator -> localhost | ok: Runtime: 0:00:00.415265
2025-07-28 00:00:22.001335 |
2025-07-28 00:00:22.001430 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-07-28 00:00:22.030119 | orchestrator | ok
2025-07-28 00:00:22.052020 | orchestrator | included: /var/lib/zuul/builds/469a9d5ea7f2474388e6b39baa9a81ff/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-07-28 00:00:22.073582 |
2025-07-28 00:00:22.073697 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-07-28 00:00:22.117160 | orchestrator | skipping: Conditional result was False
2025-07-28 00:00:22.124726 |
2025-07-28 00:00:22.124839 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-07-28 00:00:22.712854 | orchestrator | changed
2025-07-28 00:00:22.728588 |
2025-07-28 00:00:22.728691 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-07-28 00:00:23.029802 | orchestrator | ok
2025-07-28 00:00:23.035928 |
2025-07-28 00:00:23.036028 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-07-28 00:00:23.708411 | orchestrator | ok
2025-07-28 00:00:23.714182 |
2025-07-28 00:00:23.714281 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-07-28 00:00:24.183944 | orchestrator | ok
2025-07-28 00:00:24.189719 |
2025-07-28 00:00:24.189807 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-07-28 00:00:24.231248 | orchestrator | skipping: Conditional result was False
2025-07-28 00:00:24.238459 |
2025-07-28 00:00:24.238557 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-07-28 00:00:25.089722 | orchestrator -> localhost | changed
2025-07-28 00:00:25.107390 |
2025-07-28 00:00:25.107501 | TASK [add-build-sshkey : Add back temp key]
2025-07-28 00:00:25.944726 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/469a9d5ea7f2474388e6b39baa9a81ff/work/469a9d5ea7f2474388e6b39baa9a81ff_id_rsa (zuul-build-sshkey)
2025-07-28 00:00:25.944897 | orchestrator -> localhost | ok: Runtime: 0:00:00.018703
2025-07-28 00:00:25.950623 |
2025-07-28 00:00:25.950708 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-07-28 00:00:26.516885 | orchestrator | ok
2025-07-28 00:00:26.522436 |
2025-07-28 00:00:26.522527 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-07-28 00:00:26.586330 | orchestrator | skipping: Conditional result was False
2025-07-28 00:00:26.689880 |
2025-07-28 00:00:26.689979 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-07-28 00:00:27.194520 | orchestrator | ok
2025-07-28 00:00:27.217708 |
2025-07-28 00:00:27.217811 | TASK [validate-host : Define zuul_info_dir fact]
2025-07-28 00:00:27.270803 | orchestrator | ok
2025-07-28 00:00:27.283980 |
2025-07-28 00:00:27.284074 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-07-28 00:00:27.781503 | orchestrator -> localhost | ok
2025-07-28 00:00:27.787471 |
2025-07-28 00:00:27.787554 | TASK [validate-host : Collect information about the host]
2025-07-28 00:00:30.352527 | orchestrator | ok
2025-07-28 00:00:30.371681 |
2025-07-28 00:00:30.371802 | TASK [validate-host : Sanitize hostname]
2025-07-28 00:00:30.449307 | orchestrator | ok
2025-07-28 00:00:30.454374 |
2025-07-28 00:00:30.454467 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-07-28 00:00:31.369027 | orchestrator -> localhost | changed
2025-07-28 00:00:31.374595 |
2025-07-28 00:00:31.374681 | TASK [validate-host : Collect information about zuul worker]
2025-07-28 00:00:31.929565 | orchestrator | ok
2025-07-28 00:00:31.933746 |
2025-07-28 00:00:31.933828 | TASK [validate-host : Write out all zuul information for each host]
2025-07-28 00:00:32.623121 | orchestrator -> localhost | changed
2025-07-28 00:00:32.632223 |
2025-07-28 00:00:32.632315 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-07-28 00:00:32.922362 | orchestrator | ok
2025-07-28 00:00:32.927311 |
2025-07-28 00:00:32.927402 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-07-28 00:01:09.847942 | orchestrator | changed:
2025-07-28 00:01:09.848164 | orchestrator | .d..t...... src/
2025-07-28 00:01:09.848200 | orchestrator | .d..t...... src/github.com/
2025-07-28 00:01:09.848244 | orchestrator | .d..t...... src/github.com/osism/
2025-07-28 00:01:09.848266 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-07-28 00:01:09.848289 | orchestrator | RedHat.yml
2025-07-28 00:01:09.862951 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-07-28 00:01:09.862969 | orchestrator | RedHat.yml
2025-07-28 00:01:09.863021 | orchestrator | = 2.2.0"...
2025-07-28 00:01:24.150213 | orchestrator | 00:01:24.149 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-07-28 00:01:24.178496 | orchestrator | 00:01:24.178 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-07-28 00:01:25.018575 | orchestrator | 00:01:25.018 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-07-28 00:01:25.718290 | orchestrator | 00:01:25.718 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-07-28 00:01:25.971492 | orchestrator | 00:01:25.971 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-07-28 00:01:26.434385 | orchestrator | 00:01:26.434 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-07-28 00:01:26.830885 | orchestrator | 00:01:26.830 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-07-28 00:01:27.677618 | orchestrator | 00:01:27.676 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-07-28 00:01:27.677841 | orchestrator | 00:01:27.676 STDOUT terraform: Providers are signed by their developers.
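The provider resolution above ("Finding ... versions matching ...") is driven by a `required_providers` block in the testbed's Terraform configuration. A minimal sketch of such a constraint file, written from the shell: the `>= 1.53.0` constraint and the provider sources are copied from the log; the filename `versions.tf` and the omission of version constraints for `local` and `null` are assumptions (the log's constraint for those providers is truncated).

```shell
# Write a minimal provider-constraint file; `tofu init` would resolve it to
# concrete versions and record them in .terraform.lock.hcl, as seen below.
cat > versions.tf <<'EOF'
terraform {
  required_providers {
    local = {
      source = "hashicorp/local"
    }
    null = {
      source = "hashicorp/null"
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
  }
}
EOF
grep -c 'source' versions.tf
```

Committing the generated `.terraform.lock.hcl` is what lets later runs reuse exactly the pinned versions, as the init output goes on to explain.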
2025-07-28 00:01:27.677874 | orchestrator | 00:01:27.676 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-07-28 00:01:27.677890 | orchestrator | 00:01:27.676 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-07-28 00:01:27.677903 | orchestrator | 00:01:27.676 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-07-28 00:01:27.677927 | orchestrator | 00:01:27.676 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-07-28 00:01:27.677961 | orchestrator | 00:01:27.676 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-07-28 00:01:27.677966 | orchestrator | 00:01:27.676 STDOUT terraform: you run "tofu init" in the future.
2025-07-28 00:01:27.677980 | orchestrator | 00:01:27.676 STDOUT terraform: OpenTofu has been successfully initialized!
2025-07-28 00:01:27.677985 | orchestrator | 00:01:27.676 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-07-28 00:01:27.678000 | orchestrator | 00:01:27.676 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-07-28 00:01:27.678006 | orchestrator | 00:01:27.676 STDOUT terraform: should now work.
2025-07-28 00:01:27.678056 | orchestrator | 00:01:27.676 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-07-28 00:01:27.678063 | orchestrator | 00:01:27.677 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-07-28 00:01:27.678068 | orchestrator | 00:01:27.677 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-07-28 00:01:27.792823 | orchestrator | 00:01:27.791 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-07-28 00:01:27.792890 | orchestrator | 00:01:27.791 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-07-28 00:01:28.005874 | orchestrator | 00:01:28.005 STDOUT terraform: Created and switched to workspace "ci"!
2025-07-28 00:01:28.005957 | orchestrator | 00:01:28.005 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-07-28 00:01:28.005968 | orchestrator | 00:01:28.005 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-07-28 00:01:28.005973 | orchestrator | 00:01:28.005 STDOUT terraform: for this configuration.
2025-07-28 00:01:28.148359 | orchestrator | 00:01:28.148 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-07-28 00:01:28.148477 | orchestrator | 00:01:28.148 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-07-28 00:01:28.316117 | orchestrator | 00:01:28.315 STDOUT terraform: ci.auto.tfvars
2025-07-28 00:01:28.323872 | orchestrator | 00:01:28.323 STDOUT terraform: default_custom.tf
2025-07-28 00:01:28.485154 | orchestrator | 00:01:28.485 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-07-28 00:01:29.584920 | orchestrator | 00:01:29.584 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
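The repeated Terragrunt WARN lines above spell out their own fix; the migration can be sketched as below. The path is the one the warnings print for this job; adapting it to another environment is up to the reader.

```shell
# Replace the deprecated environment variable with its new name (same value).
unset TERRAGRUNT_TFPATH
export TG_TF_PATH="/home/zuul-testbed01/terraform"

# The deprecated bare subcommands map onto `terragrunt run --`:
#   terragrunt workspace new ci  ->  terragrunt run -- workspace new ci
#   terragrunt fmt               ->  terragrunt run -- fmt
echo "TG_TF_PATH=${TG_TF_PATH}"
```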
2025-07-28 00:01:30.132489 | orchestrator | 00:01:30.132 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-07-28 00:01:30.497863 | orchestrator | 00:01:30.497 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-07-28 00:01:30.498133 | orchestrator | 00:01:30.497 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-07-28 00:01:30.498146 | orchestrator | 00:01:30.497 STDOUT terraform:  + create
2025-07-28 00:01:30.498153 | orchestrator | 00:01:30.497 STDOUT terraform:  <= read (data resources)
2025-07-28 00:01:30.498159 | orchestrator | 00:01:30.497 STDOUT terraform: OpenTofu will perform the following actions:
2025-07-28 00:01:30.498163 | orchestrator | 00:01:30.497 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-07-28 00:01:30.498170 | orchestrator | 00:01:30.498 STDOUT terraform:  # (config refers to values not yet known)
2025-07-28 00:01:30.498214 | orchestrator | 00:01:30.498 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-07-28 00:01:30.498303 | orchestrator | 00:01:30.498 STDOUT terraform:  + checksum = (known after apply)
2025-07-28 00:01:30.498336 | orchestrator | 00:01:30.498 STDOUT terraform:  + created_at = (known after apply)
2025-07-28 00:01:30.498395 | orchestrator | 00:01:30.498 STDOUT terraform:  + file = (known after apply)
2025-07-28 00:01:30.498463 | orchestrator | 00:01:30.498 STDOUT terraform:  + id = (known after apply)
2025-07-28 00:01:30.498564 | orchestrator | 00:01:30.498 STDOUT terraform:  + metadata = (known after apply)
2025-07-28 00:01:30.498613 | orchestrator | 00:01:30.498 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-07-28 00:01:30.498673 | orchestrator | 00:01:30.498 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-07-28 00:01:30.498738 | orchestrator | 00:01:30.498 STDOUT terraform:  + most_recent = true
2025-07-28 00:01:30.498806 | orchestrator | 00:01:30.498 STDOUT terraform:  + name = (known after apply)
2025-07-28 00:01:30.498852 | orchestrator | 00:01:30.498 STDOUT terraform:  + protected = (known after apply)
2025-07-28 00:01:30.498910 | orchestrator | 00:01:30.498 STDOUT terraform:  + region = (known after apply)
2025-07-28 00:01:30.498968 | orchestrator | 00:01:30.498 STDOUT terraform:  + schema = (known after apply)
2025-07-28 00:01:30.499033 | orchestrator | 00:01:30.498 STDOUT terraform:  + size_bytes = (known after apply)
2025-07-28 00:01:30.499084 | orchestrator | 00:01:30.499 STDOUT terraform:  + tags = (known after apply)
2025-07-28 00:01:30.499174 | orchestrator | 00:01:30.499 STDOUT terraform:  + updated_at = (known after apply)
2025-07-28 00:01:30.499212 | orchestrator | 00:01:30.499 STDOUT terraform:  }
2025-07-28 00:01:30.499306 | orchestrator | 00:01:30.499 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-07-28 00:01:30.499369 | orchestrator | 00:01:30.499 STDOUT terraform:  # (config refers to values not yet known)
2025-07-28 00:01:30.499468 | orchestrator | 00:01:30.499 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-07-28 00:01:30.499563 | orchestrator | 00:01:30.499 STDOUT terraform:  + checksum = (known after apply)
2025-07-28 00:01:30.499655 | orchestrator | 00:01:30.499 STDOUT terraform:  + created_at = (known after apply)
2025-07-28 00:01:30.499785 | orchestrator | 00:01:30.499 STDOUT terraform:  + file = (known after apply)
2025-07-28 00:01:30.499847 | orchestrator | 00:01:30.499 STDOUT terraform:  + id = (known after apply)
2025-07-28 00:01:30.499905 | orchestrator | 00:01:30.499 STDOUT terraform:  + metadata = (known after apply)
2025-07-28 00:01:30.499986 | orchestrator | 00:01:30.499 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-07-28 00:01:30.500029 | orchestrator | 00:01:30.499 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-07-28 00:01:30.500088 | orchestrator | 00:01:30.500 STDOUT terraform:  + most_recent = true
2025-07-28 00:01:30.500118 | orchestrator | 00:01:30.500 STDOUT terraform:  + name = (known after apply)
2025-07-28 00:01:30.500164 | orchestrator | 00:01:30.500 STDOUT terraform:  + protected = (known after apply)
2025-07-28 00:01:30.500207 | orchestrator | 00:01:30.500 STDOUT terraform:  + region = (known after apply)
2025-07-28 00:01:30.500258 | orchestrator | 00:01:30.500 STDOUT terraform:  + schema = (known after apply)
2025-07-28 00:01:30.500304 | orchestrator | 00:01:30.500 STDOUT terraform:  + size_bytes = (known after apply)
2025-07-28 00:01:30.500352 | orchestrator | 00:01:30.500 STDOUT terraform:  + tags = (known after apply)
2025-07-28 00:01:30.500417 | orchestrator | 00:01:30.500 STDOUT terraform:  + updated_at = (known after apply)
2025-07-28 00:01:30.500423 | orchestrator | 00:01:30.500 STDOUT terraform:  }
2025-07-28 00:01:30.500465 | orchestrator | 00:01:30.500 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-07-28 00:01:30.500516 | orchestrator | 00:01:30.500 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-07-28 00:01:30.500587 | orchestrator | 00:01:30.500 STDOUT terraform:  + content = (known after apply)
2025-07-28 00:01:30.500639 | orchestrator | 00:01:30.500 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-07-28 00:01:30.500715 | orchestrator | 00:01:30.500 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-07-28 00:01:30.500775 | orchestrator | 00:01:30.500 STDOUT terraform:  + content_md5 = (known after apply)
2025-07-28 00:01:30.500844 | orchestrator | 00:01:30.500 STDOUT terraform:  + content_sha1 = (known after apply)
2025-07-28 00:01:30.500893 | orchestrator | 00:01:30.500 STDOUT terraform:  + content_sha256 = (known after apply)
2025-07-28 00:01:30.500955 | orchestrator | 00:01:30.500 STDOUT terraform:  + content_sha512 = (known after apply)
2025-07-28 00:01:30.501013 | orchestrator | 00:01:30.500 STDOUT terraform:  + directory_permission = "0777"
2025-07-28 00:01:30.501035 | orchestrator | 00:01:30.500 STDOUT terraform:  + file_permission = "0644"
2025-07-28 00:01:30.501097 | orchestrator | 00:01:30.501 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-07-28 00:01:30.501151 | orchestrator | 00:01:30.501 STDOUT terraform:  + id = (known after apply)
2025-07-28 00:01:30.501181 | orchestrator | 00:01:30.501 STDOUT terraform:  }
2025-07-28 00:01:30.501222 | orchestrator | 00:01:30.501 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-07-28 00:01:30.501271 | orchestrator | 00:01:30.501 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-07-28 00:01:30.501357 | orchestrator | 00:01:30.501 STDOUT terraform:  + content = (known after apply)
2025-07-28 00:01:30.501400 | orchestrator | 00:01:30.501 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-07-28 00:01:30.501460 | orchestrator | 00:01:30.501 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-07-28 00:01:30.501523 | orchestrator | 00:01:30.501 STDOUT terraform:  + content_md5 = (known after apply)
2025-07-28 00:01:30.501578 | orchestrator | 00:01:30.501 STDOUT terraform:  + content_sha1 = (known after apply)
2025-07-28 00:01:30.501646 | orchestrator | 00:01:30.501 STDOUT terraform:  + content_sha256 = (known after apply)
2025-07-28 00:01:30.501726 | orchestrator | 00:01:30.501 STDOUT terraform:  + content_sha512 = (known after apply)
2025-07-28 00:01:30.501775 | orchestrator | 00:01:30.501 STDOUT terraform:  + directory_permission = "0777"
2025-07-28 00:01:30.501806 | orchestrator | 00:01:30.501 STDOUT terraform:  + file_permission = "0644"
2025-07-28 00:01:30.501864 | orchestrator | 00:01:30.501 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-07-28 00:01:30.501946 | orchestrator | 00:01:30.501 STDOUT terraform:  + id = (known after apply)
2025-07-28 00:01:30.501951 | orchestrator | 00:01:30.501 STDOUT terraform:  }
2025-07-28 00:01:30.502231 | orchestrator | 00:01:30.502 STDOUT terraform:  # local_file.inventory will be created
2025-07-28 00:01:30.502242 | orchestrator | 00:01:30.502 STDOUT terraform:  + resource "local_file" "inventory" {
2025-07-28 00:01:30.502627 | orchestrator | 00:01:30.502 STDOUT terraform:  + content = (known after apply)
2025-07-28 00:01:30.502892 | orchestrator | 00:01:30.502 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-07-28 00:01:30.503774 | orchestrator | 00:01:30.502 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-07-28 00:01:30.503994 | orchestrator | 00:01:30.503 STDOUT terraform:  + content_md5 = (known after apply)
2025-07-28 00:01:30.504146 | orchestrator | 00:01:30.503 STDOUT terraform:  + content_sha1 = (known after apply)
2025-07-28 00:01:30.504492 | orchestrator | 00:01:30.504 STDOUT terraform:  + content_sha256 = (known after apply)
2025-07-28 00:01:30.504944 | orchestrator | 00:01:30.504 STDOUT terraform:  + content_sha512 = (known after apply)
2025-07-28 00:01:30.505027 | orchestrator | 00:01:30.504 STDOUT terraform:  + directory_permission = "0777"
2025-07-28 00:01:30.505148 | orchestrator | 00:01:30.504 STDOUT terraform:  + file_permission = "0644"
2025-07-28 00:01:30.505371 | orchestrator | 00:01:30.505 STDOUT terraform:  + filename = "inventory.ci"
2025-07-28 00:01:30.505574 | orchestrator | 00:01:30.505 STDOUT terraform:  + id = (known after apply)
2025-07-28 00:01:30.505653 | orchestrator | 00:01:30.505 STDOUT terraform:  }
2025-07-28 00:01:30.505876 | orchestrator | 00:01:30.505 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-07-28 00:01:30.505961 | orchestrator | 00:01:30.505 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-07-28 00:01:30.506409 | orchestrator | 00:01:30.505 STDOUT terraform:  + content = (sensitive value)
2025-07-28 00:01:30.506710 | orchestrator | 00:01:30.506 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-07-28 00:01:30.506750 | orchestrator | 00:01:30.506 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-07-28 00:01:30.506801 | orchestrator | 00:01:30.506 STDOUT terraform:  + content_md5 = (known after apply)
2025-07-28 00:01:30.506848 | orchestrator | 00:01:30.506 STDOUT terraform:  + content_sha1 = (known after apply)
2025-07-28 00:01:30.506920 | orchestrator | 00:01:30.506 STDOUT terraform:  + content_sha256 = (known after apply)
2025-07-28 00:01:30.506955 | orchestrator | 00:01:30.506 STDOUT terraform:  + content_sha512 = (known after apply)
2025-07-28 00:01:30.506984 | orchestrator | 00:01:30.506 STDOUT terraform:  + directory_permission = "0700"
2025-07-28 00:01:30.507027 | orchestrator | 00:01:30.506 STDOUT terraform:  + file_permission = "0600"
2025-07-28 00:01:30.507103 | orchestrator | 00:01:30.507 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-07-28 00:01:30.507111 | orchestrator | 00:01:30.507 STDOUT terraform:  + id = (known after apply)
2025-07-28 00:01:30.507117 | orchestrator | 00:01:30.507 STDOUT terraform:  }
2025-07-28 00:01:30.507163 | orchestrator | 00:01:30.507 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-07-28 00:01:30.507253 | orchestrator | 00:01:30.507 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-07-28 00:01:30.507259 | orchestrator | 00:01:30.507 STDOUT terraform:  + id = (known after apply)
2025-07-28 00:01:30.507263 | orchestrator | 00:01:30.507 STDOUT terraform:  }
2025-07-28 00:01:30.507312 | orchestrator | 00:01:30.507 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-07-28 00:01:30.507395 | orchestrator | 00:01:30.507 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-07-28 00:01:30.507463 | orchestrator | 00:01:30.507 STDOUT terraform:  + attachment = (known after apply)
2025-07-28 00:01:30.507469 | orchestrator | 00:01:30.507 STDOUT terraform:  + availability_zone = "nova"
2025-07-28 00:01:30.507513 | orchestrator | 00:01:30.507 STDOUT terraform:  + id = (known after apply)
2025-07-28 00:01:30.507555 | orchestrator | 00:01:30.507 STDOUT terraform:  + image_id = (known after apply)
2025-07-28 00:01:30.507644 | orchestrator | 00:01:30.507 STDOUT terraform:  + metadata = (known after apply)
2025-07-28 00:01:30.507652 | orchestrator | 00:01:30.507 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-07-28 00:01:30.507732 | orchestrator | 00:01:30.507 STDOUT terraform:  + region = (known after apply)
2025-07-28 00:01:30.507772 | orchestrator | 00:01:30.507 STDOUT terraform:  + size = 80
2025-07-28 00:01:30.507808 | orchestrator | 00:01:30.507 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-28 00:01:30.507837 | orchestrator | 00:01:30.507 STDOUT terraform:  + volume_type = "ssd"
2025-07-28 00:01:30.507844 | orchestrator | 00:01:30.507 STDOUT terraform:  }
2025-07-28 00:01:30.507968 | orchestrator | 00:01:30.507 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-07-28 00:01:30.508075 | orchestrator | 00:01:30.507 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-28 00:01:30.508158 | orchestrator | 00:01:30.508 STDOUT terraform:  + attachment = (known after apply)
2025-07-28 00:01:30.508165 | orchestrator | 00:01:30.508 STDOUT terraform:  + availability_zone = "nova"
2025-07-28 00:01:30.508209 | orchestrator | 00:01:30.508 STDOUT terraform:  + id = (known after apply)
2025-07-28 00:01:30.508248 | orchestrator | 00:01:30.508 STDOUT terraform:  + image_id = (known after apply)
2025-07-28 00:01:30.508298 | orchestrator | 00:01:30.508 STDOUT terraform:  + metadata = (known after apply)
2025-07-28 00:01:30.508368 | orchestrator | 00:01:30.508 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-07-28 00:01:30.508414 | orchestrator | 00:01:30.508 STDOUT terraform:  + region = (known after apply)
2025-07-28 00:01:30.508425 | orchestrator | 00:01:30.508 STDOUT terraform:  + size = 80
2025-07-28 00:01:30.508462 | orchestrator | 00:01:30.508 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-28 00:01:30.508492 | orchestrator | 00:01:30.508 STDOUT terraform:  + volume_type = "ssd"
2025-07-28 00:01:30.508567 | orchestrator | 00:01:30.508 STDOUT terraform:  }
2025-07-28 00:01:30.508572 | orchestrator | 00:01:30.508 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-07-28 00:01:30.508657 | orchestrator | 00:01:30.508 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-28 00:01:30.508734 | orchestrator | 00:01:30.508 STDOUT terraform:  + attachment = (known after apply)
2025-07-28 00:01:30.508753 | orchestrator | 00:01:30.508 STDOUT terraform:  + availability_zone = "nova"
2025-07-28 00:01:30.508801 | orchestrator | 00:01:30.508 STDOUT terraform:  + id = (known after apply)
2025-07-28 00:01:30.508875 | orchestrator | 00:01:30.508 STDOUT terraform:  + image_id = (known after apply)
2025-07-28 00:01:30.508888 | orchestrator | 00:01:30.508 STDOUT terraform:  + metadata = (known after apply)
2025-07-28 00:01:30.508971 | orchestrator | 00:01:30.508 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-07-28 00:01:30.509029 | orchestrator | 00:01:30.508 STDOUT terraform:  + region = (known after apply)
2025-07-28 00:01:30.509041 | orchestrator | 00:01:30.509 STDOUT terraform:  + size = 80
2025-07-28 00:01:30.509088 | orchestrator | 00:01:30.509 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-28 00:01:30.509099 | orchestrator | 00:01:30.509 STDOUT terraform:  + volume_type = "ssd"
2025-07-28 00:01:30.509120 | orchestrator | 00:01:30.509 STDOUT terraform:  }
2025-07-28 00:01:30.509192 | orchestrator | 00:01:30.509 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-07-28 00:01:30.509266 | orchestrator | 00:01:30.509 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-28 00:01:30.509286 | orchestrator | 00:01:30.509 STDOUT terraform:  + attachment = (known after apply)
2025-07-28 00:01:30.509348 | orchestrator | 00:01:30.509 STDOUT terraform:  + availability_zone = "nova"
2025-07-28 00:01:30.509356 | orchestrator | 00:01:30.509 STDOUT terraform:  + id = (known after apply)
2025-07-28 00:01:30.509403 | orchestrator | 00:01:30.509 STDOUT terraform:  + image_id = (known after apply)
2025-07-28 00:01:30.509483 | orchestrator | 00:01:30.509 STDOUT terraform:  + metadata = (known after apply)
2025-07-28 00:01:30.509495 | orchestrator | 00:01:30.509 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-07-28 00:01:30.509551 | orchestrator | 00:01:30.509 STDOUT terraform:  + region = (known after apply)
2025-07-28 00:01:30.509564 | orchestrator | 00:01:30.509 STDOUT terraform:  + size = 80
2025-07-28 00:01:30.509608 | orchestrator | 00:01:30.509 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-28 00:01:30.509615 | orchestrator | 00:01:30.509 STDOUT terraform:  + volume_type = "ssd"
2025-07-28 00:01:30.509625 | orchestrator | 00:01:30.509 STDOUT terraform:  }
2025-07-28 00:01:30.509717 | orchestrator | 00:01:30.509 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-07-28 00:01:30.509788 | orchestrator | 00:01:30.509 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-28 00:01:30.509799 | orchestrator | 00:01:30.509 STDOUT terraform:  + attachment = (known after apply)
2025-07-28 00:01:30.509833 | orchestrator | 00:01:30.509 STDOUT terraform:  + availability_zone = "nova"
2025-07-28 00:01:30.509877 | orchestrator | 00:01:30.509 STDOUT terraform:  + id = (known after apply)
2025-07-28 00:01:30.509948 | orchestrator | 00:01:30.509 STDOUT terraform:  + image_id = (known after apply)
2025-07-28 00:01:30.509966 | orchestrator | 00:01:30.509 STDOUT terraform:  + metadata = (known after apply)
2025-07-28 00:01:30.510031 | orchestrator | 00:01:30.509 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-07-28 00:01:30.510078 | orchestrator | 00:01:30.509 STDOUT terraform:  + region = (known after apply)
2025-07-28 00:01:30.510090 | orchestrator | 00:01:30.510 STDOUT terraform:  + size = 80
2025-07-28 00:01:30.510168 | orchestrator | 00:01:30.510 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-28 00:01:30.510178 | orchestrator | 00:01:30.510 STDOUT terraform:  + volume_type = "ssd"
2025-07-28 00:01:30.510182 | orchestrator | 00:01:30.510 STDOUT terraform:  }
2025-07-28 00:01:30.510246 | orchestrator | 00:01:30.510 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-07-28 00:01:30.510279 | orchestrator | 00:01:30.510 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-28 00:01:30.510321 | orchestrator | 00:01:30.510 STDOUT terraform:  + attachment = (known after apply)
2025-07-28 00:01:30.510334 | orchestrator | 00:01:30.510 STDOUT terraform:  + availability_zone = "nova"
2025-07-28 00:01:30.510389 | orchestrator | 00:01:30.510 STDOUT terraform:  + id = (known after apply)
2025-07-28 00:01:30.510427 | orchestrator | 00:01:30.510 STDOUT terraform:  + image_id = (known after apply)
2025-07-28 00:01:30.510461 | orchestrator | 00:01:30.510 STDOUT terraform:  + metadata = (known after apply)
2025-07-28 00:01:30.510523 | orchestrator | 00:01:30.510 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-07-28 00:01:30.510563 | orchestrator | 00:01:30.510 STDOUT terraform:  + region = (known after apply)
2025-07-28 00:01:30.510575 | orchestrator | 00:01:30.510 STDOUT terraform:  + size = 80
2025-07-28 00:01:30.510606 | orchestrator | 00:01:30.510 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-28 00:01:30.510653 | orchestrator | 00:01:30.510 STDOUT terraform:  + volume_type = "ssd"
2025-07-28 00:01:30.510659 | orchestrator | 00:01:30.510 STDOUT terraform:  }
2025-07-28 00:01:30.510756 | orchestrator | 00:01:30.510 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-07-28 00:01:30.510770 | orchestrator | 00:01:30.510 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-28 00:01:30.510828 | orchestrator | 00:01:30.510 STDOUT terraform:  + attachment = (known after apply)
2025-07-28 00:01:30.510840 | orchestrator | 00:01:30.510 STDOUT terraform:  + availability_zone = "nova"
2025-07-28 00:01:30.510892 | orchestrator | 00:01:30.510 STDOUT terraform:  + id = (known after apply)
2025-07-28 00:01:30.510934 | orchestrator | 00:01:30.510 STDOUT terraform:  + image_id = (known after apply)
2025-07-28 00:01:30.510987 | orchestrator | 00:01:30.510 STDOUT terraform:  + metadata = (known after apply)
2025-07-28 00:01:30.511063 | orchestrator | 00:01:30.510 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-07-28 00:01:30.511070 | orchestrator | 00:01:30.511 STDOUT terraform:  + region = (known after apply)
2025-07-28 00:01:30.511115 | orchestrator | 00:01:30.511 STDOUT terraform:  + size = 80
2025-07-28 00:01:30.511129 | orchestrator | 00:01:30.511 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-28 00:01:30.511172 | orchestrator | 00:01:30.511 STDOUT terraform:  + volume_type = "ssd"
2025-07-28 00:01:30.511181 | orchestrator | 00:01:30.511 STDOUT terraform:  }
2025-07-28 00:01:30.511231 | orchestrator | 00:01:30.511 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-07-28 00:01:30.511291 | orchestrator | 00:01:30.511 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-07-28 00:01:30.511333 | orchestrator | 00:01:30.511 STDOUT terraform:  + attachment = (known after apply)
2025-07-28 00:01:30.511345 | orchestrator | 00:01:30.511 STDOUT terraform:  +
availability_zone = "nova" 2025-07-28 00:01:30.511392 | orchestrator | 00:01:30.511 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.511438 | orchestrator | 00:01:30.511 STDOUT terraform:  + metadata = (known after apply) 2025-07-28 00:01:30.511479 | orchestrator | 00:01:30.511 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-07-28 00:01:30.511531 | orchestrator | 00:01:30.511 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.511538 | orchestrator | 00:01:30.511 STDOUT terraform:  + size = 20 2025-07-28 00:01:30.511620 | orchestrator | 00:01:30.511 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-28 00:01:30.511630 | orchestrator | 00:01:30.511 STDOUT terraform:  + volume_type = "ssd" 2025-07-28 00:01:30.511634 | orchestrator | 00:01:30.511 STDOUT terraform:  } 2025-07-28 00:01:30.511669 | orchestrator | 00:01:30.511 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-07-28 00:01:30.511760 | orchestrator | 00:01:30.511 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-28 00:01:30.511811 | orchestrator | 00:01:30.511 STDOUT terraform:  + attachment = (known after apply) 2025-07-28 00:01:30.511816 | orchestrator | 00:01:30.511 STDOUT terraform:  + availability_zone = "nova" 2025-07-28 00:01:30.511870 | orchestrator | 00:01:30.511 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.511926 | orchestrator | 00:01:30.511 STDOUT terraform:  + metadata = (known after apply) 2025-07-28 00:01:30.511955 | orchestrator | 00:01:30.511 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-07-28 00:01:30.512020 | orchestrator | 00:01:30.511 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.512032 | orchestrator | 00:01:30.511 STDOUT terraform:  + size = 20 2025-07-28 00:01:30.512052 | orchestrator | 00:01:30.512 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-28 00:01:30.512089 | orchestrator | 
00:01:30.512 STDOUT terraform:  + volume_type = "ssd" 2025-07-28 00:01:30.512096 | orchestrator | 00:01:30.512 STDOUT terraform:  } 2025-07-28 00:01:30.512165 | orchestrator | 00:01:30.512 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-07-28 00:01:30.512210 | orchestrator | 00:01:30.512 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-28 00:01:30.512259 | orchestrator | 00:01:30.512 STDOUT terraform:  + attachment = (known after apply) 2025-07-28 00:01:30.512275 | orchestrator | 00:01:30.512 STDOUT terraform:  + availability_zone = "nova" 2025-07-28 00:01:30.512348 | orchestrator | 00:01:30.512 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.512360 | orchestrator | 00:01:30.512 STDOUT terraform:  + metadata = (known after apply) 2025-07-28 00:01:30.512424 | orchestrator | 00:01:30.512 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-07-28 00:01:30.512476 | orchestrator | 00:01:30.512 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.512481 | orchestrator | 00:01:30.512 STDOUT terraform:  + size = 20 2025-07-28 00:01:30.512515 | orchestrator | 00:01:30.512 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-28 00:01:30.512547 | orchestrator | 00:01:30.512 STDOUT terraform:  + volume_type = "ssd" 2025-07-28 00:01:30.512552 | orchestrator | 00:01:30.512 STDOUT terraform:  } 2025-07-28 00:01:30.512611 | orchestrator | 00:01:30.512 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-07-28 00:01:30.512679 | orchestrator | 00:01:30.512 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-28 00:01:30.512722 | orchestrator | 00:01:30.512 STDOUT terraform:  + attachment = (known after apply) 2025-07-28 00:01:30.512759 | orchestrator | 00:01:30.512 STDOUT terraform:  + availability_zone = "nova" 2025-07-28 00:01:30.512809 | orchestrator | 00:01:30.512 STDOUT 
terraform:  + id = (known after apply) 2025-07-28 00:01:30.512842 | orchestrator | 00:01:30.512 STDOUT terraform:  + metadata = (known after apply) 2025-07-28 00:01:30.512892 | orchestrator | 00:01:30.512 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-07-28 00:01:30.512936 | orchestrator | 00:01:30.512 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.512944 | orchestrator | 00:01:30.512 STDOUT terraform:  + size = 20 2025-07-28 00:01:30.512983 | orchestrator | 00:01:30.512 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-28 00:01:30.512997 | orchestrator | 00:01:30.512 STDOUT terraform:  + volume_type = "ssd" 2025-07-28 00:01:30.513018 | orchestrator | 00:01:30.512 STDOUT terraform:  } 2025-07-28 00:01:30.513083 | orchestrator | 00:01:30.513 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-07-28 00:01:30.513153 | orchestrator | 00:01:30.513 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-28 00:01:30.513161 | orchestrator | 00:01:30.513 STDOUT terraform:  + attachment = (known after apply) 2025-07-28 00:01:30.513276 | orchestrator | 00:01:30.513 STDOUT terraform:  + availability_zone = "nova" 2025-07-28 00:01:30.513288 | orchestrator | 00:01:30.513 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.513295 | orchestrator | 00:01:30.513 STDOUT terraform:  + metadata = (known after apply) 2025-07-28 00:01:30.513303 | orchestrator | 00:01:30.513 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-07-28 00:01:30.513388 | orchestrator | 00:01:30.513 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.513407 | orchestrator | 00:01:30.513 STDOUT terraform:  + size = 20 2025-07-28 00:01:30.513416 | orchestrator | 00:01:30.513 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-28 00:01:30.513422 | orchestrator | 00:01:30.513 STDOUT terraform:  + volume_type = "ssd" 2025-07-28 00:01:30.513429 | 
orchestrator | 00:01:30.513 STDOUT terraform:  } 2025-07-28 00:01:30.513513 | orchestrator | 00:01:30.513 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-07-28 00:01:30.513525 | orchestrator | 00:01:30.513 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-28 00:01:30.513594 | orchestrator | 00:01:30.513 STDOUT terraform:  + attachment = (known after apply) 2025-07-28 00:01:30.513604 | orchestrator | 00:01:30.513 STDOUT terraform:  + availability_zone = "nova" 2025-07-28 00:01:30.513620 | orchestrator | 00:01:30.513 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.513675 | orchestrator | 00:01:30.513 STDOUT terraform:  + metadata = (known after apply) 2025-07-28 00:01:30.513732 | orchestrator | 00:01:30.513 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-07-28 00:01:30.513795 | orchestrator | 00:01:30.513 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.513819 | orchestrator | 00:01:30.513 STDOUT terraform:  + size = 20 2025-07-28 00:01:30.513829 | orchestrator | 00:01:30.513 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-28 00:01:30.513835 | orchestrator | 00:01:30.513 STDOUT terraform:  + volume_type = "ssd" 2025-07-28 00:01:30.513842 | orchestrator | 00:01:30.513 STDOUT terraform:  } 2025-07-28 00:01:30.513912 | orchestrator | 00:01:30.513 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-07-28 00:01:30.513961 | orchestrator | 00:01:30.513 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-28 00:01:30.513978 | orchestrator | 00:01:30.513 STDOUT terraform:  + attachment = (known after apply) 2025-07-28 00:01:30.514010 | orchestrator | 00:01:30.513 STDOUT terraform:  + availability_zone = "nova" 2025-07-28 00:01:30.514051 | orchestrator | 00:01:30.513 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.514121 | orchestrator | 
00:01:30.514 STDOUT terraform:  + metadata = (known after apply) 2025-07-28 00:01:30.514146 | orchestrator | 00:01:30.514 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-07-28 00:01:30.514193 | orchestrator | 00:01:30.514 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.514206 | orchestrator | 00:01:30.514 STDOUT terraform:  + size = 20 2025-07-28 00:01:30.514216 | orchestrator | 00:01:30.514 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-28 00:01:30.514253 | orchestrator | 00:01:30.514 STDOUT terraform:  + volume_type = "ssd" 2025-07-28 00:01:30.514276 | orchestrator | 00:01:30.514 STDOUT terraform:  } 2025-07-28 00:01:30.514317 | orchestrator | 00:01:30.514 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-07-28 00:01:30.514375 | orchestrator | 00:01:30.514 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-28 00:01:30.514400 | orchestrator | 00:01:30.514 STDOUT terraform:  + attachment = (known after apply) 2025-07-28 00:01:30.514454 | orchestrator | 00:01:30.514 STDOUT terraform:  + availability_zone = "nova" 2025-07-28 00:01:30.514465 | orchestrator | 00:01:30.514 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.514557 | orchestrator | 00:01:30.514 STDOUT terraform:  + metadata = (known after apply) 2025-07-28 00:01:30.514571 | orchestrator | 00:01:30.514 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-07-28 00:01:30.514580 | orchestrator | 00:01:30.514 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.514612 | orchestrator | 00:01:30.514 STDOUT terraform:  + size = 20 2025-07-28 00:01:30.514627 | orchestrator | 00:01:30.514 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-28 00:01:30.514667 | orchestrator | 00:01:30.514 STDOUT terraform:  + volume_type = "ssd" 2025-07-28 00:01:30.514681 | orchestrator | 00:01:30.514 STDOUT terraform:  } 2025-07-28 00:01:30.514729 | orchestrator | 
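The volume blocks in the plan follow a count-indexed pattern: six 80 GB base volumes named `testbed-volume-<i>-node-base` and nine 20 GB data volumes whose name suffix cycles through nodes 3-5. A minimal HCL sketch that would produce plans of this shape (the variable names and the modulo naming rule are assumptions, not taken from the actual testbed Terraform module):

```hcl
# Hypothetical reconstruction of the volume resources seen in this plan.
# var.node_count and the naming expressions are assumptions.
variable "node_count" {
  default = 6
}

resource "openstack_blockstorage_volume_v3" "node_base_volume" {
  count             = var.node_count
  name              = "testbed-volume-${count.index}-node-base"
  availability_zone = "nova"
  size              = 80
  volume_type       = "ssd"
}

# Extra data volumes; the suffix cycles through nodes 3-5, matching
# e.g. "testbed-volume-0-node-3", "testbed-volume-1-node-4".
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count             = 9
  name              = "testbed-volume-${count.index}-node-${count.index % 3 + 3}"
  availability_zone = "nova"
  size              = 20
  volume_type       = "ssd"
}
```

`image_id` shows as `(known after apply)` only on the base volumes, which suggests they are created from an image while the data volumes are blank.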
  # openstack_blockstorage_volume_v3.node_volume[8] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-8-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (sensitive value)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
00:01:30.521 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-28 00:01:30.521579 | orchestrator | 00:01:30.521 STDOUT terraform:  + all_tags = (known after apply) 2025-07-28 00:01:30.521603 | orchestrator | 00:01:30.521 STDOUT terraform:  + availability_zone = "nova" 2025-07-28 00:01:30.521615 | orchestrator | 00:01:30.521 STDOUT terraform:  + config_drive = true 2025-07-28 00:01:30.521713 | orchestrator | 00:01:30.521 STDOUT terraform:  + created = (known after apply) 2025-07-28 00:01:30.521719 | orchestrator | 00:01:30.521 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-28 00:01:30.521726 | orchestrator | 00:01:30.521 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-28 00:01:30.521777 | orchestrator | 00:01:30.521 STDOUT terraform:  + force_delete = false 2025-07-28 00:01:30.521783 | orchestrator | 00:01:30.521 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-28 00:01:30.521817 | orchestrator | 00:01:30.521 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.521852 | orchestrator | 00:01:30.521 STDOUT terraform:  + image_id = (known after apply) 2025-07-28 00:01:30.521904 | orchestrator | 00:01:30.521 STDOUT terraform:  + image_name = (known after apply) 2025-07-28 00:01:30.521909 | orchestrator | 00:01:30.521 STDOUT terraform:  + key_pair = "testbed" 2025-07-28 00:01:30.521979 | orchestrator | 00:01:30.521 STDOUT terraform:  + name = "testbed-node-4" 2025-07-28 00:01:30.521990 | orchestrator | 00:01:30.521 STDOUT terraform:  + power_state = "active" 2025-07-28 00:01:30.521994 | orchestrator | 00:01:30.521 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.522030 | orchestrator | 00:01:30.521 STDOUT terraform:  + security_groups = (known after apply) 2025-07-28 00:01:30.522054 | orchestrator | 00:01:30.522 STDOUT terraform:  + stop_before_destroy = false 2025-07-28 00:01:30.522094 | orchestrator | 00:01:30.522 STDOUT terraform:  + updated = (known after apply) 2025-07-28 
00:01:30.522148 | orchestrator | 00:01:30.522 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-28 00:01:30.522156 | orchestrator | 00:01:30.522 STDOUT terraform:  + block_device { 2025-07-28 00:01:30.522193 | orchestrator | 00:01:30.522 STDOUT terraform:  + boot_index = 0 2025-07-28 00:01:30.522200 | orchestrator | 00:01:30.522 STDOUT terraform:  + delete_on_termination = false 2025-07-28 00:01:30.522236 | orchestrator | 00:01:30.522 STDOUT terraform:  + destination_type = "volume" 2025-07-28 00:01:30.522290 | orchestrator | 00:01:30.522 STDOUT terraform:  + multiattach = false 2025-07-28 00:01:30.522338 | orchestrator | 00:01:30.522 STDOUT terraform:  + source_type = "volume" 2025-07-28 00:01:30.522379 | orchestrator | 00:01:30.522 STDOUT terraform:  + uuid = (known after apply) 2025-07-28 00:01:30.522384 | orchestrator | 00:01:30.522 STDOUT terraform:  } 2025-07-28 00:01:30.522388 | orchestrator | 00:01:30.522 STDOUT terraform:  + network { 2025-07-28 00:01:30.522420 | orchestrator | 00:01:30.522 STDOUT terraform:  + access_network = false 2025-07-28 00:01:30.522436 | orchestrator | 00:01:30.522 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-28 00:01:30.522450 | orchestrator | 00:01:30.522 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-28 00:01:30.522468 | orchestrator | 00:01:30.522 STDOUT terraform:  + mac = (known after apply) 2025-07-28 00:01:30.522476 | orchestrator | 00:01:30.522 STDOUT terraform:  + name = (known after apply) 2025-07-28 00:01:30.522482 | orchestrator | 00:01:30.522 STDOUT terraform:  + port = (known after apply) 2025-07-28 00:01:30.522520 | orchestrator | 00:01:30.522 STDOUT terraform:  + uuid = (known after apply) 2025-07-28 00:01:30.522527 | orchestrator | 00:01:30.522 STDOUT terraform:  } 2025-07-28 00:01:30.522533 | orchestrator | 00:01:30.522 STDOUT terraform:  } 2025-07-28 00:01:30.522590 | orchestrator | 00:01:30.522 STDOUT terraform:  # 
openstack_compute_instance_v2.node_server[5] will be created 2025-07-28 00:01:30.522633 | orchestrator | 00:01:30.522 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-28 00:01:30.522659 | orchestrator | 00:01:30.522 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-28 00:01:30.522696 | orchestrator | 00:01:30.522 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-28 00:01:30.522737 | orchestrator | 00:01:30.522 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-28 00:01:30.522786 | orchestrator | 00:01:30.522 STDOUT terraform:  + all_tags = (known after apply) 2025-07-28 00:01:30.522820 | orchestrator | 00:01:30.522 STDOUT terraform:  + availability_zone = "nova" 2025-07-28 00:01:30.522853 | orchestrator | 00:01:30.522 STDOUT terraform:  + config_drive = true 2025-07-28 00:01:30.522878 | orchestrator | 00:01:30.522 STDOUT terraform:  + created = (known after apply) 2025-07-28 00:01:30.522882 | orchestrator | 00:01:30.522 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-28 00:01:30.522902 | orchestrator | 00:01:30.522 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-28 00:01:30.522906 | orchestrator | 00:01:30.522 STDOUT terraform:  + force_delete = false 2025-07-28 00:01:30.522925 | orchestrator | 00:01:30.522 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-28 00:01:30.522992 | orchestrator | 00:01:30.522 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.522999 | orchestrator | 00:01:30.522 STDOUT terraform:  + image_id = (known after apply) 2025-07-28 00:01:30.523045 | orchestrator | 00:01:30.522 STDOUT terraform:  + image_name = (known after apply) 2025-07-28 00:01:30.523114 | orchestrator | 00:01:30.523 STDOUT terraform:  + key_pair = "testbed" 2025-07-28 00:01:30.523121 | orchestrator | 00:01:30.523 STDOUT terraform:  + name = "testbed-node-5" 2025-07-28 00:01:30.523151 | orchestrator | 00:01:30.523 STDOUT terraform:  + 
power_state = "active" 2025-07-28 00:01:30.523155 | orchestrator | 00:01:30.523 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.523178 | orchestrator | 00:01:30.523 STDOUT terraform:  + security_groups = (known after apply) 2025-07-28 00:01:30.523185 | orchestrator | 00:01:30.523 STDOUT terraform:  + stop_before_destroy = false 2025-07-28 00:01:30.523254 | orchestrator | 00:01:30.523 STDOUT terraform:  + updated = (known after apply) 2025-07-28 00:01:30.523263 | orchestrator | 00:01:30.523 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-28 00:01:30.523272 | orchestrator | 00:01:30.523 STDOUT terraform:  + block_device { 2025-07-28 00:01:30.523311 | orchestrator | 00:01:30.523 STDOUT terraform:  + boot_index = 0 2025-07-28 00:01:30.523318 | orchestrator | 00:01:30.523 STDOUT terraform:  + delete_on_termination = false 2025-07-28 00:01:30.523364 | orchestrator | 00:01:30.523 STDOUT terraform:  + destination_type = "volume" 2025-07-28 00:01:30.523390 | orchestrator | 00:01:30.523 STDOUT terraform:  + multiattach = false 2025-07-28 00:01:30.523395 | orchestrator | 00:01:30.523 STDOUT terraform:  + source_type = "volume" 2025-07-28 00:01:30.523434 | orchestrator | 00:01:30.523 STDOUT terraform:  + uuid = (known after apply) 2025-07-28 00:01:30.523455 | orchestrator | 00:01:30.523 STDOUT terraform:  } 2025-07-28 00:01:30.523461 | orchestrator | 00:01:30.523 STDOUT terraform:  + network { 2025-07-28 00:01:30.523465 | orchestrator | 00:01:30.523 STDOUT terraform:  + access_network = false 2025-07-28 00:01:30.523508 | orchestrator | 00:01:30.523 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-28 00:01:30.523526 | orchestrator | 00:01:30.523 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-28 00:01:30.523593 | orchestrator | 00:01:30.523 STDOUT terraform:  + mac = (known after apply) 2025-07-28 00:01:30.523625 | orchestrator | 00:01:30.523 STDOUT terraform:  + name = (known after apply) 
2025-07-28 00:01:30.523634 | orchestrator | 00:01:30.523 STDOUT terraform:  + port = (known after apply) 2025-07-28 00:01:30.523655 | orchestrator | 00:01:30.523 STDOUT terraform:  + uuid = (known after apply) 2025-07-28 00:01:30.523753 | orchestrator | 00:01:30.523 STDOUT terraform:  } 2025-07-28 00:01:30.523759 | orchestrator | 00:01:30.523 STDOUT terraform:  } 2025-07-28 00:01:30.523766 | orchestrator | 00:01:30.523 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-07-28 00:01:30.523769 | orchestrator | 00:01:30.523 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-07-28 00:01:30.523773 | orchestrator | 00:01:30.523 STDOUT terraform:  + fingerprint = (known after apply) 2025-07-28 00:01:30.523805 | orchestrator | 00:01:30.523 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.523822 | orchestrator | 00:01:30.523 STDOUT terraform:  + name = "testbed" 2025-07-28 00:01:30.523838 | orchestrator | 00:01:30.523 STDOUT terraform:  + private_key = (sensitive value) 2025-07-28 00:01:30.523844 | orchestrator | 00:01:30.523 STDOUT terraform:  + public_key = (known after apply) 2025-07-28 00:01:30.523847 | orchestrator | 00:01:30.523 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.523901 | orchestrator | 00:01:30.523 STDOUT terraform:  + user_id = (known after apply) 2025-07-28 00:01:30.523944 | orchestrator | 00:01:30.523 STDOUT terraform:  } 2025-07-28 00:01:30.523962 | orchestrator | 00:01:30.523 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-07-28 00:01:30.523984 | orchestrator | 00:01:30.523 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-28 00:01:30.524030 | orchestrator | 00:01:30.523 STDOUT terraform:  + device = (known after apply) 2025-07-28 00:01:30.524035 | orchestrator | 00:01:30.523 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.524049 | 
orchestrator | 00:01:30.524 STDOUT terraform:  + instance_id = (known after apply) 2025-07-28 00:01:30.524125 | orchestrator | 00:01:30.524 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.524131 | orchestrator | 00:01:30.524 STDOUT terraform:  + volume_id = (known after apply) 2025-07-28 00:01:30.524134 | orchestrator | 00:01:30.524 STDOUT terraform:  } 2025-07-28 00:01:30.524171 | orchestrator | 00:01:30.524 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-07-28 00:01:30.524207 | orchestrator | 00:01:30.524 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-28 00:01:30.524317 | orchestrator | 00:01:30.524 STDOUT terraform:  + device = (known after apply) 2025-07-28 00:01:30.524402 | orchestrator | 00:01:30.524 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.524409 | orchestrator | 00:01:30.524 STDOUT terraform:  + instance_id = (known after apply) 2025-07-28 00:01:30.524413 | orchestrator | 00:01:30.524 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.524486 | orchestrator | 00:01:30.524 STDOUT terraform:  + volume_id = (known after apply) 2025-07-28 00:01:30.524529 | orchestrator | 00:01:30.524 STDOUT terraform:  } 2025-07-28 00:01:30.524554 | orchestrator | 00:01:30.524 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-07-28 00:01:30.524559 | orchestrator | 00:01:30.524 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-28 00:01:30.524563 | orchestrator | 00:01:30.524 STDOUT terraform:  + device = (known after apply) 2025-07-28 00:01:30.524582 | orchestrator | 00:01:30.524 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.524586 | orchestrator | 00:01:30.524 STDOUT terraform:  + instance_id = (known after apply) 2025-07-28 00:01:30.524592 | orchestrator | 00:01:30.524 STDOUT 
terraform:  + region = (known after apply) 2025-07-28 00:01:30.524596 | orchestrator | 00:01:30.524 STDOUT terraform:  + volume_id = (known after apply) 2025-07-28 00:01:30.524600 | orchestrator | 00:01:30.524 STDOUT terraform:  } 2025-07-28 00:01:30.524661 | orchestrator | 00:01:30.524 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-07-28 00:01:30.524667 | orchestrator | 00:01:30.524 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-28 00:01:30.524706 | orchestrator | 00:01:30.524 STDOUT terraform:  + device = (known after apply) 2025-07-28 00:01:30.524710 | orchestrator | 00:01:30.524 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.524716 | orchestrator | 00:01:30.524 STDOUT terraform:  + instance_id = (known after apply) 2025-07-28 00:01:30.524754 | orchestrator | 00:01:30.524 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.524774 | orchestrator | 00:01:30.524 STDOUT terraform:  + volume_id = (known after apply) 2025-07-28 00:01:30.524780 | orchestrator | 00:01:30.524 STDOUT terraform:  } 2025-07-28 00:01:30.524834 | orchestrator | 00:01:30.524 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-07-28 00:01:30.524881 | orchestrator | 00:01:30.524 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-28 00:01:30.524901 | orchestrator | 00:01:30.524 STDOUT terraform:  + device = (known after apply) 2025-07-28 00:01:30.524934 | orchestrator | 00:01:30.524 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.525008 | orchestrator | 00:01:30.524 STDOUT terraform:  + instance_id = (known after apply) 2025-07-28 00:01:30.525041 | orchestrator | 00:01:30.524 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.525057 | orchestrator | 00:01:30.524 STDOUT terraform:  + volume_id = (known after apply) 
2025-07-28 00:01:30.525061 | orchestrator | 00:01:30.524 STDOUT terraform:  } 2025-07-28 00:01:30.525079 | orchestrator | 00:01:30.525 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-07-28 00:01:30.525137 | orchestrator | 00:01:30.525 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-28 00:01:30.525143 | orchestrator | 00:01:30.525 STDOUT terraform:  + device = (known after apply) 2025-07-28 00:01:30.525174 | orchestrator | 00:01:30.525 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.525179 | orchestrator | 00:01:30.525 STDOUT terraform:  + instance_id = (known after apply) 2025-07-28 00:01:30.525215 | orchestrator | 00:01:30.525 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.525220 | orchestrator | 00:01:30.525 STDOUT terraform:  + volume_id = (known after apply) 2025-07-28 00:01:30.525226 | orchestrator | 00:01:30.525 STDOUT terraform:  } 2025-07-28 00:01:30.525277 | orchestrator | 00:01:30.525 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-07-28 00:01:30.525353 | orchestrator | 00:01:30.525 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-28 00:01:30.525387 | orchestrator | 00:01:30.525 STDOUT terraform:  + device = (known after apply) 2025-07-28 00:01:30.525393 | orchestrator | 00:01:30.525 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.525397 | orchestrator | 00:01:30.525 STDOUT terraform:  + instance_id = (known after apply) 2025-07-28 00:01:30.525420 | orchestrator | 00:01:30.525 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.525427 | orchestrator | 00:01:30.525 STDOUT terraform:  + volume_id = (known after apply) 2025-07-28 00:01:30.525467 | orchestrator | 00:01:30.525 STDOUT terraform:  } 2025-07-28 00:01:30.525510 | orchestrator | 00:01:30.525 STDOUT terraform:  
# openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-07-28 00:01:30.525548 | orchestrator | 00:01:30.525 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-28 00:01:30.525565 | orchestrator | 00:01:30.525 STDOUT terraform:  + device = (known after apply) 2025-07-28 00:01:30.525583 | orchestrator | 00:01:30.525 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.525629 | orchestrator | 00:01:30.525 STDOUT terraform:  + instance_id = (known after apply) 2025-07-28 00:01:30.525635 | orchestrator | 00:01:30.525 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.525659 | orchestrator | 00:01:30.525 STDOUT terraform:  + volume_id = (known after apply) 2025-07-28 00:01:30.525663 | orchestrator | 00:01:30.525 STDOUT terraform:  } 2025-07-28 00:01:30.525775 | orchestrator | 00:01:30.525 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-07-28 00:01:30.525808 | orchestrator | 00:01:30.525 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-07-28 00:01:30.525817 | orchestrator | 00:01:30.525 STDOUT terraform:  + device = (known after apply) 2025-07-28 00:01:30.525835 | orchestrator | 00:01:30.525 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.525841 | orchestrator | 00:01:30.525 STDOUT terraform:  + instance_id = (known after apply) 2025-07-28 00:01:30.525867 | orchestrator | 00:01:30.525 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.525896 | orchestrator | 00:01:30.525 STDOUT terraform:  + volume_id = (known after apply) 2025-07-28 00:01:30.525903 | orchestrator | 00:01:30.525 STDOUT terraform:  } 2025-07-28 00:01:30.526005 | orchestrator | 00:01:30.525 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-07-28 00:01:30.526069 | orchestrator | 00:01:30.525 
STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-07-28 00:01:30.526122 | orchestrator | 00:01:30.525 STDOUT terraform:  + fixed_ip = (known after apply) 2025-07-28 00:01:30.526127 | orchestrator | 00:01:30.526 STDOUT terraform:  + floating_ip = (known after apply) 2025-07-28 00:01:30.526138 | orchestrator | 00:01:30.526 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.526142 | orchestrator | 00:01:30.526 STDOUT terraform:  + port_id = (known after apply) 2025-07-28 00:01:30.526146 | orchestrator | 00:01:30.526 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.526151 | orchestrator | 00:01:30.526 STDOUT terraform:  } 2025-07-28 00:01:30.526231 | orchestrator | 00:01:30.526 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-07-28 00:01:30.526239 | orchestrator | 00:01:30.526 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-07-28 00:01:30.526300 | orchestrator | 00:01:30.526 STDOUT terraform:  + address = (known after apply) 2025-07-28 00:01:30.526311 | orchestrator | 00:01:30.526 STDOUT terraform:  + all_tags = (known after apply) 2025-07-28 00:01:30.526315 | orchestrator | 00:01:30.526 STDOUT terraform:  + dns_domain = (known after apply) 2025-07-28 00:01:30.526320 | orchestrator | 00:01:30.526 STDOUT terraform:  + dns_name = (known after apply) 2025-07-28 00:01:30.526348 | orchestrator | 00:01:30.526 STDOUT terraform:  + fixed_ip = (known after apply) 2025-07-28 00:01:30.526432 | orchestrator | 00:01:30.526 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.526442 | orchestrator | 00:01:30.526 STDOUT terraform:  + pool = "public" 2025-07-28 00:01:30.526446 | orchestrator | 00:01:30.526 STDOUT terraform:  + port_id = (known after apply) 2025-07-28 00:01:30.526496 | orchestrator | 00:01:30.526 STDOUT terraform:  + region = (known after apply) 2025-07-28 
00:01:30.526508 | orchestrator | 00:01:30.526 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-28 00:01:30.526546 | orchestrator | 00:01:30.526 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-28 00:01:30.526551 | orchestrator | 00:01:30.526 STDOUT terraform:  } 2025-07-28 00:01:30.526607 | orchestrator | 00:01:30.526 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-07-28 00:01:30.526643 | orchestrator | 00:01:30.526 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-07-28 00:01:30.526700 | orchestrator | 00:01:30.526 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-28 00:01:30.526720 | orchestrator | 00:01:30.526 STDOUT terraform:  + all_tags = (known after apply) 2025-07-28 00:01:30.526776 | orchestrator | 00:01:30.526 STDOUT terraform:  + availability_zone_hints = [ 2025-07-28 00:01:30.526782 | orchestrator | 00:01:30.526 STDOUT terraform:  + "nova", 2025-07-28 00:01:30.526791 | orchestrator | 00:01:30.526 STDOUT terraform:  ] 2025-07-28 00:01:30.526796 | orchestrator | 00:01:30.526 STDOUT terraform:  + dns_domain = (known after apply) 2025-07-28 00:01:30.526830 | orchestrator | 00:01:30.526 STDOUT terraform:  + external = (known after apply) 2025-07-28 00:01:30.526897 | orchestrator | 00:01:30.526 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.526907 | orchestrator | 00:01:30.526 STDOUT terraform:  + mtu = (known after apply) 2025-07-28 00:01:30.526958 | orchestrator | 00:01:30.526 STDOUT terraform:  + name = "net-testbed-management" 2025-07-28 00:01:30.526970 | orchestrator | 00:01:30.526 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-28 00:01:30.527013 | orchestrator | 00:01:30.526 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-28 00:01:30.527078 | orchestrator | 00:01:30.527 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.527091 | orchestrator | 00:01:30.527 
STDOUT terraform:  + shared = (known after apply) 2025-07-28 00:01:30.527143 | orchestrator | 00:01:30.527 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-28 00:01:30.527151 | orchestrator | 00:01:30.527 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-07-28 00:01:30.527221 | orchestrator | 00:01:30.527 STDOUT terraform:  + segments (known after apply) 2025-07-28 00:01:30.527226 | orchestrator | 00:01:30.527 STDOUT terraform:  } 2025-07-28 00:01:30.527236 | orchestrator | 00:01:30.527 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-07-28 00:01:30.527278 | orchestrator | 00:01:30.527 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-07-28 00:01:30.527314 | orchestrator | 00:01:30.527 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-28 00:01:30.527355 | orchestrator | 00:01:30.527 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-28 00:01:30.527373 | orchestrator | 00:01:30.527 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-28 00:01:30.527410 | orchestrator | 00:01:30.527 STDOUT terraform:  + all_tags = (known after apply) 2025-07-28 00:01:30.527479 | orchestrator | 00:01:30.527 STDOUT terraform:  + device_id = (known after apply) 2025-07-28 00:01:30.527493 | orchestrator | 00:01:30.527 STDOUT terraform:  + device_owner = (known after apply) 2025-07-28 00:01:30.527499 | orchestrator | 00:01:30.527 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-28 00:01:30.527539 | orchestrator | 00:01:30.527 STDOUT terraform:  + dns_name = (known after apply) 2025-07-28 00:01:30.527580 | orchestrator | 00:01:30.527 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.527639 | orchestrator | 00:01:30.527 STDOUT terraform:  + mac_address = (known after apply) 2025-07-28 00:01:30.527645 | orchestrator | 00:01:30.527 STDOUT terraform:  + network_id = (known after apply) 2025-07-28 
00:01:30.527700 | orchestrator | 00:01:30.527 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-28 00:01:30.527754 | orchestrator | 00:01:30.527 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-28 00:01:30.527780 | orchestrator | 00:01:30.527 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.527837 | orchestrator | 00:01:30.527 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-28 00:01:30.527848 | orchestrator | 00:01:30.527 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-28 00:01:30.527854 | orchestrator | 00:01:30.527 STDOUT terraform:  + allowed_address_pairs { 2025-07-28 00:01:30.527887 | orchestrator | 00:01:30.527 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-28 00:01:30.527904 | orchestrator | 00:01:30.527 STDOUT terraform:  } 2025-07-28 00:01:30.527908 | orchestrator | 00:01:30.527 STDOUT terraform:  + allowed_address_pairs { 2025-07-28 00:01:30.527940 | orchestrator | 00:01:30.527 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-28 00:01:30.527951 | orchestrator | 00:01:30.527 STDOUT terraform:  } 2025-07-28 00:01:30.528024 | orchestrator | 00:01:30.527 STDOUT terraform:  + binding (known after apply) 2025-07-28 00:01:30.528034 | orchestrator | 00:01:30.527 STDOUT terraform:  + fixed_ip { 2025-07-28 00:01:30.528038 | orchestrator | 00:01:30.527 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-07-28 00:01:30.528042 | orchestrator | 00:01:30.527 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-28 00:01:30.528047 | orchestrator | 00:01:30.528 STDOUT terraform:  } 2025-07-28 00:01:30.528051 | orchestrator | 00:01:30.528 STDOUT terraform:  } 2025-07-28 00:01:30.528168 | orchestrator | 00:01:30.528 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-07-28 00:01:30.528214 | orchestrator | 00:01:30.528 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 
2025-07-28 00:01:30.528224 | orchestrator | 00:01:30.528 STDOUT terraform:
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  #
openstack_networking_subnet_v2.subnet_management will be created 2025-07-28 00:01:30.541523 | orchestrator | 00:01:30.540 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-07-28 00:01:30.541567 | orchestrator | 00:01:30.540 STDOUT terraform:  + all_tags = (known after apply) 2025-07-28 00:01:30.541777 | orchestrator | 00:01:30.540 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-07-28 00:01:30.542223 | orchestrator | 00:01:30.540 STDOUT terraform:  + dns_nameservers = [ 2025-07-28 00:01:30.542738 | orchestrator | 00:01:30.540 STDOUT terraform:  + "8.8.8.8", 2025-07-28 00:01:30.542974 | orchestrator | 00:01:30.540 STDOUT terraform:  + "9.9.9.9", 2025-07-28 00:01:30.543002 | orchestrator | 00:01:30.540 STDOUT terraform:  ] 2025-07-28 00:01:30.543014 | orchestrator | 00:01:30.540 STDOUT terraform:  + enable_dhcp = true 2025-07-28 00:01:30.543025 | orchestrator | 00:01:30.540 STDOUT terraform:  + gateway_ip = (known after apply) 2025-07-28 00:01:30.543037 | orchestrator | 00:01:30.540 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.543048 | orchestrator | 00:01:30.540 STDOUT terraform:  + ip_version = 4 2025-07-28 00:01:30.543059 | orchestrator | 00:01:30.540 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-07-28 00:01:30.543070 | orchestrator | 00:01:30.540 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-07-28 00:01:30.543081 | orchestrator | 00:01:30.540 STDOUT terraform:  + name = "subnet-testbed-management" 2025-07-28 00:01:30.543092 | orchestrator | 00:01:30.540 STDOUT terraform:  + network_id = (known after apply) 2025-07-28 00:01:30.543103 | orchestrator | 00:01:30.540 STDOUT terraform:  + no_gateway = false 2025-07-28 00:01:30.543113 | orchestrator | 00:01:30.540 STDOUT terraform:  + region = (known after apply) 2025-07-28 00:01:30.543124 | orchestrator | 00:01:30.540 STDOUT terraform:  + service_types = (known after apply) 2025-07-28 00:01:30.543135 | orchestrator | 
00:01:30.540 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-28 00:01:30.543146 | orchestrator | 00:01:30.540 STDOUT terraform:  + allocation_pool { 2025-07-28 00:01:30.543157 | orchestrator | 00:01:30.540 STDOUT terraform:  + end = "192.168.31.250" 2025-07-28 00:01:30.543167 | orchestrator | 00:01:30.541 STDOUT terraform:  + start = "192.168.31.200" 2025-07-28 00:01:30.543178 | orchestrator | 00:01:30.541 STDOUT terraform:  } 2025-07-28 00:01:30.543190 | orchestrator | 00:01:30.541 STDOUT terraform:  } 2025-07-28 00:01:30.543201 | orchestrator | 00:01:30.541 STDOUT terraform:  # terraform_data.image will be created 2025-07-28 00:01:30.543212 | orchestrator | 00:01:30.541 STDOUT terraform:  + resource "terraform_data" "image" { 2025-07-28 00:01:30.543223 | orchestrator | 00:01:30.541 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.543233 | orchestrator | 00:01:30.541 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-07-28 00:01:30.543244 | orchestrator | 00:01:30.541 STDOUT terraform:  + output = (known after apply) 2025-07-28 00:01:30.543255 | orchestrator | 00:01:30.541 STDOUT terraform:  } 2025-07-28 00:01:30.543266 | orchestrator | 00:01:30.541 STDOUT terraform:  # terraform_data.image_node will be created 2025-07-28 00:01:30.543284 | orchestrator | 00:01:30.541 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-07-28 00:01:30.543295 | orchestrator | 00:01:30.541 STDOUT terraform:  + id = (known after apply) 2025-07-28 00:01:30.543306 | orchestrator | 00:01:30.541 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-07-28 00:01:30.543318 | orchestrator | 00:01:30.541 STDOUT terraform:  + output = (known after apply) 2025-07-28 00:01:30.543329 | orchestrator | 00:01:30.541 STDOUT terraform:  } 2025-07-28 00:01:30.543340 | orchestrator | 00:01:30.541 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 
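Editor's note: the `subnet_management` resource in the plan above declares CIDR `192.168.16.0/20` with an allocation pool of `192.168.31.200`–`192.168.31.250`. As an illustrative sanity check (not part of the job itself), Python's `ipaddress` module can confirm the pool sits inside that CIDR:

```python
import ipaddress

# Values taken verbatim from the terraform plan output above.
cidr = ipaddress.ip_network("192.168.16.0/20")
pool_start = ipaddress.ip_address("192.168.31.200")
pool_end = ipaddress.ip_address("192.168.31.250")

# Both pool endpoints must fall inside the subnet CIDR
# (a /20 spans 192.168.16.0 through 192.168.31.255).
assert pool_start in cidr and pool_end in cidr

# Usable host count: 4096 addresses minus network and broadcast.
print(cidr.num_addresses - 2)  # → 4094
```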
2025-07-28 00:01:30.543351 | orchestrator | 00:01:30.541 STDOUT terraform: Changes to Outputs: 2025-07-28 00:01:30.543958 | orchestrator | 00:01:30.541 STDOUT terraform:  + manager_address = (sensitive value) 2025-07-28 00:01:30.544632 | orchestrator | 00:01:30.541 STDOUT terraform:  + private_key = (sensitive value) 2025-07-28 00:01:30.821881 | orchestrator | 00:01:30.821 STDOUT terraform: terraform_data.image: Creating... 2025-07-28 00:01:30.821983 | orchestrator | 00:01:30.821 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=39966d85-1fc7-4fd1-c81e-cf5905197fe9] 2025-07-28 00:01:30.821994 | orchestrator | 00:01:30.821 STDOUT terraform: terraform_data.image_node: Creating... 2025-07-28 00:01:30.822254 | orchestrator | 00:01:30.822 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=a37d4536-a7e6-6ceb-937f-9bd01c0ec063] 2025-07-28 00:01:30.844641 | orchestrator | 00:01:30.844 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-07-28 00:01:30.844784 | orchestrator | 00:01:30.844 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-07-28 00:01:30.851725 | orchestrator | 00:01:30.851 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-07-28 00:01:30.863024 | orchestrator | 00:01:30.862 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-07-28 00:01:30.865095 | orchestrator | 00:01:30.865 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-07-28 00:01:30.866637 | orchestrator | 00:01:30.866 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-07-28 00:01:30.868127 | orchestrator | 00:01:30.867 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-07-28 00:01:30.868981 | orchestrator | 00:01:30.868 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 
2025-07-28 00:01:30.871676 | orchestrator | 00:01:30.871 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-07-28 00:01:30.895573 | orchestrator | 00:01:30.894 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-07-28 00:01:31.357919 | orchestrator | 00:01:31.357 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-07-28 00:01:31.365120 | orchestrator | 00:01:31.364 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-07-28 00:01:31.368323 | orchestrator | 00:01:31.368 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-07-28 00:01:31.370427 | orchestrator | 00:01:31.370 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-07-28 00:01:31.435739 | orchestrator | 00:01:31.435 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-07-28 00:01:31.445302 | orchestrator | 00:01:31.445 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-07-28 00:01:31.870139 | orchestrator | 00:01:31.869 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=4a2e75eb-2849-4399-af90-c1c5b45ebb33] 2025-07-28 00:01:31.877967 | orchestrator | 00:01:31.877 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-07-28 00:01:34.549567 | orchestrator | 00:01:34.549 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=28caf88b-bc35-4cac-895f-df9b77e30b76] 2025-07-28 00:01:34.557226 | orchestrator | 00:01:34.556 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 
2025-07-28 00:01:34.561145 | orchestrator | 00:01:34.560 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=41c529bf-a7e7-4c84-b89f-23c876ca70cc] 2025-07-28 00:01:34.569745 | orchestrator | 00:01:34.569 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-07-28 00:01:34.574402 | orchestrator | 00:01:34.573 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=a3b726cb-2ea6-402f-a069-0828b4339d0c] 2025-07-28 00:01:34.586389 | orchestrator | 00:01:34.586 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=2ac9c5b8-61b2-4e98-916e-627675956336] 2025-07-28 00:01:34.591787 | orchestrator | 00:01:34.591 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-07-28 00:01:34.591989 | orchestrator | 00:01:34.591 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-07-28 00:01:34.593153 | orchestrator | 00:01:34.592 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=dcc2fd7d-7aa2-42a4-8433-66f177a45eb8] 2025-07-28 00:01:34.599659 | orchestrator | 00:01:34.599 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-07-28 00:01:34.615382 | orchestrator | 00:01:34.615 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=14f898a3-e75a-4799-87f6-ec2412f21431] 2025-07-28 00:01:34.624579 | orchestrator | 00:01:34.624 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-07-28 00:01:34.649559 | orchestrator | 00:01:34.649 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=6a2999bf-6684-4e70-9ce8-3ae520aac9e1] 2025-07-28 00:01:34.665463 | orchestrator | 00:01:34.665 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 
2025-07-28 00:01:34.671356 | orchestrator | 00:01:34.671 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=9005223848438e76b1cf90a8e63b460ca5a5769a] 2025-07-28 00:01:34.672926 | orchestrator | 00:01:34.672 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=61e9691c-3292-4aed-a1f5-be956cef0066] 2025-07-28 00:01:34.680628 | orchestrator | 00:01:34.680 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-07-28 00:01:34.680981 | orchestrator | 00:01:34.680 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-07-28 00:01:34.689984 | orchestrator | 00:01:34.689 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=fbf928d3f01ab60d6fd28837d83b705d8c0f5c90] 2025-07-28 00:01:34.701458 | orchestrator | 00:01:34.701 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=9b89dcae-29e1-4615-98ea-fea40a75843a] 2025-07-28 00:01:35.228559 | orchestrator | 00:01:35.228 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=e2e22994-3c08-4cd8-9b1f-0fee5ff2adde] 2025-07-28 00:01:35.692291 | orchestrator | 00:01:35.691 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=7889485b-8f51-4a47-928d-fc687d0f8b78] 2025-07-28 00:01:35.701983 | orchestrator | 00:01:35.701 STDOUT terraform: openstack_networking_router_v2.router: Creating... 
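Editor's note: the "Creation complete after Ns [id=...]" lines above follow a regular shape, which makes it easy to pull resource address, duration, and OpenStack id out of a captured log for later analysis. A minimal sketch (the regex is an assumption based on the lines seen here, not a guaranteed Terraform output format):

```python
import re

# Sample line in the shape emitted by terraform above (prefixes trimmed).
line = ("openstack_blockstorage_volume_v3.node_volume[2]: "
        "Creation complete after 4s [id=9b89dcae-29e1-4615-98ea-fea40a75843a]")

# Capture resource address, duration in seconds, and the OpenStack id.
pattern = re.compile(
    r"(?P<addr>\S+): Creation complete after (?P<secs>\d+)s \[id=(?P<id>[^\]]+)\]"
)

m = pattern.search(line)
assert m is not None
print(m.group("addr"), m.group("secs"), m.group("id"))
```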
2025-07-28 00:01:37.970819 | orchestrator | 00:01:37.970 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=403b79c5-b0df-4e9b-ab84-c6f050c8ca4e] 2025-07-28 00:01:37.996617 | orchestrator | 00:01:37.996 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=63c20d97-a2c3-4ed6-bc40-11b455a9ae1f] 2025-07-28 00:01:38.022010 | orchestrator | 00:01:38.021 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=1982c613-1456-49be-9178-206532ec7577] 2025-07-28 00:01:38.029390 | orchestrator | 00:01:38.029 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=eecc6f1e-dabe-41b9-8738-5a9b20a370e0] 2025-07-28 00:01:38.085590 | orchestrator | 00:01:38.085 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=bd8ad188-2c22-44da-917e-7695de5b6158] 2025-07-28 00:01:38.280743 | orchestrator | 00:01:38.280 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=24b4d88c-f671-473b-8e89-883efc3c7537] 2025-07-28 00:01:38.547833 | orchestrator | 00:01:38.547 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=f75ff676-2771-41ca-af1a-c04a9fa36ae8] 2025-07-28 00:01:38.555146 | orchestrator | 00:01:38.554 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-07-28 00:01:38.557233 | orchestrator | 00:01:38.556 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-07-28 00:01:38.568009 | orchestrator | 00:01:38.567 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 
2025-07-28 00:01:38.737140 | orchestrator | 00:01:38.736 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=88eeff4a-c697-4729-b177-c818eb36c760] 2025-07-28 00:01:38.750528 | orchestrator | 00:01:38.750 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-07-28 00:01:38.751894 | orchestrator | 00:01:38.751 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-07-28 00:01:38.753778 | orchestrator | 00:01:38.753 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-07-28 00:01:38.754101 | orchestrator | 00:01:38.753 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-07-28 00:01:38.757893 | orchestrator | 00:01:38.757 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-07-28 00:01:38.766723 | orchestrator | 00:01:38.766 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-07-28 00:01:38.767308 | orchestrator | 00:01:38.767 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-07-28 00:01:38.769241 | orchestrator | 00:01:38.769 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-07-28 00:01:38.915332 | orchestrator | 00:01:38.914 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=5958effd-53f2-4370-8c2a-84a412dde840] 2025-07-28 00:01:38.929502 | orchestrator | 00:01:38.929 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 
2025-07-28 00:01:39.131273 | orchestrator | 00:01:39.130 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=e14d7ecc-65ee-4671-9e4c-0b6a21b93201] 2025-07-28 00:01:39.147990 | orchestrator | 00:01:39.147 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-07-28 00:01:39.313442 | orchestrator | 00:01:39.313 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=61a482e4-53fc-4716-aa31-a69bbb6ad53b] 2025-07-28 00:01:39.314720 | orchestrator | 00:01:39.314 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=4a59edbc-a8dd-4ea9-b429-aeb3eabac647] 2025-07-28 00:01:39.326996 | orchestrator | 00:01:39.326 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-07-28 00:01:39.328815 | orchestrator | 00:01:39.328 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-07-28 00:01:39.496255 | orchestrator | 00:01:39.495 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=33297e72-f490-4c63-9b54-9b0632a7e9a3] 2025-07-28 00:01:39.502553 | orchestrator | 00:01:39.502 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 
2025-07-28 00:01:39.512984 | orchestrator | 00:01:39.512 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=2585180d-6be2-4784-b01a-d7160ec7a141] 2025-07-28 00:01:39.524216 | orchestrator | 00:01:39.523 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=2db936ae-0bd1-4a0f-8673-2a5e1454587b] 2025-07-28 00:01:39.529812 | orchestrator | 00:01:39.529 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-07-28 00:01:39.530572 | orchestrator | 00:01:39.530 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-07-28 00:01:39.551862 | orchestrator | 00:01:39.551 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=9753653b-b988-438a-a5c4-42693092da0b] 2025-07-28 00:01:39.558199 | orchestrator | 00:01:39.557 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 
2025-07-28 00:01:39.692555 | orchestrator | 00:01:39.692 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=cd1a88e1-2912-47ee-977b-b19b39a81dbd] 2025-07-28 00:01:39.836181 | orchestrator | 00:01:39.835 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=f00417e1-da38-4bc3-b73e-980f49353adf] 2025-07-28 00:01:39.963314 | orchestrator | 00:01:39.962 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=cae88ddc-74aa-4978-bf3f-80aa8d1f2b19] 2025-07-28 00:01:40.044571 | orchestrator | 00:01:40.044 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=dcf9d640-ab32-4e18-803c-53e8cc91ccc8] 2025-07-28 00:01:40.206753 | orchestrator | 00:01:40.206 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=a6bdb119-1253-46cc-8c80-23f29fa467bf] 2025-07-28 00:01:40.245816 | orchestrator | 00:01:40.245 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=3c372766-3e54-4b8d-87ac-101ba34c6670] 2025-07-28 00:01:40.360530 | orchestrator | 00:01:40.360 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=9c6ba0e4-bf49-49d8-a193-8f10a86d723c] 2025-07-28 00:01:40.368792 | orchestrator | 00:01:40.368 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=ccad39cd-65e8-4e85-87bb-4be6a2e316ce] 2025-07-28 00:01:40.668407 | orchestrator | 00:01:40.667 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=05670a2a-a866-436f-9111-8f3df0ca3a34] 2025-07-28 00:01:41.844584 | orchestrator | 00:01:41.844 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: 
Creation complete after 3s [id=90896180-8a97-4c19-a40b-6c4f06bcee60] 2025-07-28 00:01:41.870775 | orchestrator | 00:01:41.870 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-07-28 00:01:41.884751 | orchestrator | 00:01:41.884 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-07-28 00:01:41.886365 | orchestrator | 00:01:41.886 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-07-28 00:01:41.903071 | orchestrator | 00:01:41.902 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-07-28 00:01:41.904042 | orchestrator | 00:01:41.903 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-07-28 00:01:41.904618 | orchestrator | 00:01:41.904 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-07-28 00:01:41.907388 | orchestrator | 00:01:41.907 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-07-28 00:01:44.102279 | orchestrator | 00:01:44.101 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=b8ae70bd-b6d4-4b20-8ebe-c462eb307c32] 2025-07-28 00:01:44.111491 | orchestrator | 00:01:44.111 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-07-28 00:01:44.119046 | orchestrator | 00:01:44.118 STDOUT terraform: local_file.inventory: Creating... 2025-07-28 00:01:44.122811 | orchestrator | 00:01:44.122 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 
2025-07-28 00:01:44.127610 | orchestrator | 00:01:44.127 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=b27f07ea01d3d4ff4ff7fb01c6646cb16e9c92ad] 2025-07-28 00:01:44.129787 | orchestrator | 00:01:44.129 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=fcc7dfd144b6f9fee077d6525a260b194a85fbf2] 2025-07-28 00:01:44.864137 | orchestrator | 00:01:44.863 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=b8ae70bd-b6d4-4b20-8ebe-c462eb307c32] 2025-07-28 00:01:51.886354 | orchestrator | 00:01:51.885 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-07-28 00:01:51.887636 | orchestrator | 00:01:51.887 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-07-28 00:01:51.904996 | orchestrator | 00:01:51.904 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-07-28 00:01:51.905095 | orchestrator | 00:01:51.904 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-07-28 00:01:51.905187 | orchestrator | 00:01:51.905 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-07-28 00:01:51.908176 | orchestrator | 00:01:51.907 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-07-28 00:02:01.887538 | orchestrator | 00:02:01.887 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-07-28 00:02:01.888417 | orchestrator | 00:02:01.888 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-07-28 00:02:01.905598 | orchestrator | 00:02:01.905 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... 
[20s elapsed] 2025-07-28 00:02:01.905849 | orchestrator | 00:02:01.905 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-07-28 00:02:01.906144 | orchestrator | 00:02:01.905 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-07-28 00:02:01.908765 | orchestrator | 00:02:01.908 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-07-28 00:02:11.890317 | orchestrator | 00:02:11.889 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-07-28 00:02:11.890453 | orchestrator | 00:02:11.890 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-07-28 00:02:11.906573 | orchestrator | 00:02:11.906 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-07-28 00:02:11.906701 | orchestrator | 00:02:11.906 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-07-28 00:02:11.906896 | orchestrator | 00:02:11.906 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-07-28 00:02:11.909694 | orchestrator | 00:02:11.909 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... 
[30s elapsed] 2025-07-28 00:02:12.499480 | orchestrator | 00:02:12.499 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=b3dac18a-5003-4916-9352-b4a3f44596d2] 2025-07-28 00:02:12.537714 | orchestrator | 00:02:12.537 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=bf026344-b863-49e9-b741-0c2b3d99dd75] 2025-07-28 00:02:12.593803 | orchestrator | 00:02:12.593 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=3e7623a9-e7f3-451f-9fd9-c51ad4df9e28] 2025-07-28 00:02:21.910121 | orchestrator | 00:02:21.909 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2025-07-28 00:02:21.910255 | orchestrator | 00:02:21.909 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2025-07-28 00:02:21.910278 | orchestrator | 00:02:21.910 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2025-07-28 00:02:22.588946 | orchestrator | 00:02:22.588 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=1d14a153-44af-40d9-96db-32a0e2a3e616] 2025-07-28 00:02:22.873626 | orchestrator | 00:02:22.873 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=ef2ae649-24eb-4c48-a3af-97ecbc67c7ea] 2025-07-28 00:02:22.908019 | orchestrator | 00:02:22.907 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=5411c88e-f153-4bd4-8d15-0f68a529c514] 2025-07-28 00:02:22.926072 | orchestrator | 00:02:22.925 STDOUT terraform: null_resource.node_semaphore: Creating... 
2025-07-28 00:02:22.942257 | orchestrator | 00:02:22.942 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=540043839823007254] 2025-07-28 00:02:22.942325 | orchestrator | 00:02:22.942 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-07-28 00:02:22.942349 | orchestrator | 00:02:22.942 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-07-28 00:02:22.942410 | orchestrator | 00:02:22.942 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-07-28 00:02:22.949810 | orchestrator | 00:02:22.949 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-07-28 00:02:22.952734 | orchestrator | 00:02:22.952 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-07-28 00:02:22.958857 | orchestrator | 00:02:22.958 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-07-28 00:02:22.959687 | orchestrator | 00:02:22.959 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-07-28 00:02:22.968128 | orchestrator | 00:02:22.967 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-07-28 00:02:22.990979 | orchestrator | 00:02:22.990 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-07-28 00:02:22.997803 | orchestrator | 00:02:22.997 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
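Editor's note: the `openstack_compute_volume_attach_v2` ids reported in the apply output that follows are composite, in the form `<server uuid>/<volume uuid>`. An illustrative split (id copied from the log; correlating attachments back to servers and volumes this way is my reading of the format, not documented behaviour):

```python
# Composite attachment id as reported by the terraform apply output.
attach_id = ("bf026344-b863-49e9-b741-0c2b3d99dd75/"
             "dcc2fd7d-7aa2-42a4-8433-66f177a45eb8")

# First component is the Nova server id, second the Cinder volume id.
server_id, volume_id = attach_id.split("/", 1)

assert server_id == "bf026344-b863-49e9-b741-0c2b3d99dd75"
print(volume_id)  # → dcc2fd7d-7aa2-42a4-8433-66f177a45eb8
```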
2025-07-28 00:02:26.359327 | orchestrator | 00:02:26.358 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=bf026344-b863-49e9-b741-0c2b3d99dd75/dcc2fd7d-7aa2-42a4-8433-66f177a45eb8]
2025-07-28 00:02:26.366515 | orchestrator | 00:02:26.366 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=5411c88e-f153-4bd4-8d15-0f68a529c514/14f898a3-e75a-4799-87f6-ec2412f21431]
2025-07-28 00:02:26.392104 | orchestrator | 00:02:26.391 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=ef2ae649-24eb-4c48-a3af-97ecbc67c7ea/6a2999bf-6684-4e70-9ce8-3ae520aac9e1]
2025-07-28 00:02:26.409859 | orchestrator | 00:02:26.409 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=bf026344-b863-49e9-b741-0c2b3d99dd75/41c529bf-a7e7-4c84-b89f-23c876ca70cc]
2025-07-28 00:02:26.411035 | orchestrator | 00:02:26.410 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=5411c88e-f153-4bd4-8d15-0f68a529c514/9b89dcae-29e1-4615-98ea-fea40a75843a]
2025-07-28 00:02:26.455585 | orchestrator | 00:02:26.455 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=ef2ae649-24eb-4c48-a3af-97ecbc67c7ea/28caf88b-bc35-4cac-895f-df9b77e30b76]
2025-07-28 00:02:32.520921 | orchestrator | 00:02:32.520 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=5411c88e-f153-4bd4-8d15-0f68a529c514/2ac9c5b8-61b2-4e98-916e-627675956336]
2025-07-28 00:02:32.529657 | orchestrator | 00:02:32.529 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=bf026344-b863-49e9-b741-0c2b3d99dd75/a3b726cb-2ea6-402f-a069-0828b4339d0c]
2025-07-28 00:02:32.554098 | orchestrator | 00:02:32.553 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=ef2ae649-24eb-4c48-a3af-97ecbc67c7ea/61e9691c-3292-4aed-a1f5-be956cef0066]
2025-07-28 00:02:32.999533 | orchestrator | 00:02:32.999 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-07-28 00:02:42.999927 | orchestrator | 00:02:42.999 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-07-28 00:02:43.720729 | orchestrator | 00:02:43.720 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=e73bd177-bac7-46c8-9865-14574c8f6ca7]
2025-07-28 00:02:44.036466 | orchestrator | 00:02:44.036 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-07-28 00:02:44.036581 | orchestrator | 00:02:44.036 STDOUT terraform: Outputs:
2025-07-28 00:02:44.036595 | orchestrator | 00:02:44.036 STDOUT terraform: manager_address =
2025-07-28 00:02:44.036603 | orchestrator | 00:02:44.036 STDOUT terraform: private_key =
2025-07-28 00:02:44.289247 | orchestrator | ok: Runtime: 0:01:20.389151
2025-07-28 00:02:44.311679 |
2025-07-28 00:02:44.311779 | TASK [Create infrastructure (stable)]
2025-07-28 00:02:44.842186 | orchestrator | skipping: Conditional result was False
2025-07-28 00:02:44.858048 |
2025-07-28 00:02:44.858181 | TASK [Fetch manager address]
2025-07-28 00:02:45.286562 | orchestrator | ok
2025-07-28 00:02:45.293694 |
2025-07-28 00:02:45.293801 | TASK [Set manager_host address]
2025-07-28 00:02:45.369788 | orchestrator | ok
2025-07-28 00:02:45.382658 |
2025-07-28 00:02:45.382800 | LOOP [Update ansible collections]
2025-07-28 00:02:47.087218 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-07-28 00:02:47.087842 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-28 00:02:47.088039 | orchestrator | Starting galaxy collection install process
2025-07-28 00:02:47.088083 | orchestrator | Process install dependency map
2025-07-28 00:02:47.088421 | orchestrator | Starting collection install process
2025-07-28 00:02:47.088800 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons'
2025-07-28 00:02:47.089085 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons
2025-07-28 00:02:47.089149 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-07-28 00:02:47.089225 | orchestrator | ok: Item: commons Runtime: 0:00:01.369456
2025-07-28 00:02:48.806569 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-28 00:02:48.806712 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-07-28 00:02:48.806763 | orchestrator | Starting galaxy collection install process
2025-07-28 00:02:48.806805 | orchestrator | Process install dependency map
2025-07-28 00:02:48.806865 | orchestrator | Starting collection install process
2025-07-28 00:02:48.806902 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services'
2025-07-28 00:02:48.806937 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services
2025-07-28 00:02:48.806971 | orchestrator | osism.services:999.0.0 was installed successfully
2025-07-28 00:02:48.807024 | orchestrator | ok: Item: services Runtime: 0:00:01.445411
2025-07-28 00:02:48.820805 |
2025-07-28 00:02:48.820921 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-07-28 00:02:59.356598 | orchestrator | ok
2025-07-28 00:02:59.368327 |
2025-07-28 00:02:59.368491 | TASK [Wait a little longer for the manager so that everything is ready]
2025-07-28 00:03:59.412269 | orchestrator | ok
2025-07-28 00:03:59.422090 |
2025-07-28 00:03:59.422213 | TASK [Fetch manager ssh hostkey]
2025-07-28 00:04:00.994268 | orchestrator | Output suppressed because no_log was given
2025-07-28 00:04:01.010605 |
2025-07-28 00:04:01.010877 | TASK [Get ssh keypair from terraform environment]
2025-07-28 00:04:01.557261 | orchestrator | ok: Runtime: 0:00:00.011127
2025-07-28 00:04:01.574422 |
2025-07-28 00:04:01.574599 | TASK [Point out that the following task takes some time and does not give any output]
2025-07-28 00:04:01.624904 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-07-28 00:04:01.635541 |
2025-07-28 00:04:01.635699 | TASK [Run manager part 0]
2025-07-28 00:04:02.627424 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-28 00:04:02.680644 | orchestrator |
2025-07-28 00:04:02.680688 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-07-28 00:04:02.680695 | orchestrator |
2025-07-28 00:04:02.680708 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-07-28 00:04:04.463386 | orchestrator | ok: [testbed-manager]
2025-07-28 00:04:04.463441 | orchestrator |
2025-07-28 00:04:04.463464 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-07-28 00:04:04.463474 | orchestrator |
2025-07-28 00:04:04.463485 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-28 00:04:06.629485 | orchestrator | ok: [testbed-manager]
2025-07-28 00:04:06.629530 | orchestrator |
2025-07-28 00:04:06.629537 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-07-28 00:04:07.329808 | orchestrator | ok: [testbed-manager]
2025-07-28 00:04:07.329963 | orchestrator |
2025-07-28 00:04:07.329984 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-07-28 00:04:07.386200 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:04:07.386251 | orchestrator |
2025-07-28 00:04:07.386263 | orchestrator | TASK [Update package cache] ****************************************************
2025-07-28 00:04:07.419089 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:04:07.419181 | orchestrator |
2025-07-28 00:04:07.419219 | orchestrator | TASK [Install required packages] ***********************************************
2025-07-28 00:04:07.463870 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:04:07.463960 | orchestrator |
2025-07-28 00:04:07.463976 | orchestrator | TASK [Remove some python packages] *********************************************
2025-07-28 00:04:07.511507 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:04:07.511616 | orchestrator |
2025-07-28 00:04:07.511632 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-07-28 00:04:07.558285 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:04:07.558352 | orchestrator |
2025-07-28 00:04:07.558449 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-07-28 00:04:07.603747 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:04:07.603836 | orchestrator |
2025-07-28 00:04:07.603858 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-07-28 00:04:07.649329 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:04:07.649387 | orchestrator |
2025-07-28 00:04:07.649398 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-07-28 00:04:08.393444 | orchestrator | changed: [testbed-manager]
2025-07-28 00:04:08.393519 | orchestrator |
2025-07-28 00:04:08.393529 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-07-28 00:06:38.260551 | orchestrator | changed: [testbed-manager]
2025-07-28 00:06:38.260640 | orchestrator |
2025-07-28 00:06:38.260659 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-07-28 00:07:59.201740 | orchestrator | changed: [testbed-manager]
2025-07-28 00:07:59.201837 | orchestrator |
2025-07-28 00:07:59.201858 | orchestrator | TASK [Install required packages] ***********************************************
2025-07-28 00:08:25.106171 | orchestrator | changed: [testbed-manager]
2025-07-28 00:08:25.106265 | orchestrator |
2025-07-28 00:08:25.106284 | orchestrator | TASK [Remove some python packages] *********************************************
2025-07-28 00:08:34.584617 | orchestrator | changed: [testbed-manager]
2025-07-28 00:08:34.584662 | orchestrator |
2025-07-28 00:08:34.584670 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-07-28 00:08:34.633504 | orchestrator | ok: [testbed-manager]
2025-07-28 00:08:34.633543 | orchestrator |
2025-07-28 00:08:34.633551 | orchestrator | TASK [Get current user] ********************************************************
2025-07-28 00:08:35.428753 | orchestrator | ok: [testbed-manager]
2025-07-28 00:08:35.429561 | orchestrator |
2025-07-28 00:08:35.429594 | orchestrator | TASK [Create venv directory] ***************************************************
2025-07-28 00:08:36.167657 | orchestrator | changed: [testbed-manager]
2025-07-28 00:08:36.167713 | orchestrator |
2025-07-28 00:08:36.167722 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-07-28 00:08:42.690869 | orchestrator | changed: [testbed-manager]
2025-07-28 00:08:42.690938 | orchestrator |
2025-07-28 00:08:42.690979 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-07-28 00:08:50.544707 | orchestrator | changed: [testbed-manager]
2025-07-28 00:08:50.544750 | orchestrator |
2025-07-28 00:08:50.544759 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-07-28 00:08:53.348087 | orchestrator | changed: [testbed-manager]
2025-07-28 00:08:53.348182 | orchestrator |
2025-07-28 00:08:53.348203 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-07-28 00:08:55.189764 | orchestrator | changed: [testbed-manager]
2025-07-28 00:08:55.189854 | orchestrator |
2025-07-28 00:08:55.189869 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-07-28 00:08:56.315879 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-07-28 00:08:56.315969 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-07-28 00:08:56.315985 | orchestrator |
2025-07-28 00:08:56.315997 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-07-28 00:08:56.360187 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-07-28 00:08:56.360272 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-07-28 00:08:56.360289 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-07-28 00:08:56.360303 | orchestrator | deprecation_warnings=False in ansible.cfg.
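The venv bootstrap above ("Create venv directory", then installing netaddr, ansible-core, requests and docker into it) can be sketched in a few lines of shell. This is an illustrative sketch only; the temp path stands in for the /opt/venv used by the job, and the package pins mirror the task names:

```shell
set -e
# Stand-in for /opt/venv used by the actual job (assumption for illustration)
VENV=$(mktemp -d)/venv
python3 -m venv "$VENV"
"$VENV/bin/pip" --version   # sanity check: the venv ships its own pip
# The job then pins packages into this venv, roughly:
#   "$VENV/bin/pip" install netaddr ansible-core 'requests>=2.32.2' 'docker>=7.1.0'
# (not executed here to keep the sketch network-free)
"$VENV/bin/python" -c 'import sys; print("venv-ok", sys.prefix != sys.base_prefix)'
```

Pinning everything into a dedicated venv keeps the manager's tooling independent of the distro's system Python packages, which is why the preceding tasks also remove some system python packages first.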
2025-07-28 00:09:02.798660 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-07-28 00:09:02.798698 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-07-28 00:09:02.798703 | orchestrator |
2025-07-28 00:09:02.798708 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2025-07-28 00:09:03.368750 | orchestrator | changed: [testbed-manager]
2025-07-28 00:09:03.368841 | orchestrator |
2025-07-28 00:09:03.368860 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2025-07-28 00:09:22.720713 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2025-07-28 00:09:22.720826 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2025-07-28 00:09:22.720846 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2025-07-28 00:09:22.720859 | orchestrator |
2025-07-28 00:09:22.720872 | orchestrator | TASK [Install local collections] ***********************************************
2025-07-28 00:09:25.089410 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2025-07-28 00:09:25.089447 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2025-07-28 00:09:25.089453 | orchestrator |
2025-07-28 00:09:25.089458 | orchestrator | PLAY [Create operator user] ****************************************************
2025-07-28 00:09:25.089464 | orchestrator |
2025-07-28 00:09:25.089468 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-28 00:09:26.512708 | orchestrator | ok: [testbed-manager]
2025-07-28 00:09:26.512796 | orchestrator |
2025-07-28 00:09:26.512816 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-07-28 00:09:26.577063 | orchestrator | ok: [testbed-manager]
2025-07-28 00:09:26.577153 | orchestrator |
2025-07-28 00:09:26.577170 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-07-28 00:09:26.658805 | orchestrator | ok: [testbed-manager]
2025-07-28 00:09:26.659674 | orchestrator |
2025-07-28 00:09:26.659705 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-07-28 00:09:27.487986 | orchestrator | changed: [testbed-manager]
2025-07-28 00:09:27.488076 | orchestrator |
2025-07-28 00:09:27.488094 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-07-28 00:09:28.251774 | orchestrator | changed: [testbed-manager]
2025-07-28 00:09:28.251867 | orchestrator |
2025-07-28 00:09:28.251884 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-07-28 00:09:29.670555 | orchestrator | changed: [testbed-manager] => (item=adm)
2025-07-28 00:09:29.670626 | orchestrator | changed: [testbed-manager] => (item=sudo)
2025-07-28 00:09:29.670641 | orchestrator |
2025-07-28 00:09:29.670671 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-07-28 00:09:31.100482 | orchestrator | changed: [testbed-manager]
2025-07-28 00:09:31.100603 | orchestrator |
2025-07-28 00:09:31.100620 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-07-28 00:09:33.054751 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2025-07-28 00:09:33.054836 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2025-07-28 00:09:33.054857 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2025-07-28 00:09:33.054875 | orchestrator |
2025-07-28 00:09:33.054894 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-07-28 00:09:33.122275 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:09:33.122396 | orchestrator |
2025-07-28 00:09:33.122414 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-07-28 00:09:33.739231 | orchestrator | changed: [testbed-manager]
2025-07-28 00:09:33.739340 | orchestrator |
2025-07-28 00:09:33.739359 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-07-28 00:09:33.811869 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:09:33.811957 | orchestrator |
2025-07-28 00:09:33.811974 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-07-28 00:09:34.733221 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-28 00:09:34.733330 | orchestrator | changed: [testbed-manager]
2025-07-28 00:09:34.733349 | orchestrator |
2025-07-28 00:09:34.733364 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-07-28 00:09:34.774215 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:09:34.774340 | orchestrator |
2025-07-28 00:09:34.774358 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-07-28 00:09:34.814960 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:09:34.815055 | orchestrator |
2025-07-28 00:09:34.815073 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-07-28 00:09:34.851145 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:09:34.851330 | orchestrator |
2025-07-28 00:09:34.851353 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-07-28 00:09:34.904439 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:09:34.904517 | orchestrator |
2025-07-28 00:09:34.904533 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-07-28 00:09:35.667428 | orchestrator | ok: [testbed-manager]
2025-07-28 00:09:35.667514 | orchestrator |
2025-07-28 00:09:35.667530 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-07-28 00:09:35.667542 | orchestrator |
2025-07-28 00:09:35.667553 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-28 00:09:37.094687 | orchestrator | ok: [testbed-manager]
2025-07-28 00:09:37.095578 | orchestrator |
2025-07-28 00:09:37.095642 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2025-07-28 00:09:38.062169 | orchestrator | changed: [testbed-manager]
2025-07-28 00:09:38.062207 | orchestrator |
2025-07-28 00:09:38.062213 | orchestrator | PLAY RECAP *********************************************************************
2025-07-28 00:09:38.062218 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2025-07-28 00:09:38.062222 | orchestrator |
2025-07-28 00:09:38.377782 | orchestrator | ok: Runtime: 0:05:36.233249
2025-07-28 00:09:38.398544 |
2025-07-28 00:09:38.398712 | TASK [Point out that the log in on the manager is now possible]
2025-07-28 00:09:38.435797 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'.
2025-07-28 00:09:38.444954 |
2025-07-28 00:09:38.445070 | TASK [Point out that the following task takes some time and does not give any output]
2025-07-28 00:09:38.492058 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
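The "Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"" checks that recur in this job are done with Ansible's wait_for module and its search_regex option. As a rough plain-bash analogue (an assumption for illustration, not the job's actual implementation), the same banner check can be written with bash's /dev/tcp redirection:

```shell
# Plain-bash analogue of waiting for an SSH banner containing "OpenSSH".
# wait_for_ssh HOST PORT TIMEOUT_SECONDS -> 0 if banner seen, 1 on timeout.
wait_for_ssh() {
  local host=$1 port=$2 banner
  local deadline=$((SECONDS + $3))
  while (( SECONDS < deadline )); do
    # /dev/tcp/<host>/<port> is a bash redirection feature; an SSH server
    # sends its version banner (e.g. "SSH-2.0-OpenSSH_...") right after connect.
    if IFS= read -r -t 2 banner < "/dev/tcp/$host/$port"; then
      [[ $banner == *OpenSSH* ]] && return 0
    fi
    sleep 1
  done 2>/dev/null
  return 1
}
```

Checking for the banner, not just an open port, matters after a reboot: the TCP port can accept connections briefly before sshd is actually ready to serve logins.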
2025-07-28 00:09:38.502110 |
2025-07-28 00:09:38.502238 | TASK [Run manager part 1 + 2]
2025-07-28 00:09:39.388848 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-28 00:09:39.453974 | orchestrator |
2025-07-28 00:09:39.454055 | orchestrator | PLAY [Run manager part 1] ******************************************************
2025-07-28 00:09:39.454063 | orchestrator |
2025-07-28 00:09:39.454076 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-28 00:09:42.119648 | orchestrator | ok: [testbed-manager]
2025-07-28 00:09:42.120700 | orchestrator |
2025-07-28 00:09:42.120741 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-07-28 00:09:42.163943 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:09:42.164001 | orchestrator |
2025-07-28 00:09:42.164013 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-07-28 00:09:42.214657 | orchestrator | ok: [testbed-manager]
2025-07-28 00:09:42.214709 | orchestrator |
2025-07-28 00:09:42.214718 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-07-28 00:09:42.269206 | orchestrator | ok: [testbed-manager]
2025-07-28 00:09:42.269258 | orchestrator |
2025-07-28 00:09:42.269268 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-07-28 00:09:42.341266 | orchestrator | ok: [testbed-manager]
2025-07-28 00:09:42.341374 | orchestrator |
2025-07-28 00:09:42.341385 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-07-28 00:09:42.422570 | orchestrator | ok: [testbed-manager]
2025-07-28 00:09:42.422628 | orchestrator |
2025-07-28 00:09:42.422639 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-07-28 00:09:42.480068 | orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2025-07-28 00:09:42.480112 | orchestrator |
2025-07-28 00:09:42.480117 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-07-28 00:09:43.225619 | orchestrator | ok: [testbed-manager]
2025-07-28 00:09:43.225671 | orchestrator |
2025-07-28 00:09:43.225680 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-07-28 00:09:43.280963 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:09:43.281011 | orchestrator |
2025-07-28 00:09:43.281018 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-07-28 00:09:44.711404 | orchestrator | changed: [testbed-manager]
2025-07-28 00:09:44.711479 | orchestrator |
2025-07-28 00:09:44.711490 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-07-28 00:09:45.324819 | orchestrator | ok: [testbed-manager]
2025-07-28 00:09:45.324872 | orchestrator |
2025-07-28 00:09:45.324880 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-07-28 00:09:46.506899 | orchestrator | changed: [testbed-manager]
2025-07-28 00:09:46.506955 | orchestrator |
2025-07-28 00:09:46.506964 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-07-28 00:10:02.984769 | orchestrator | changed: [testbed-manager]
2025-07-28 00:10:02.984858 | orchestrator |
2025-07-28 00:10:02.984875 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-07-28 00:10:03.681221 | orchestrator | ok: [testbed-manager]
2025-07-28 00:10:03.681393 | orchestrator |
2025-07-28 00:10:03.681425 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-07-28 00:10:03.743532 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:10:03.743602 | orchestrator |
2025-07-28 00:10:03.743611 | orchestrator | TASK [Copy SSH public key] *****************************************************
2025-07-28 00:10:04.764544 | orchestrator | changed: [testbed-manager]
2025-07-28 00:10:04.764630 | orchestrator |
2025-07-28 00:10:04.764646 | orchestrator | TASK [Copy SSH private key] ****************************************************
2025-07-28 00:10:05.749454 | orchestrator | changed: [testbed-manager]
2025-07-28 00:10:05.749536 | orchestrator |
2025-07-28 00:10:05.749551 | orchestrator | TASK [Create configuration directory] ******************************************
2025-07-28 00:10:06.331681 | orchestrator | changed: [testbed-manager]
2025-07-28 00:10:06.331757 | orchestrator |
2025-07-28 00:10:06.331773 | orchestrator | TASK [Copy testbed repo] *******************************************************
2025-07-28 00:10:06.370885 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-07-28 00:10:06.370988 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-07-28 00:10:06.371005 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-07-28 00:10:06.371018 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-07-28 00:10:08.646745 | orchestrator | changed: [testbed-manager] 2025-07-28 00:10:08.646826 | orchestrator | 2025-07-28 00:10:08.646841 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-07-28 00:10:17.635097 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-07-28 00:10:17.635199 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-07-28 00:10:17.635218 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-07-28 00:10:17.635230 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-07-28 00:10:17.635249 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-07-28 00:10:17.635291 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-07-28 00:10:17.635304 | orchestrator | 2025-07-28 00:10:17.635316 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-07-28 00:10:18.713800 | orchestrator | changed: [testbed-manager] 2025-07-28 00:10:18.713842 | orchestrator | 2025-07-28 00:10:18.713851 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-07-28 00:10:18.756834 | orchestrator | skipping: [testbed-manager] 2025-07-28 00:10:18.756874 | orchestrator | 2025-07-28 00:10:18.756883 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-07-28 00:10:21.891883 | orchestrator | changed: [testbed-manager] 2025-07-28 00:10:21.891936 | orchestrator | 2025-07-28 00:10:21.891950 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-07-28 00:10:21.936903 | orchestrator | skipping: [testbed-manager] 2025-07-28 00:10:21.936946 | orchestrator | 2025-07-28 00:10:21.936954 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-07-28 00:11:59.298154 | orchestrator | changed: [testbed-manager] 2025-07-28 
00:11:59.298279 | orchestrator | 2025-07-28 00:11:59.298298 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-07-28 00:12:00.431760 | orchestrator | ok: [testbed-manager] 2025-07-28 00:12:00.431879 | orchestrator | 2025-07-28 00:12:00.431897 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-28 00:12:00.431912 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-07-28 00:12:00.431924 | orchestrator | 2025-07-28 00:12:00.649662 | orchestrator | ok: Runtime: 0:02:21.748219 2025-07-28 00:12:00.667662 | 2025-07-28 00:12:00.667802 | TASK [Reboot manager] 2025-07-28 00:12:02.205487 | orchestrator | ok: Runtime: 0:00:01.018537 2025-07-28 00:12:02.222647 | 2025-07-28 00:12:02.222801 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-07-28 00:12:16.701562 | orchestrator | ok 2025-07-28 00:12:16.713047 | 2025-07-28 00:12:16.713189 | TASK [Wait a little longer for the manager so that everything is ready] 2025-07-28 00:13:16.755025 | orchestrator | ok 2025-07-28 00:13:16.763981 | 2025-07-28 00:13:16.764106 | TASK [Deploy manager + bootstrap nodes] 2025-07-28 00:13:19.557925 | orchestrator | 2025-07-28 00:13:19.558198 | orchestrator | # DEPLOY MANAGER 2025-07-28 00:13:19.558228 | orchestrator | 2025-07-28 00:13:19.558244 | orchestrator | + set -e 2025-07-28 00:13:19.558258 | orchestrator | + echo 2025-07-28 00:13:19.558273 | orchestrator | + echo '# DEPLOY MANAGER' 2025-07-28 00:13:19.558291 | orchestrator | + echo 2025-07-28 00:13:19.558342 | orchestrator | + cat /opt/manager-vars.sh 2025-07-28 00:13:19.561354 | orchestrator | export NUMBER_OF_NODES=6 2025-07-28 00:13:19.561380 | orchestrator | 2025-07-28 00:13:19.561393 | orchestrator | export CEPH_VERSION=reef 2025-07-28 00:13:19.561406 | orchestrator | export CONFIGURATION_VERSION=main 2025-07-28 00:13:19.561419 | orchestrator 
| export MANAGER_VERSION=latest 2025-07-28 00:13:19.561442 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-07-28 00:13:19.561453 | orchestrator | 2025-07-28 00:13:19.561471 | orchestrator | export ARA=false 2025-07-28 00:13:19.561483 | orchestrator | export DEPLOY_MODE=manager 2025-07-28 00:13:19.561501 | orchestrator | export TEMPEST=true 2025-07-28 00:13:19.561512 | orchestrator | export IS_ZUUL=true 2025-07-28 00:13:19.561523 | orchestrator | 2025-07-28 00:13:19.561541 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.61 2025-07-28 00:13:19.561553 | orchestrator | export EXTERNAL_API=false 2025-07-28 00:13:19.561564 | orchestrator | 2025-07-28 00:13:19.561575 | orchestrator | export IMAGE_USER=ubuntu 2025-07-28 00:13:19.561589 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-07-28 00:13:19.561600 | orchestrator | 2025-07-28 00:13:19.561610 | orchestrator | export CEPH_STACK=ceph-ansible 2025-07-28 00:13:19.561627 | orchestrator | 2025-07-28 00:13:19.561639 | orchestrator | + echo 2025-07-28 00:13:19.561651 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-28 00:13:19.562630 | orchestrator | ++ export INTERACTIVE=false 2025-07-28 00:13:19.562650 | orchestrator | ++ INTERACTIVE=false 2025-07-28 00:13:19.562664 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-28 00:13:19.562676 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-28 00:13:19.562849 | orchestrator | + source /opt/manager-vars.sh 2025-07-28 00:13:19.562865 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-28 00:13:19.562876 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-28 00:13:19.562887 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-28 00:13:19.562898 | orchestrator | ++ CEPH_VERSION=reef 2025-07-28 00:13:19.562909 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-28 00:13:19.562920 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-28 00:13:19.562935 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-28 00:13:19.562946 | 
orchestrator | ++ MANAGER_VERSION=latest
2025-07-28 00:13:19.562957 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-28 00:13:19.562976 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-28 00:13:19.562988 | orchestrator | ++ export ARA=false
2025-07-28 00:13:19.562999 | orchestrator | ++ ARA=false
2025-07-28 00:13:19.563010 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-28 00:13:19.563021 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-28 00:13:19.563032 | orchestrator | ++ export TEMPEST=true
2025-07-28 00:13:19.563042 | orchestrator | ++ TEMPEST=true
2025-07-28 00:13:19.563053 | orchestrator | ++ export IS_ZUUL=true
2025-07-28 00:13:19.563064 | orchestrator | ++ IS_ZUUL=true
2025-07-28 00:13:19.563075 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.61
2025-07-28 00:13:19.563086 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.61
2025-07-28 00:13:19.563097 | orchestrator | ++ export EXTERNAL_API=false
2025-07-28 00:13:19.563107 | orchestrator | ++ EXTERNAL_API=false
2025-07-28 00:13:19.563118 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-28 00:13:19.563169 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-28 00:13:19.563189 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-28 00:13:19.563208 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-28 00:13:19.563220 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-28 00:13:19.563231 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-28 00:13:19.563242 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-07-28 00:13:19.624385 | orchestrator | + docker version
2025-07-28 00:13:19.907175 | orchestrator | Client: Docker Engine - Community
2025-07-28 00:13:19.907277 | orchestrator | Version: 27.5.1
2025-07-28 00:13:19.907294 | orchestrator | API version: 1.47
2025-07-28 00:13:19.907310 | orchestrator | Go version: go1.22.11
2025-07-28 00:13:19.907322 | orchestrator | Git commit: 9f9e405
2025-07-28 00:13:19.907336 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-28 00:13:19.907353 | orchestrator | OS/Arch: linux/amd64
2025-07-28 00:13:19.907375 | orchestrator | Context: default
2025-07-28 00:13:19.907403 | orchestrator |
2025-07-28 00:13:19.907423 | orchestrator | Server: Docker Engine - Community
2025-07-28 00:13:19.907442 | orchestrator | Engine:
2025-07-28 00:13:19.907460 | orchestrator | Version: 27.5.1
2025-07-28 00:13:19.907479 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-07-28 00:13:19.907536 | orchestrator | Go version: go1.22.11
2025-07-28 00:13:19.907558 | orchestrator | Git commit: 4c9b3b0
2025-07-28 00:13:19.907577 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-28 00:13:19.907596 | orchestrator | OS/Arch: linux/amd64
2025-07-28 00:13:19.907607 | orchestrator | Experimental: false
2025-07-28 00:13:19.907618 | orchestrator | containerd:
2025-07-28 00:13:19.907628 | orchestrator | Version: 1.7.27
2025-07-28 00:13:19.907640 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-07-28 00:13:19.907651 | orchestrator | runc:
2025-07-28 00:13:19.907662 | orchestrator | Version: 1.2.5
2025-07-28 00:13:19.907673 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-07-28 00:13:19.907684 | orchestrator | docker-init:
2025-07-28 00:13:19.907694 | orchestrator | Version: 0.19.0
2025-07-28 00:13:19.907706 | orchestrator | GitCommit: de40ad0
2025-07-28 00:13:19.910965 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-07-28 00:13:19.919365 | orchestrator | + set -e
2025-07-28 00:13:19.919409 | orchestrator | + source /opt/manager-vars.sh
2025-07-28 00:13:19.919429 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-28 00:13:19.919442 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-28 00:13:19.919453 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-28 00:13:19.919464 | orchestrator | ++ CEPH_VERSION=reef
2025-07-28 00:13:19.919475 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-28 00:13:19.919486 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-28 00:13:19.919497 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-28 00:13:19.919508 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-28 00:13:19.919519 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-28 00:13:19.919529 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-28 00:13:19.919540 | orchestrator | ++ export ARA=false
2025-07-28 00:13:19.919551 | orchestrator | ++ ARA=false
2025-07-28 00:13:19.919561 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-28 00:13:19.919573 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-28 00:13:19.919583 | orchestrator | ++ export TEMPEST=true
2025-07-28 00:13:19.919594 | orchestrator | ++ TEMPEST=true
2025-07-28 00:13:19.919604 | orchestrator | ++ export IS_ZUUL=true
2025-07-28 00:13:19.919615 | orchestrator | ++ IS_ZUUL=true
2025-07-28 00:13:19.919626 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.61
2025-07-28 00:13:19.919637 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.61
2025-07-28 00:13:19.919647 | orchestrator | ++ export EXTERNAL_API=false
2025-07-28 00:13:19.919658 | orchestrator | ++ EXTERNAL_API=false
2025-07-28 00:13:19.919668 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-28 00:13:19.919679 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-28 00:13:19.919690 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-28 00:13:19.919700 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-28 00:13:19.919711 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-28 00:13:19.919722 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-28 00:13:19.919733 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-28 00:13:19.919744 | orchestrator | ++ export INTERACTIVE=false
2025-07-28 00:13:19.919754 | orchestrator | ++ INTERACTIVE=false
2025-07-28 00:13:19.919765 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-28 00:13:19.919779 | orchestrator | ++ OSISM_APPLY_RETRY=1
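The trace above shows the deploy script sourcing `/opt/manager-vars.sh` under `set -e` to pick up the testbed configuration as exported environment variables. A minimal sketch of that pattern, with an illustrative vars file (the file name and values here are assumptions for demonstration, not the real testbed file):

```shell
#!/bin/bash
set -e

# Write a small vars file of the same shape as the one sourced in the log.
vars="$(mktemp)"
cat > "$vars" <<'EOF'
export CEPH_VERSION=reef
export OPENSTACK_VERSION=2024.2
export MANAGER_VERSION=latest
EOF

# Sourcing (not executing) the file makes the exports visible to this
# shell and to every child process the deploy script starts afterwards.
source "$vars"

echo "deploying Ceph ${CEPH_VERSION} / OpenStack ${OPENSTACK_VERSION}"
```

Because the file is sourced rather than run, the variables survive in the calling shell; this is why the later `000-manager.sh` invocation can re-source the same file and see identical values.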
2025-07-28 00:13:19.919797 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-28 00:13:19.919808 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-07-28 00:13:19.919819 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-07-28 00:13:19.927252 | orchestrator | + set -e
2025-07-28 00:13:19.927988 | orchestrator | + VERSION=reef
2025-07-28 00:13:19.928696 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-07-28 00:13:19.936504 | orchestrator | + [[ -n ceph_version: reef ]]
2025-07-28 00:13:19.936569 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-07-28 00:13:19.944619 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-07-28 00:13:19.950837 | orchestrator | + set -e
2025-07-28 00:13:19.950918 | orchestrator | + VERSION=2024.2
2025-07-28 00:13:19.951700 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-07-28 00:13:19.955473 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-07-28 00:13:19.955529 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-07-28 00:13:19.963105 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-07-28 00:13:19.963696 | orchestrator | ++ semver latest 7.0.0
2025-07-28 00:13:20.041903 | orchestrator | + [[ -1 -ge 0 ]]
2025-07-28 00:13:20.042014 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-07-28 00:13:20.042096 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-07-28 00:13:20.042119 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-07-28 00:13:20.148365 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-28 00:13:20.149466 | orchestrator | + source /opt/venv/bin/activate
2025-07-28 00:13:20.150708 | orchestrator | ++ deactivate nondestructive
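The `set-ceph-version.sh` and `set-openstack-version.sh` traces above follow the same grep-then-sed pattern: check that the key already exists in `configuration.yml`, and only then rewrite its value in place. A minimal sketch of that pattern, assuming GNU `sed -i` (the helper name `set_config_version` is illustrative, not from the repository):

```shell
#!/bin/bash
set -e

# Pin "<key>: <value>" in a YAML-ish config file, but only when the key
# is already present -- mirroring the grep + sed sequence in the trace.
set_config_version() {
    key="$1" version="$2" file="$3"
    if [ -n "$(grep "^${key}:" "$file")" ]; then
        sed -i "s/${key}: .*/${key}: ${version}/g" "$file"
    fi
}

# Demonstrate on a temporary file instead of the real configuration.
cfg="$(mktemp)"
printf 'ceph_version: quincy\nopenstack_version: 2024.1\n' > "$cfg"
set_config_version ceph_version reef "$cfg"
set_config_version openstack_version 2024.2 "$cfg"
```

Guarding the `sed` behind the `grep` keeps the script from silently doing nothing useful when the key is absent; the anchored `^${key}:` match also avoids rewriting similarly named keys elsewhere in the file.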
2025-07-28 00:13:20.150728 | orchestrator | ++ '[' -n '' ']'
2025-07-28 00:13:20.150739 | orchestrator | ++ '[' -n '' ']'
2025-07-28 00:13:20.150750 | orchestrator | ++ hash -r
2025-07-28 00:13:20.150764 | orchestrator | ++ '[' -n '' ']'
2025-07-28 00:13:20.150773 | orchestrator | ++ unset VIRTUAL_ENV
2025-07-28 00:13:20.150852 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-07-28 00:13:20.150866 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-07-28 00:13:20.151028 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-07-28 00:13:20.151042 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-07-28 00:13:20.151052 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-07-28 00:13:20.151061 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-07-28 00:13:20.151243 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-28 00:13:20.151257 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-28 00:13:20.151267 | orchestrator | ++ export PATH
2025-07-28 00:13:20.151279 | orchestrator | ++ '[' -n '' ']'
2025-07-28 00:13:20.151291 | orchestrator | ++ '[' -z '' ']'
2025-07-28 00:13:20.151429 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-07-28 00:13:20.151470 | orchestrator | ++ PS1='(venv) '
2025-07-28 00:13:20.151481 | orchestrator | ++ export PS1
2025-07-28 00:13:20.151490 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-07-28 00:13:20.151498 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-07-28 00:13:20.151510 | orchestrator | ++ hash -r
2025-07-28 00:13:20.151729 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-07-28 00:13:21.607710 | orchestrator |
2025-07-28 00:13:21.607821 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-07-28 00:13:21.607838 | orchestrator |
2025-07-28 00:13:21.607850 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-28 00:13:22.194775 | orchestrator | ok: [testbed-manager]
2025-07-28 00:13:22.194907 | orchestrator |
2025-07-28 00:13:22.194933 | orchestrator | TASK [Copy fact files] *********************************************************
2025-07-28 00:13:23.238168 | orchestrator | changed: [testbed-manager]
2025-07-28 00:13:23.238275 | orchestrator |
2025-07-28 00:13:23.238292 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-07-28 00:13:23.238305 | orchestrator |
2025-07-28 00:13:23.238316 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-28 00:13:25.731276 | orchestrator | ok: [testbed-manager]
2025-07-28 00:13:25.731386 | orchestrator |
2025-07-28 00:13:25.731402 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-07-28 00:13:25.787665 | orchestrator | ok: [testbed-manager]
2025-07-28 00:13:25.787757 | orchestrator |
2025-07-28 00:13:25.787774 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-07-28 00:13:26.254881 | orchestrator | changed: [testbed-manager]
2025-07-28 00:13:26.254996 | orchestrator |
2025-07-28 00:13:26.255023 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-07-28 00:13:26.298642 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:13:26.298740 | orchestrator |
2025-07-28 00:13:26.298756 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-07-28 00:13:26.665043 | orchestrator | changed: [testbed-manager]
2025-07-28 00:13:26.665205 | orchestrator |
2025-07-28 00:13:26.665228 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-07-28 00:13:26.718154 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:13:26.718263 | orchestrator |
2025-07-28 00:13:26.718279 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-07-28 00:13:27.062200 | orchestrator | ok: [testbed-manager]
2025-07-28 00:13:27.062305 | orchestrator |
2025-07-28 00:13:27.062322 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-07-28 00:13:27.199346 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:13:27.199456 | orchestrator |
2025-07-28 00:13:27.199474 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-07-28 00:13:27.199487 | orchestrator |
2025-07-28 00:13:27.200246 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-28 00:13:28.992735 | orchestrator | ok: [testbed-manager]
2025-07-28 00:13:28.992839 | orchestrator |
2025-07-28 00:13:28.992856 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-07-28 00:13:29.104981 | orchestrator | included: osism.services.traefik for testbed-manager
2025-07-28 00:13:29.105072 | orchestrator |
2025-07-28 00:13:29.105087 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-07-28 00:13:29.161827 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-07-28 00:13:29.161905 | orchestrator |
2025-07-28 00:13:29.161918 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-07-28 00:13:30.271798 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-07-28 00:13:30.271901 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-07-28 00:13:30.271916 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-07-28 00:13:30.271928 | orchestrator |
2025-07-28 00:13:30.271940 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-07-28 00:13:32.153338 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-07-28 00:13:32.153415 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-07-28 00:13:32.153426 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-07-28 00:13:32.153433 | orchestrator |
2025-07-28 00:13:32.153441 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-07-28 00:13:32.834323 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-28 00:13:32.834433 | orchestrator | changed: [testbed-manager]
2025-07-28 00:13:32.834450 | orchestrator |
2025-07-28 00:13:32.834463 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-07-28 00:13:33.521633 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-28 00:13:33.521726 | orchestrator | changed: [testbed-manager]
2025-07-28 00:13:33.521741 | orchestrator |
2025-07-28 00:13:33.521754 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-07-28 00:13:33.574436 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:13:33.574523 | orchestrator |
2025-07-28 00:13:33.574533 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-07-28 00:13:33.938958 | orchestrator | ok: [testbed-manager]
2025-07-28 00:13:33.939067 | orchestrator |
2025-07-28 00:13:33.939093 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-07-28 00:13:33.999511 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-07-28 00:13:33.999608 | orchestrator |
2025-07-28 00:13:33.999623 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-07-28 00:13:35.063548 | orchestrator | changed: [testbed-manager]
2025-07-28 00:13:35.063624 | orchestrator |
2025-07-28 00:13:35.063631 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-07-28 00:13:35.951079 | orchestrator | changed: [testbed-manager]
2025-07-28 00:13:35.951244 | orchestrator |
2025-07-28 00:13:35.951260 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-07-28 00:13:48.451232 | orchestrator | changed: [testbed-manager]
2025-07-28 00:13:48.451374 | orchestrator |
2025-07-28 00:13:48.451394 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-07-28 00:13:48.501752 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:13:48.501847 | orchestrator |
2025-07-28 00:13:48.501864 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-07-28 00:13:48.501895 | orchestrator |
2025-07-28 00:13:48.501919 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-28 00:13:50.504061 | orchestrator | ok: [testbed-manager]
2025-07-28 00:13:50.504200 | orchestrator |
2025-07-28 00:13:50.504247 | orchestrator | TASK [Apply manager role] ******************************************************
2025-07-28 00:13:50.607718 | orchestrator | included: osism.services.manager for testbed-manager
2025-07-28 00:13:50.607816 | orchestrator |
2025-07-28 00:13:50.607830 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-07-28 00:13:50.671756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-07-28 00:13:50.671851 | orchestrator |
2025-07-28 00:13:50.671866 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-07-28 00:13:53.312665 | orchestrator | ok: [testbed-manager]
2025-07-28 00:13:53.312779 | orchestrator |
2025-07-28 00:13:53.312797 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-07-28 00:13:53.364978 | orchestrator | ok: [testbed-manager]
2025-07-28 00:13:53.365080 | orchestrator |
2025-07-28 00:13:53.365096 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-07-28 00:13:53.486447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-07-28 00:13:53.486566 | orchestrator |
2025-07-28 00:13:53.486582 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-07-28 00:13:56.439153 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-07-28 00:13:56.439285 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-07-28 00:13:56.439311 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-07-28 00:13:56.439332 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-07-28 00:13:56.439351 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-07-28 00:13:56.439369 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-07-28 00:13:56.439385 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-07-28 00:13:56.439396 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-07-28 00:13:56.439407 | orchestrator |
2025-07-28 00:13:56.439419 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-07-28 00:13:57.097888 | orchestrator | changed: [testbed-manager]
2025-07-28 00:13:57.097994 | orchestrator |
2025-07-28 00:13:57.098083 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-07-28 00:13:57.753025 | orchestrator | changed: [testbed-manager]
2025-07-28 00:13:57.753153 | orchestrator |
2025-07-28 00:13:57.753170 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-07-28 00:13:57.835446 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-07-28 00:13:57.835526 | orchestrator |
2025-07-28 00:13:57.835541 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-07-28 00:13:59.039266 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-07-28 00:13:59.039359 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-07-28 00:13:59.039373 | orchestrator |
2025-07-28 00:13:59.039385 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-07-28 00:13:59.657605 | orchestrator | changed: [testbed-manager]
2025-07-28 00:13:59.657922 | orchestrator |
2025-07-28 00:13:59.657945 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-07-28 00:13:59.718757 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:13:59.718857 | orchestrator |
2025-07-28 00:13:59.718882 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-07-28 00:13:59.793204 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-07-28 00:13:59.793294 | orchestrator |
2025-07-28 00:13:59.793312 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-07-28 00:14:01.162529 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-28 00:14:01.162593 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-28 00:14:01.162602 | orchestrator | changed: [testbed-manager]
2025-07-28 00:14:01.162611 | orchestrator |
2025-07-28 00:14:01.162619 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-07-28 00:14:01.803669 | orchestrator | changed: [testbed-manager]
2025-07-28 00:14:01.803755 | orchestrator |
2025-07-28 00:14:01.803765 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-07-28 00:14:01.855296 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:14:01.855377 | orchestrator |
2025-07-28 00:14:01.855388 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-07-28 00:14:01.958738 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-07-28 00:14:01.958819 | orchestrator |
2025-07-28 00:14:01.958829 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-07-28 00:14:02.501286 | orchestrator | changed: [testbed-manager]
2025-07-28 00:14:02.501391 | orchestrator |
2025-07-28 00:14:02.501407 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-07-28 00:14:02.919330 | orchestrator | changed: [testbed-manager]
2025-07-28 00:14:02.919417 | orchestrator |
2025-07-28 00:14:02.919428 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-07-28 00:14:04.183699 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-07-28 00:14:04.183810 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-07-28 00:14:04.183825 | orchestrator |
2025-07-28 00:14:04.183837 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-07-28 00:14:04.847465 | orchestrator | changed: [testbed-manager]
2025-07-28 00:14:04.847609 | orchestrator |
2025-07-28 00:14:04.847627 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-07-28 00:14:05.311725 | orchestrator | ok: [testbed-manager]
2025-07-28 00:14:05.312905 | orchestrator |
2025-07-28 00:14:05.312949 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-07-28 00:14:05.671530 | orchestrator | changed: [testbed-manager]
2025-07-28 00:14:05.671605 | orchestrator |
2025-07-28 00:14:05.671612 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-07-28 00:14:05.712918 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:14:05.713023 | orchestrator |
2025-07-28 00:14:05.713039 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-07-28 00:14:05.782658 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-07-28 00:14:05.782773 | orchestrator |
2025-07-28 00:14:05.782789 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-07-28 00:14:05.840622 | orchestrator | ok: [testbed-manager]
2025-07-28 00:14:05.840703 | orchestrator |
2025-07-28 00:14:05.840719 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-07-28 00:14:07.917631 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-07-28 00:14:07.917777 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-07-28 00:14:07.917793 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-07-28 00:14:07.917805 | orchestrator |
2025-07-28 00:14:07.917818 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-07-28 00:14:08.631775 | orchestrator | changed: [testbed-manager]
2025-07-28 00:14:08.631889 | orchestrator |
2025-07-28 00:14:08.631905 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-07-28 00:14:09.357901 | orchestrator | changed: [testbed-manager]
2025-07-28 00:14:09.358008 | orchestrator |
2025-07-28 00:14:09.358074 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-07-28 00:14:10.026891 | orchestrator | changed: [testbed-manager]
2025-07-28 00:14:10.027003 | orchestrator |
2025-07-28 00:14:10.027022 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-07-28 00:14:10.093812 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-07-28 00:14:10.093912 | orchestrator |
2025-07-28 00:14:10.093926 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-07-28 00:14:10.143323 | orchestrator | ok: [testbed-manager]
2025-07-28 00:14:10.143421 | orchestrator |
2025-07-28 00:14:10.143435 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-07-28 00:14:10.870398 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-07-28 00:14:10.870514 | orchestrator |
2025-07-28 00:14:10.870529 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-07-28 00:14:10.947142 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-07-28 00:14:10.947277 | orchestrator |
2025-07-28 00:14:10.947307 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-07-28 00:14:11.667576 | orchestrator | changed: [testbed-manager]
2025-07-28 00:14:11.667709 | orchestrator |
2025-07-28 00:14:11.667738 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-07-28 00:14:12.345002 | orchestrator | ok: [testbed-manager]
2025-07-28 00:14:12.345148 | orchestrator |
2025-07-28 00:14:12.345167 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-07-28 00:14:12.395680 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:14:12.395771 | orchestrator |
2025-07-28 00:14:12.395786 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-07-28 00:14:12.456199 | orchestrator | ok: [testbed-manager]
2025-07-28 00:14:12.456320 | orchestrator |
2025-07-28 00:14:12.456336 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-07-28 00:14:13.342935 | orchestrator | changed: [testbed-manager]
2025-07-28 00:14:13.343054 | orchestrator |
2025-07-28 00:14:13.343083 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-07-28 00:15:20.329821 | orchestrator | changed: [testbed-manager]
2025-07-28 00:15:20.329936 | orchestrator |
2025-07-28 00:15:20.329953 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-07-28 00:15:21.349763 | orchestrator | ok: [testbed-manager]
2025-07-28 00:15:21.349893 | orchestrator |
2025-07-28 00:15:21.349919 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-07-28 00:15:21.402603 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:15:21.402690 | orchestrator |
2025-07-28 00:15:21.402705 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
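The "Copy manager systemd unit file" and "Manage manager service" tasks above manage a docker-compose-based service through systemd. A hypothetical sketch of what such a unit could look like; the unit contents, paths, and name below are assumptions for illustration, not copied from the osism.services.manager role:

```shell
#!/bin/bash
set -e

# Write an illustrative unit to a temp file (a real deployment would
# install it under /etc/systemd/system/ and run `systemctl daemon-reload`).
unit="$(mktemp)"
cat > "$unit" <<'EOF'
[Unit]
Description=manager service (docker compose)
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/manager
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
EOF
```

Wrapping `docker compose up -d` in a `Type=oneshot` unit with `RemainAfterExit=true` lets systemd track the compose project as "active" even though the foreground process exits, which matches the stop-old/start-new sequencing visible in the tasks above.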
2025-07-28 00:15:24.211400 | orchestrator | changed: [testbed-manager]
2025-07-28 00:15:24.211575 | orchestrator |
2025-07-28 00:15:24.211595 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-07-28 00:15:24.266683 | orchestrator | ok: [testbed-manager]
2025-07-28 00:15:24.266782 | orchestrator |
2025-07-28 00:15:24.266799 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-28 00:15:24.266812 | orchestrator |
2025-07-28 00:15:24.266823 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-07-28 00:15:24.320911 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:15:24.321005 | orchestrator |
2025-07-28 00:15:24.321019 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-07-28 00:16:24.404132 | orchestrator | Pausing for 60 seconds
2025-07-28 00:16:24.404263 | orchestrator | changed: [testbed-manager]
2025-07-28 00:16:24.404283 | orchestrator |
2025-07-28 00:16:24.404297 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-07-28 00:16:29.093016 | orchestrator | changed: [testbed-manager]
2025-07-28 00:16:29.093189 | orchestrator |
2025-07-28 00:16:29.093208 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-07-28 00:17:10.875447 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-07-28 00:17:10.875600 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
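The "Wait for an healthy manager service" handler above probes repeatedly (the log shows a budget of 50 retries, with two probes failing before success). A minimal sketch of that retry-until-success pattern in shell; `retry` and `flaky_probe` are illustrative names, and the probe here is simulated so the sketch runs without the manager service:

```shell
#!/bin/bash
set -e

# Run a command until it succeeds, giving up after $max attempts.
retry() {
    max="$1"; shift
    n=1
    until "$@"; do
        if [ "$n" -ge "$max" ]; then
            return 1
        fi
        n=$((n + 1))
    done
}

# Simulated probe: reports success only on the third attempt, like a
# service that needs a little time to become healthy after startup.
attempts_file="$(mktemp)"
echo 0 > "$attempts_file"
flaky_probe() {
    count=$(($(cat "$attempts_file") + 1))
    echo "$count" > "$attempts_file"
    [ "$count" -ge 3 ]
}

retry 50 flaky_probe
```

A real handler would also sleep between attempts (Ansible's `retries`/`delay` does this); it is omitted here only to keep the sketch fast.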
2025-07-28 00:17:10.875617 | orchestrator | changed: [testbed-manager]
2025-07-28 00:17:10.875630 | orchestrator |
2025-07-28 00:17:10.875643 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-07-28 00:17:20.866442 | orchestrator | changed: [testbed-manager]
2025-07-28 00:17:20.866560 | orchestrator |
2025-07-28 00:17:20.866577 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-07-28 00:17:20.957544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-07-28 00:17:20.957683 | orchestrator |
2025-07-28 00:17:20.957699 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-28 00:17:20.957711 | orchestrator |
2025-07-28 00:17:20.957723 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-07-28 00:17:21.015189 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:17:21.015287 | orchestrator |
2025-07-28 00:17:21.015302 | orchestrator | PLAY RECAP *********************************************************************
2025-07-28 00:17:21.015315 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-07-28 00:17:21.015327 | orchestrator |
2025-07-28 00:17:21.173813 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-28 00:17:21.173919 | orchestrator | + deactivate
2025-07-28 00:17:21.173942 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-07-28 00:17:21.173962 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-28 00:17:21.173990 | orchestrator | + export PATH
2025-07-28 00:17:21.174120 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-07-28 00:17:21.174139 | orchestrator | + '[' -n '' ']'
2025-07-28 00:17:21.174150 | orchestrator | + hash -r
2025-07-28 00:17:21.174161 | orchestrator | + '[' -n '' ']'
2025-07-28 00:17:21.174171 | orchestrator | + unset VIRTUAL_ENV
2025-07-28 00:17:21.174182 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-07-28 00:17:21.174213 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-07-28 00:17:21.174225 | orchestrator | + unset -f deactivate
2025-07-28 00:17:21.174237 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-07-28 00:17:21.180845 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-07-28 00:17:21.180878 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-07-28 00:17:21.180891 | orchestrator | + local max_attempts=60
2025-07-28 00:17:21.180903 | orchestrator | + local name=ceph-ansible
2025-07-28 00:17:21.180914 | orchestrator | + local attempt_num=1
2025-07-28 00:17:21.182328 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-28 00:17:21.232619 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-28 00:17:21.232686 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-07-28 00:17:21.232694 | orchestrator | + local max_attempts=60
2025-07-28 00:17:21.232701 | orchestrator | + local name=kolla-ansible
2025-07-28 00:17:21.232710 | orchestrator | + local attempt_num=1
2025-07-28 00:17:21.233310 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-07-28 00:17:21.273550 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-28 00:17:21.273618 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-07-28 00:17:21.273627 | orchestrator | + local max_attempts=60
2025-07-28 00:17:21.273635 | orchestrator | + local name=osism-ansible
2025-07-28 00:17:21.273643 | orchestrator | + local attempt_num=1
2025-07-28 00:17:21.274148 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-07-28 00:17:21.310195 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-28 00:17:21.310282 | orchestrator | + [[ true == \t\r\u\e ]]
2025-07-28 00:17:21.310295 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-07-28 00:17:22.069210 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-07-28 00:17:22.269273 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-07-28 00:17:22.269359 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-07-28 00:17:22.269369 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-07-28 00:17:22.269376 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-07-28 00:17:22.269385 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-07-28 00:17:22.269412 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-07-28 00:17:22.269419 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-07-28 00:17:22.269425 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy)
2025-07-28 00:17:22.269431 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-07-28
00:17:22.269437 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-07-28 00:17:22.269443 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-07-28 00:17:22.269449 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-07-28 00:17:22.269456 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-07-28 00:17:22.269462 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-07-28 00:17:22.269468 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-07-28 00:17:22.280195 | orchestrator | ++ semver latest 7.0.0 2025-07-28 00:17:22.339509 | orchestrator | + [[ -1 -ge 0 ]] 2025-07-28 00:17:22.339596 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-28 00:17:22.339612 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-07-28 00:17:22.344361 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-07-28 00:17:34.457200 | orchestrator | 2025-07-28 00:17:34 | INFO  | Task 8a7a7729-eafd-4d0b-826b-fb00fc6456ed (resolvconf) was prepared for execution. 2025-07-28 00:17:34.457337 | orchestrator | 2025-07-28 00:17:34 | INFO  | It takes a moment until task 8a7a7729-eafd-4d0b-826b-fb00fc6456ed (resolvconf) has been started and output is visible here. 
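The `wait_for_container_healthy` calls traced above poll `docker inspect` for a container's health status. A minimal sketch of such a helper, reconstructed from the xtrace (the retry loop, sleep interval, and error message are assumptions — only the argument handling and the `docker inspect` probe appear in the log; the stub `docker` function is a stand-in so the sketch runs without a Docker daemon):

```shell
# Hedged reconstruction of a wait_for_container_healthy helper.
# Assumption: retries up to max_attempts with a short sleep between probes.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Probe the container health status until it reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
}

# Stand-in for the real docker CLI so this sketch is self-contained.
docker() { echo healthy; }

wait_for_container_healthy 60 ceph-ansible && echo ok
```

With a real Docker daemon the stub function would be dropped and `/usr/bin/docker` used directly, as in the trace.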
2025-07-28 00:17:52.365499 | orchestrator |
2025-07-28 00:17:52.365594 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-07-28 00:17:52.365606 | orchestrator |
2025-07-28 00:17:52.365615 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-28 00:17:52.365624 | orchestrator | Monday 28 July 2025 00:17:40 +0000 (0:00:00.107) 0:00:00.107 ***********
2025-07-28 00:17:52.365632 | orchestrator | ok: [testbed-manager]
2025-07-28 00:17:52.365640 | orchestrator |
2025-07-28 00:17:52.365647 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-07-28 00:17:52.365655 | orchestrator | Monday 28 July 2025 00:17:44 +0000 (0:00:03.810) 0:00:03.918 ***********
2025-07-28 00:17:52.365663 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:17:52.365671 | orchestrator |
2025-07-28 00:17:52.365681 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-07-28 00:17:52.365689 | orchestrator | Monday 28 July 2025 00:17:44 +0000 (0:00:00.066) 0:00:03.985 ***********
2025-07-28 00:17:52.365711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-07-28 00:17:52.365720 | orchestrator |
2025-07-28 00:17:52.365727 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-07-28 00:17:52.365735 | orchestrator | Monday 28 July 2025 00:17:44 +0000 (0:00:00.092) 0:00:04.077 ***********
2025-07-28 00:17:52.365742 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-07-28 00:17:52.365749 | orchestrator |
2025-07-28 00:17:52.365756 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-07-28 00:17:52.365763 | orchestrator | Monday 28 July 2025 00:17:44 +0000 (0:00:00.092) 0:00:04.169 ***********
2025-07-28 00:17:52.365770 | orchestrator | ok: [testbed-manager]
2025-07-28 00:17:52.365777 | orchestrator |
2025-07-28 00:17:52.365784 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-07-28 00:17:52.365791 | orchestrator | Monday 28 July 2025 00:17:45 +0000 (0:00:01.603) 0:00:05.772 ***********
2025-07-28 00:17:52.365798 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:17:52.365806 | orchestrator |
2025-07-28 00:17:52.365813 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-07-28 00:17:52.365820 | orchestrator | Monday 28 July 2025 00:17:45 +0000 (0:00:00.070) 0:00:05.843 ***********
2025-07-28 00:17:52.365827 | orchestrator | ok: [testbed-manager]
2025-07-28 00:17:52.365834 | orchestrator |
2025-07-28 00:17:52.365841 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-07-28 00:17:52.365848 | orchestrator | Monday 28 July 2025 00:17:46 +0000 (0:00:00.745) 0:00:06.588 ***********
2025-07-28 00:17:52.365855 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:17:52.365862 | orchestrator |
2025-07-28 00:17:52.365870 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-07-28 00:17:52.365878 | orchestrator | Monday 28 July 2025 00:17:46 +0000 (0:00:00.081) 0:00:06.670 ***********
2025-07-28 00:17:52.365885 | orchestrator | changed: [testbed-manager]
2025-07-28 00:17:52.365892 | orchestrator |
2025-07-28 00:17:52.365899 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-07-28 00:17:52.365906 | orchestrator | Monday 28 July 2025 00:17:47 +0000 (0:00:00.963) 0:00:07.633 ***********
2025-07-28 00:17:52.365913 | orchestrator | changed: [testbed-manager]
2025-07-28 00:17:52.365920 | orchestrator |
2025-07-28 00:17:52.365927 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-07-28 00:17:52.365934 | orchestrator | Monday 28 July 2025 00:17:49 +0000 (0:00:01.648) 0:00:09.282 ***********
2025-07-28 00:17:52.365941 | orchestrator | ok: [testbed-manager]
2025-07-28 00:17:52.365948 | orchestrator |
2025-07-28 00:17:52.365955 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-07-28 00:17:52.365962 | orchestrator | Monday 28 July 2025 00:17:50 +0000 (0:00:01.220) 0:00:10.502 ***********
2025-07-28 00:17:52.365970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-07-28 00:17:52.365977 | orchestrator |
2025-07-28 00:17:52.365988 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-07-28 00:17:52.366091 | orchestrator | Monday 28 July 2025 00:17:50 +0000 (0:00:00.079) 0:00:10.582 ***********
2025-07-28 00:17:52.366102 | orchestrator | changed: [testbed-manager]
2025-07-28 00:17:52.366110 | orchestrator |
2025-07-28 00:17:52.366119 | orchestrator | PLAY RECAP *********************************************************************
2025-07-28 00:17:52.366129 | orchestrator | testbed-manager : ok=10 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2025-07-28 00:17:52.366137 | orchestrator |
2025-07-28 00:17:52.366145 | orchestrator |
2025-07-28 00:17:52.366153 | orchestrator | TASKS RECAP ********************************************************************
2025-07-28 00:17:52.366169 | orchestrator | Monday 28 July 2025 00:17:51 +0000 (0:00:01.326) 0:00:11.909 ***********
2025-07-28 00:17:52.366177 | orchestrator | ===============================================================================
2025-07-28 00:17:52.366185 | orchestrator | Gathering Facts --------------------------------------------------------- 3.81s
2025-07-28 00:17:52.366194 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.65s
2025-07-28 00:17:52.366202 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.60s
2025-07-28 00:17:52.366210 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.33s
2025-07-28 00:17:52.366218 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.22s
2025-07-28 00:17:52.366227 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.96s
2025-07-28 00:17:52.366250 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.75s
2025-07-28 00:17:52.366259 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2025-07-28 00:17:52.366267 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2025-07-28 00:17:52.366275 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-07-28 00:17:52.366284 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-07-28 00:17:52.366292 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2025-07-28 00:17:52.366300 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-07-28 00:17:52.571702 | orchestrator | + osism apply sshconfig
2025-07-28 00:18:04.378855 | orchestrator | 2025-07-28 00:18:04 | INFO | Task c1add5b4-1349-45e6-a06d-510d31ae36dd (sshconfig) was prepared for execution.
2025-07-28 00:18:04.379016 | orchestrator | 2025-07-28 00:18:04 | INFO | It takes a moment until task c1add5b4-1349-45e6-a06d-510d31ae36dd (sshconfig) has been started and output is visible here.
2025-07-28 00:18:20.559460 | orchestrator |
2025-07-28 00:18:20.559606 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-07-28 00:18:20.559632 | orchestrator |
2025-07-28 00:18:20.559651 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-07-28 00:18:20.559670 | orchestrator | Monday 28 July 2025 00:18:10 +0000 (0:00:00.108) 0:00:00.108 ***********
2025-07-28 00:18:20.559688 | orchestrator | ok: [testbed-manager]
2025-07-28 00:18:20.559708 | orchestrator |
2025-07-28 00:18:20.559725 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-07-28 00:18:20.559743 | orchestrator | Monday 28 July 2025 00:18:10 +0000 (0:00:00.634) 0:00:00.742 ***********
2025-07-28 00:18:20.559760 | orchestrator | changed: [testbed-manager]
2025-07-28 00:18:20.559779 | orchestrator |
2025-07-28 00:18:20.559798 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-07-28 00:18:20.559816 | orchestrator | Monday 28 July 2025 00:18:11 +0000 (0:00:00.781) 0:00:01.524 ***********
2025-07-28 00:18:20.559834 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-07-28 00:18:20.559851 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-07-28 00:18:20.559868 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-07-28 00:18:20.559886 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-07-28 00:18:20.559904 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-07-28 00:18:20.559922 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-07-28 00:18:20.559999 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-07-28 00:18:20.560024 | orchestrator |
2025-07-28 00:18:20.560045 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-07-28 00:18:20.560067 | orchestrator | Monday 28 July 2025 00:18:19 +0000 (0:00:07.510) 0:00:09.034 ***********
2025-07-28 00:18:20.560124 | orchestrator | skipping: [testbed-manager]
2025-07-28 00:18:20.560146 | orchestrator |
2025-07-28 00:18:20.560167 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-07-28 00:18:20.560186 | orchestrator | Monday 28 July 2025 00:18:19 +0000 (0:00:00.054) 0:00:09.089 ***********
2025-07-28 00:18:20.560205 | orchestrator | changed: [testbed-manager]
2025-07-28 00:18:20.560225 | orchestrator |
2025-07-28 00:18:20.560244 | orchestrator | PLAY RECAP *********************************************************************
2025-07-28 00:18:20.560265 | orchestrator | testbed-manager : ok=4 changed=3 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
2025-07-28 00:18:20.560318 | orchestrator |
2025-07-28 00:18:20.560339 | orchestrator |
2025-07-28 00:18:20.560356 | orchestrator | TASKS RECAP ********************************************************************
2025-07-28 00:18:20.560372 | orchestrator | Monday 28 July 2025 00:18:19 +0000 (0:00:00.767) 0:00:09.857 ***********
2025-07-28 00:18:20.560390 | orchestrator | ===============================================================================
2025-07-28 00:18:20.560407 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 7.51s
2025-07-28 00:18:20.560424 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.78s
2025-07-28 00:18:20.560440 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.77s
2025-07-28 00:18:20.560457 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.63s
2025-07-28 00:18:20.560474 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.05s
2025-07-28 00:18:20.848937 | orchestrator | + osism apply known-hosts
2025-07-28 00:18:32.782395 | orchestrator | 2025-07-28 00:18:32 | INFO | Task 2ef59371-c019-4734-95b4-f71cc1424a2c (known-hosts) was prepared for execution.
2025-07-28 00:18:32.782531 | orchestrator | 2025-07-28 00:18:32 | INFO | It takes a moment until task 2ef59371-c019-4734-95b4-f71cc1424a2c (known-hosts) has been started and output is visible here.
2025-07-28 00:18:46.840022 | orchestrator | 2025-07-28 00:18:46 | INFO | Task 560e7556-0d5a-4f70-abde-7ade1088ac47 (known-hosts) was prepared for execution.
2025-07-28 00:18:46.840146 | orchestrator | 2025-07-28 00:18:46 | INFO | It takes a moment until task 560e7556-0d5a-4f70-abde-7ade1088ac47 (known-hosts) has been started and output is visible here.
2025-07-28 00:18:59.374013 | orchestrator |
2025-07-28 00:18:59.374187 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-07-28 00:18:59.374203 | orchestrator |
2025-07-28 00:18:59.374215 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-07-28 00:18:59.374228 | orchestrator | Monday 28 July 2025 00:18:38 +0000 (0:00:00.153) 0:00:00.153 ***********
2025-07-28 00:18:59.374240 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-07-28 00:18:59.374252 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-07-28 00:18:59.374263 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-07-28 00:18:59.374274 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-07-28 00:18:59.374284 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-07-28 00:18:59.374295 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-07-28 00:18:59.374306 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-07-28 00:18:59.374316 | orchestrator |
2025-07-28 00:18:59.374327 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-07-28 00:18:59.374339 | orchestrator | Monday 28 July 2025 00:18:46 +0000 (0:00:07.238) 0:00:07.392 ***********
2025-07-28 00:18:59.374351 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-07-28 00:18:59.374364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-07-28 00:18:59.374396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-07-28 00:18:59.374418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-07-28 00:18:59.374429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-07-28 00:18:59.374440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-07-28 00:18:59.374451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-07-28 00:18:59.374462 | orchestrator |
2025-07-28 00:18:59.374473 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-28 00:18:59.374486 | orchestrator | Monday 28 July 2025 00:18:46 +0000 (0:00:00.188) 0:00:07.581 ***********
2025-07-28 00:18:59.374499 | orchestrator | [ERROR]: Task failed: Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result.
2025-07-28 00:18:59.374513 | orchestrator |
2025-07-28 00:18:59.374526 | orchestrator | Task failed.
2025-07-28 00:18:59.374539 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:2:3
2025-07-28 00:18:59.374552 | orchestrator |
2025-07-28 00:18:59.374564 | orchestrator | 1 ---
2025-07-28 00:18:59.374577 | orchestrator | 2 - name: Write scanned known_hosts entries
2025-07-28 00:18:59.374590 | orchestrator |   ^ column 3
2025-07-28 00:18:59.374602 | orchestrator |
2025-07-28 00:18:59.374614 | orchestrator | <<< caused by >>>
2025-07-28 00:18:59.374626 | orchestrator |
2025-07-28 00:18:59.374639 | orchestrator | Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result.
2025-07-28 00:18:59.374651 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:12:7
2025-07-28 00:18:59.374661 | orchestrator |
2025-07-28 00:18:59.374672 | orchestrator | 10 when:
2025-07-28 00:18:59.374683 | orchestrator | 11 - item['stdout_lines'] is defined
2025-07-28 00:18:59.374694 | orchestrator | 12 - item['stdout_lines'] | length
2025-07-28 00:18:59.374705 | orchestrator |    ^ column 7
2025-07-28 00:18:59.374716 | orchestrator |
2025-07-28 00:18:59.374727 | orchestrator | Broken conditionals can be temporarily allowed with the `ALLOW_BROKEN_CONDITIONALS` configuration option.
2025-07-28 00:18:59.374738 | orchestrator |
2025-07-28 00:18:59.374749 | orchestrator | failed: [testbed-manager] (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNlvHRqXfoZLeuB/kDTj+J5aJ7NWS8vVAejG7O+oJZlviuR2h19ACJd+uDIeylxXzeVz0KHmJKyDWRELbH71PX0=) => changed=false
2025-07-28 00:18:59.374763 | orchestrator |   ansible_loop_var: inner_item
2025-07-28 00:18:59.374774 | orchestrator |   inner_item: testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNlvHRqXfoZLeuB/kDTj+J5aJ7NWS8vVAejG7O+oJZlviuR2h19ACJd+uDIeylxXzeVz0KHmJKyDWRELbH71PX0=
2025-07-28 00:18:59.374785 | orchestrator |   msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-28 00:18:59.374818 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtxuWMc6M2MlWqnZwyhu8ddwrQnvwDsEYrLsReKxhAAQHgBxGaCKxl0SYyMIjmORQmRNiD6K5DDwqYtJC8iiQxUR/Ue0JeASrcEXLPXTTTPcz261C5uwGCcR6M+22z36tLanbYpqII4dDsuL66CWKP/8npXyv5fEztokip+Vrj8ZTb0K9Lys8FA5W0d/ZuNIlBzeJOdHMsVKpR84q0+CWVICyQk3m9U7si5hxchaxF+S2wMcBRNT4QzX6c+a9oUzaNitbHRsGKlu9kVTHm5SXt/i4O3kCNF/4QZXEAbG2rz1+ev5DV40yFIoziGhMbq5/k6Xlg7ZlAkfWGnpIAMdbRKrnrrDeMAX8ovlfF2uwjd6WxOyq2AgUmFr4sFxsYdmTg5Io2/IL9HXlVp7aSSTXFG0lyUM3XpE8i/ifs4VCMYhkAndvo/LaPdJPeimTFpocoImpJLBpiBoPf+hxrSS4/m6EwK9U8NKe0yKXvTGxcqetymGHoKnQJWEmtex4kVQE=) => changed=false
2025-07-28 00:18:59.374841 | orchestrator |   ansible_loop_var: inner_item
2025-07-28 00:18:59.374853 | orchestrator |   inner_item: testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtxuWMc6M2MlWqnZwyhu8ddwrQnvwDsEYrLsReKxhAAQHgBxGaCKxl0SYyMIjmORQmRNiD6K5DDwqYtJC8iiQxUR/Ue0JeASrcEXLPXTTTPcz261C5uwGCcR6M+22z36tLanbYpqII4dDsuL66CWKP/8npXyv5fEztokip+Vrj8ZTb0K9Lys8FA5W0d/ZuNIlBzeJOdHMsVKpR84q0+CWVICyQk3m9U7si5hxchaxF+S2wMcBRNT4QzX6c+a9oUzaNitbHRsGKlu9kVTHm5SXt/i4O3kCNF/4QZXEAbG2rz1+ev5DV40yFIoziGhMbq5/k6Xlg7ZlAkfWGnpIAMdbRKrnrrDeMAX8ovlfF2uwjd6WxOyq2AgUmFr4sFxsYdmTg5Io2/IL9HXlVp7aSSTXFG0lyUM3XpE8i/ifs4VCMYhkAndvo/LaPdJPeimTFpocoImpJLBpiBoPf+hxrSS4/m6EwK9U8NKe0yKXvTGxcqetymGHoKnQJWEmtex4kVQE=
2025-07-28 00:18:59.374865 | orchestrator |   msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-28 00:18:59.374961 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEPFYVKUCVtcOrEpyYifKEgHZgVwNeRSF5fGsP/ROLrN) => changed=false
2025-07-28 00:18:59.374975 | orchestrator |   ansible_loop_var: inner_item
2025-07-28 00:18:59.374986 | orchestrator |   inner_item: testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEPFYVKUCVtcOrEpyYifKEgHZgVwNeRSF5fGsP/ROLrN
2025-07-28 00:18:59.374997 | orchestrator |   msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-28 00:18:59.375008 | orchestrator |
2025-07-28 00:18:59.375019 | orchestrator | PLAY RECAP *********************************************************************
2025-07-28 00:18:59.375030 | orchestrator | testbed-manager : ok=8 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
2025-07-28 00:18:59.375041 | orchestrator |
2025-07-28 00:18:59.375052 | orchestrator |
2025-07-28 00:18:59.375063 | orchestrator | TASKS RECAP ********************************************************************
2025-07-28 00:18:59.375074 | orchestrator | Monday 28 July 2025 00:18:46 +0000 (0:00:00.105) 0:00:07.687 ***********
2025-07-28 00:18:59.375084 | orchestrator | ===============================================================================
2025-07-28 00:18:59.375095 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 7.24s
2025-07-28 00:18:59.375106 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s
2025-07-28 00:18:59.375117 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.11s
2025-07-28 00:18:59.375127 | orchestrator |
2025-07-28 00:18:59.375138 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-07-28 00:18:59.375148 | orchestrator |
2025-07-28 00:18:59.375159 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-07-28 00:18:59.375170 | orchestrator | Monday 28 July 2025 00:18:52 +0000 (0:00:00.116) 0:00:00.116 ***********
2025-07-28 00:18:59.375181 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-07-28 00:18:59.375191 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-07-28 00:18:59.375202 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-07-28 00:18:59.375213 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-07-28 00:18:59.375223 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-07-28 00:18:59.375234 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-07-28 00:18:59.375244 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-07-28 00:18:59.375255 | orchestrator |
2025-07-28 00:18:59.375266 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-07-28 00:18:59.375277 | orchestrator | Monday 28 July 2025 00:18:59 +0000 (0:00:06.338) 0:00:06.454 ***********
2025-07-28 00:18:59.375295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-07-28 00:18:59.375306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-07-28 00:18:59.375317 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-07-28 00:18:59.375334 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-07-28 00:19:00.044714 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-07-28 00:19:00.044812 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-07-28 00:19:00.044824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-07-28 00:19:00.044834 | orchestrator |
2025-07-28 00:19:00.044845 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-28 00:19:00.044854 | orchestrator | Monday 28 July 2025 00:18:59 +0000 (0:00:00.187) 0:00:06.642 ***********
2025-07-28 00:19:00.044864 | orchestrator | [ERROR]: Task failed: Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result.
2025-07-28 00:19:00.044874 | orchestrator |
2025-07-28 00:19:00.044883 | orchestrator | Task failed.
2025-07-28 00:19:00.044894 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:2:3
2025-07-28 00:19:00.044903 | orchestrator |
2025-07-28 00:19:00.044913 | orchestrator | 1 ---
2025-07-28 00:19:00.044921 | orchestrator | 2 - name: Write scanned known_hosts entries
2025-07-28 00:19:00.044930 | orchestrator |   ^ column 3
2025-07-28 00:19:00.044985 | orchestrator |
2025-07-28 00:19:00.044994 | orchestrator | <<< caused by >>>
2025-07-28 00:19:00.045003 | orchestrator |
2025-07-28 00:19:00.045013 | orchestrator | Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result.
2025-07-28 00:19:00.045022 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:12:7
2025-07-28 00:19:00.045030 | orchestrator |
2025-07-28 00:19:00.045039 | orchestrator | 10 when:
2025-07-28 00:19:00.045048 | orchestrator | 11 - item['stdout_lines'] is defined
2025-07-28 00:19:00.045057 | orchestrator | 12 - item['stdout_lines'] | length
2025-07-28 00:19:00.045066 | orchestrator |    ^ column 7
2025-07-28 00:19:00.045075 | orchestrator |
2025-07-28 00:19:00.045101 | orchestrator | Broken conditionals can be temporarily allowed with the `ALLOW_BROKEN_CONDITIONALS` configuration option.
2025-07-28 00:19:00.045111 | orchestrator |
2025-07-28 00:19:00.045120 | orchestrator | failed: [testbed-manager] (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNlvHRqXfoZLeuB/kDTj+J5aJ7NWS8vVAejG7O+oJZlviuR2h19ACJd+uDIeylxXzeVz0KHmJKyDWRELbH71PX0=) => changed=false
2025-07-28 00:19:00.045132 | orchestrator |   ansible_loop_var: inner_item
2025-07-28 00:19:00.045141 | orchestrator |   inner_item: testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNlvHRqXfoZLeuB/kDTj+J5aJ7NWS8vVAejG7O+oJZlviuR2h19ACJd+uDIeylxXzeVz0KHmJKyDWRELbH71PX0=
2025-07-28 00:19:00.045169 | orchestrator |   msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-28 00:19:00.045181 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtxuWMc6M2MlWqnZwyhu8ddwrQnvwDsEYrLsReKxhAAQHgBxGaCKxl0SYyMIjmORQmRNiD6K5DDwqYtJC8iiQxUR/Ue0JeASrcEXLPXTTTPcz261C5uwGCcR6M+22z36tLanbYpqII4dDsuL66CWKP/8npXyv5fEztokip+Vrj8ZTb0K9Lys8FA5W0d/ZuNIlBzeJOdHMsVKpR84q0+CWVICyQk3m9U7si5hxchaxF+S2wMcBRNT4QzX6c+a9oUzaNitbHRsGKlu9kVTHm5SXt/i4O3kCNF/4QZXEAbG2rz1+ev5DV40yFIoziGhMbq5/k6Xlg7ZlAkfWGnpIAMdbRKrnrrDeMAX8ovlfF2uwjd6WxOyq2AgUmFr4sFxsYdmTg5Io2/IL9HXlVp7aSSTXFG0lyUM3XpE8i/ifs4VCMYhkAndvo/LaPdJPeimTFpocoImpJLBpiBoPf+hxrSS4/m6EwK9U8NKe0yKXvTGxcqetymGHoKnQJWEmtex4kVQE=) => changed=false
2025-07-28 00:19:00.045192 | orchestrator |   ansible_loop_var: inner_item
2025-07-28 00:19:00.045201 | orchestrator |   inner_item: testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtxuWMc6M2MlWqnZwyhu8ddwrQnvwDsEYrLsReKxhAAQHgBxGaCKxl0SYyMIjmORQmRNiD6K5DDwqYtJC8iiQxUR/Ue0JeASrcEXLPXTTTPcz261C5uwGCcR6M+22z36tLanbYpqII4dDsuL66CWKP/8npXyv5fEztokip+Vrj8ZTb0K9Lys8FA5W0d/ZuNIlBzeJOdHMsVKpR84q0+CWVICyQk3m9U7si5hxchaxF+S2wMcBRNT4QzX6c+a9oUzaNitbHRsGKlu9kVTHm5SXt/i4O3kCNF/4QZXEAbG2rz1+ev5DV40yFIoziGhMbq5/k6Xlg7ZlAkfWGnpIAMdbRKrnrrDeMAX8ovlfF2uwjd6WxOyq2AgUmFr4sFxsYdmTg5Io2/IL9HXlVp7aSSTXFG0lyUM3XpE8i/ifs4VCMYhkAndvo/LaPdJPeimTFpocoImpJLBpiBoPf+hxrSS4/m6EwK9U8NKe0yKXvTGxcqetymGHoKnQJWEmtex4kVQE=
2025-07-28 00:19:00.045211 | orchestrator |   msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-28 00:19:00.045220 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEPFYVKUCVtcOrEpyYifKEgHZgVwNeRSF5fGsP/ROLrN) => changed=false
2025-07-28 00:19:00.045230 | orchestrator |  ansible_loop_var: inner_item
2025-07-28 00:19:00.045254 | orchestrator |  inner_item: testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEPFYVKUCVtcOrEpyYifKEgHZgVwNeRSF5fGsP/ROLrN
2025-07-28 00:19:00.045264 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-28 00:19:00.045275 | orchestrator |
2025-07-28 00:19:00.045285 | orchestrator | PLAY RECAP *********************************************************************
2025-07-28 00:19:00.045296 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-07-28 00:19:00.045306 | orchestrator |
2025-07-28 00:19:00.045316 | orchestrator |
2025-07-28 00:19:00.045326 | orchestrator | TASKS RECAP ********************************************************************
2025-07-28 00:19:00.045337 | orchestrator | Monday 28 July 2025 00:18:59 +0000 (0:00:00.122) 0:00:06.765 ***********
2025-07-28 00:19:00.045347 | orchestrator | ===============================================================================
2025-07-28 00:19:00.045357 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.34s
2025-07-28 00:19:00.045368 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s
2025-07-28 00:19:00.045378 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.12s
2025-07-28 00:19:00.547145 | orchestrator | ERROR
2025-07-28 00:19:00.547607 | orchestrator | {
2025-07-28 00:19:00.547724 | orchestrator | "delta": "0:05:42.876157",
2025-07-28 00:19:00.547800 | orchestrator | "end": "2025-07-28 00:19:00.317472",
2025-07-28 00:19:00.547863 | orchestrator | "msg": "non-zero return code",
2025-07-28 00:19:00.547919 | orchestrator | "rc": 2,
2025-07-28 00:19:00.547974 | orchestrator | "start": "2025-07-28 00:13:17.441315"
2025-07-28 00:19:00.548026 | orchestrator | } failure
2025-07-28 00:19:00.568433 |
2025-07-28 00:19:00.568913 | PLAY RECAP
2025-07-28 00:19:00.569004 | orchestrator | ok: 20 changed: 7 unreachable: 0 failed: 1 skipped: 2 rescued: 0 ignored: 0
2025-07-28 00:19:00.569040 |
2025-07-28 00:19:00.753816 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-07-28 00:19:00.756402 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-28 00:19:01.543004 |
2025-07-28 00:19:01.543180 | PLAY [Post output play]
2025-07-28 00:19:01.559971 |
2025-07-28 00:19:01.560128 | LOOP [stage-output : Register sources]
2025-07-28 00:19:01.640775 |
2025-07-28 00:19:01.641064 | TASK [stage-output : Check sudo]
2025-07-28 00:19:02.899627 | orchestrator | sudo: a password is required
2025-07-28 00:19:03.180307 | orchestrator | ok: Runtime: 0:00:00.420903
2025-07-28 00:19:03.192410 |
2025-07-28 00:19:03.192584 | LOOP [stage-output : Set source and destination for files and folders]
2025-07-28 00:19:03.233567 |
2025-07-28 00:19:03.233861 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-07-28 00:19:03.304692 | orchestrator | ok
2025-07-28 00:19:03.313461 |
2025-07-28 00:19:03.313619 | LOOP [stage-output : Ensure target folders exist]
2025-07-28 00:19:03.781066 | orchestrator | ok: "docs"
2025-07-28 00:19:03.781429 |
2025-07-28 00:19:04.010689 | orchestrator | ok: "artifacts"
2025-07-28 00:19:04.282325 | orchestrator | ok: "logs"
2025-07-28 00:19:04.299478 |
2025-07-28 00:19:04.299626 | LOOP [stage-output : Copy files and folders to staging folder]
2025-07-28 00:19:04.346487 |
2025-07-28 00:19:04.346806 | TASK [stage-output : Make all log files readable]
2025-07-28 00:19:04.655085 | orchestrator | ok
2025-07-28 00:19:04.666180 |
2025-07-28 00:19:04.666346 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-07-28 00:19:04.701027 | orchestrator | skipping: Conditional result was False
2025-07-28 00:19:04.709348 |
2025-07-28 00:19:04.709486 | TASK [stage-output : Discover log files for compression]
2025-07-28 00:19:04.733494 | orchestrator | skipping: Conditional result was False
2025-07-28 00:19:04.742622 |
2025-07-28 00:19:04.742766 | LOOP [stage-output : Archive everything from logs]
2025-07-28 00:19:04.790937 |
2025-07-28 00:19:04.791180 | PLAY [Post cleanup play]
2025-07-28 00:19:04.800752 |
2025-07-28 00:19:04.800888 | TASK [Set cloud fact (Zuul deployment)]
2025-07-28 00:19:04.852835 | orchestrator | ok
2025-07-28 00:19:04.861642 |
2025-07-28 00:19:04.861781 | TASK [Set cloud fact (local deployment)]
2025-07-28 00:19:04.895717 | orchestrator | skipping: Conditional result was False
2025-07-28 00:19:04.905831 |
2025-07-28 00:19:04.905962 | TASK [Clean the cloud environment]
2025-07-28 00:19:05.550693 | orchestrator | 2025-07-28 00:19:05 - clean up servers
2025-07-28 00:19:06.353384 | orchestrator | 2025-07-28 00:19:06 - testbed-manager
2025-07-28 00:19:06.446398 | orchestrator | 2025-07-28 00:19:06 - testbed-node-5
2025-07-28 00:19:06.553150 | orchestrator | 2025-07-28 00:19:06 - testbed-node-3
2025-07-28 00:19:06.656209 | orchestrator | 2025-07-28 00:19:06 - testbed-node-0
2025-07-28 00:19:06.749087 | orchestrator | 2025-07-28 00:19:06 - testbed-node-1
2025-07-28 00:19:06.857479 | orchestrator | 2025-07-28 00:19:06 - testbed-node-2
2025-07-28 00:19:06.966625 | orchestrator | 2025-07-28 00:19:06 - testbed-node-4
2025-07-28 00:19:07.070422 | orchestrator | 2025-07-28 00:19:07 - clean up keypairs
2025-07-28 00:19:07.091819 | orchestrator | 2025-07-28 00:19:07 - testbed
2025-07-28 00:19:07.122518 | orchestrator | 2025-07-28 00:19:07 - wait for servers to be gone
2025-07-28 00:19:18.039495 | orchestrator | 2025-07-28 00:19:18 - clean up ports
2025-07-28 00:19:18.240183 | orchestrator | 2025-07-28 00:19:18 - 2585180d-6be2-4784-b01a-d7160ec7a141
2025-07-28 00:19:18.492831 | orchestrator | 2025-07-28 00:19:18 - 33297e72-f490-4c63-9b54-9b0632a7e9a3
2025-07-28 00:19:19.326688 | orchestrator | 2025-07-28 00:19:19 - 3c372766-3e54-4b8d-87ac-101ba34c6670
2025-07-28 00:19:19.554914 | orchestrator | 2025-07-28 00:19:19 - 9753653b-b988-438a-a5c4-42693092da0b
2025-07-28 00:19:19.829952 | orchestrator | 2025-07-28 00:19:19 - a6bdb119-1253-46cc-8c80-23f29fa467bf
2025-07-28 00:19:20.205832 | orchestrator | 2025-07-28 00:19:20 - cae88ddc-74aa-4978-bf3f-80aa8d1f2b19
2025-07-28 00:19:20.742738 | orchestrator | 2025-07-28 00:19:20 - ccad39cd-65e8-4e85-87bb-4be6a2e316ce
2025-07-28 00:19:20.982412 | orchestrator | 2025-07-28 00:19:20 - clean up volumes
2025-07-28 00:19:21.109213 | orchestrator | 2025-07-28 00:19:21 - testbed-volume-2-node-base
2025-07-28 00:19:21.148440 | orchestrator | 2025-07-28 00:19:21 - testbed-volume-4-node-base
2025-07-28 00:19:21.197777 | orchestrator | 2025-07-28 00:19:21 - testbed-volume-3-node-base
2025-07-28 00:19:21.249619 | orchestrator | 2025-07-28 00:19:21 - testbed-volume-0-node-base
2025-07-28 00:19:21.291627 | orchestrator | 2025-07-28 00:19:21 - testbed-volume-manager-base
2025-07-28 00:19:21.342695 | orchestrator | 2025-07-28 00:19:21 - testbed-volume-1-node-base
2025-07-28 00:19:21.389836 | orchestrator | 2025-07-28 00:19:21 - testbed-volume-5-node-base
2025-07-28 00:19:21.436555 | orchestrator | 2025-07-28 00:19:21 - testbed-volume-1-node-4
2025-07-28 00:19:21.481268 | orchestrator | 2025-07-28 00:19:21 - testbed-volume-7-node-4
2025-07-28 00:19:21.533599 | orchestrator | 2025-07-28 00:19:21 - testbed-volume-2-node-5
2025-07-28 00:19:21.581793 | orchestrator | 2025-07-28 00:19:21 - testbed-volume-3-node-3
2025-07-28 00:19:21.630783 | orchestrator | 2025-07-28 00:19:21 - testbed-volume-0-node-3
2025-07-28 00:19:21.682371 | orchestrator | 2025-07-28 00:19:21 - testbed-volume-4-node-4
2025-07-28 00:19:21.721882 | orchestrator | 2025-07-28 00:19:21 - testbed-volume-5-node-5
2025-07-28 00:19:21.766595 | orchestrator | 2025-07-28 00:19:21 - testbed-volume-6-node-3
2025-07-28 00:19:21.811954 | orchestrator | 2025-07-28 00:19:21 - testbed-volume-8-node-5
2025-07-28 00:19:21.852353 | orchestrator | 2025-07-28 00:19:21 - disconnect routers
2025-07-28 00:19:21.984805 | orchestrator | 2025-07-28 00:19:21 - testbed
2025-07-28 00:19:23.074713 | orchestrator | 2025-07-28 00:19:23 - clean up subnets
2025-07-28 00:19:23.119859 | orchestrator | 2025-07-28 00:19:23 - subnet-testbed-management
2025-07-28 00:19:23.304200 | orchestrator | 2025-07-28 00:19:23 - clean up networks
2025-07-28 00:19:23.484583 | orchestrator | 2025-07-28 00:19:23 - net-testbed-management
2025-07-28 00:19:23.765727 | orchestrator | 2025-07-28 00:19:23 - clean up security groups
2025-07-28 00:19:23.821147 | orchestrator | 2025-07-28 00:19:23 - testbed-management
2025-07-28 00:19:23.940816 | orchestrator | 2025-07-28 00:19:23 - testbed-node
2025-07-28 00:19:24.056106 | orchestrator | 2025-07-28 00:19:24 - clean up floating ips
2025-07-28 00:19:24.096841 | orchestrator | 2025-07-28 00:19:24 - 81.163.193.61
2025-07-28 00:19:24.441525 | orchestrator | 2025-07-28 00:19:24 - clean up routers
2025-07-28 00:19:24.548742 | orchestrator | 2025-07-28 00:19:24 - testbed
2025-07-28 00:19:25.968728 | orchestrator | ok: Runtime: 0:00:20.603898
2025-07-28 00:19:25.973650 |
2025-07-28 00:19:25.973813 | PLAY RECAP
2025-07-28 00:19:25.973938 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-07-28 00:19:25.973999 |
2025-07-28 00:19:26.126002 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-28 00:19:26.128593 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-28 00:19:26.885832 |
2025-07-28 00:19:26.886016 | PLAY [Cleanup play]
2025-07-28 00:19:26.903230 |
2025-07-28 00:19:26.903435 | TASK [Set cloud fact (Zuul deployment)]
2025-07-28 00:19:26.971774 | orchestrator | ok
2025-07-28 00:19:26.984374 |
2025-07-28 00:19:26.984629 | TASK [Set cloud fact (local deployment)]
2025-07-28 00:19:27.020128 | orchestrator | skipping: Conditional result was False
2025-07-28 00:19:27.036609 |
2025-07-28 00:19:27.036789 | TASK [Clean the cloud environment]
2025-07-28 00:19:28.264294 | orchestrator | 2025-07-28 00:19:28 - clean up servers
2025-07-28 00:19:28.730944 | orchestrator | 2025-07-28 00:19:28 - clean up keypairs
2025-07-28 00:19:28.748003 | orchestrator | 2025-07-28 00:19:28 - wait for servers to be gone
2025-07-28 00:19:28.797111 | orchestrator | 2025-07-28 00:19:28 - clean up ports
2025-07-28 00:19:28.890364 | orchestrator | 2025-07-28 00:19:28 - clean up volumes
2025-07-28 00:19:28.968094 | orchestrator | 2025-07-28 00:19:28 - disconnect routers
2025-07-28 00:19:28.998126 | orchestrator | 2025-07-28 00:19:28 - clean up subnets
2025-07-28 00:19:29.033514 | orchestrator | 2025-07-28 00:19:29 - clean up networks
2025-07-28 00:19:29.190422 | orchestrator | 2025-07-28 00:19:29 - clean up security groups
2025-07-28 00:19:29.230688 | orchestrator | 2025-07-28 00:19:29 - clean up floating ips
2025-07-28 00:19:29.257642 | orchestrator | 2025-07-28 00:19:29 - clean up routers
2025-07-28 00:19:29.578426 | orchestrator | ok: Runtime: 0:00:01.411284
2025-07-28 00:19:29.582329 |
2025-07-28 00:19:29.582502 | PLAY RECAP
2025-07-28 00:19:29.582627 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-07-28 00:19:29.582690 |
2025-07-28 00:19:29.708541 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-28 00:19:29.709601 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-28 00:19:30.472139 |
2025-07-28 00:19:30.472347 | PLAY [Base post-fetch]
2025-07-28 00:19:30.487909 |
2025-07-28 00:19:30.488042 | TASK [fetch-output : Set log path for multiple nodes]
2025-07-28 00:19:30.554013 | orchestrator | skipping: Conditional result was False
2025-07-28 00:19:30.567838 |
2025-07-28 00:19:30.568052 | TASK [fetch-output : Set log path for single node]
2025-07-28 00:19:30.627425 | orchestrator | ok
2025-07-28 00:19:30.636806 |
2025-07-28 00:19:30.636955 | LOOP [fetch-output : Ensure local output dirs]
2025-07-28 00:19:31.134070 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/469a9d5ea7f2474388e6b39baa9a81ff/work/logs"
2025-07-28 00:19:31.419045 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/469a9d5ea7f2474388e6b39baa9a81ff/work/artifacts"
2025-07-28 00:19:31.706329 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/469a9d5ea7f2474388e6b39baa9a81ff/work/docs"
2025-07-28 00:19:31.728591 |
2025-07-28 00:19:31.728774 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-07-28 00:19:32.745450 | orchestrator | changed: .d..t...... ./
2025-07-28 00:19:32.745697 | orchestrator | changed: All items complete
2025-07-28 00:19:32.745733 |
2025-07-28 00:19:33.459773 | orchestrator | changed: .d..t...... ./
2025-07-28 00:19:34.189763 | orchestrator | changed: .d..t...... ./
2025-07-28 00:19:34.204862 |
2025-07-28 00:19:34.205003 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-07-28 00:19:34.239994 | orchestrator | skipping: Conditional result was False
2025-07-28 00:19:34.243998 | orchestrator | skipping: Conditional result was False
2025-07-28 00:19:34.260188 |
2025-07-28 00:19:34.260290 | PLAY RECAP
2025-07-28 00:19:34.260341 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-07-28 00:19:34.260367 |
2025-07-28 00:19:34.386676 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-28 00:19:34.389446 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-28 00:19:35.251362 |
2025-07-28 00:19:35.251530 | PLAY [Base post]
2025-07-28 00:19:35.275105 |
2025-07-28 00:19:35.275383 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-07-28 00:19:36.697505 | orchestrator | changed
2025-07-28 00:19:36.705766 |
2025-07-28 00:19:36.705881 | PLAY RECAP
2025-07-28 00:19:36.705945 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-07-28 00:19:36.706010 |
2025-07-28 00:19:36.826014 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-28 00:19:36.827052 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-07-28 00:19:37.680951 |
2025-07-28 00:19:37.681161 | PLAY [Base post-logs]
2025-07-28 00:19:37.692468 |
2025-07-28 00:19:37.692629 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-07-28 00:19:38.160413 | localhost | changed
2025-07-28 00:19:38.177715 |
2025-07-28 00:19:38.177926 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-07-28 00:19:38.216197 | localhost | ok
2025-07-28 00:19:38.222773 |
2025-07-28 00:19:38.222973 | TASK [Set zuul-log-path fact]
2025-07-28 00:19:38.240359 | localhost | ok
2025-07-28 00:19:38.252606 |
2025-07-28 00:19:38.252775 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-28 00:19:38.291664 | localhost | ok
2025-07-28 00:19:38.298594 |
2025-07-28 00:19:38.298771 | TASK [upload-logs : Create log directories]
2025-07-28 00:19:38.874312 | localhost | changed
2025-07-28 00:19:38.878501 |
2025-07-28 00:19:38.878642 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-07-28 00:19:39.402011 | localhost -> localhost | ok: Runtime: 0:00:00.006547
2025-07-28 00:19:39.411232 |
2025-07-28 00:19:39.411448 | TASK [upload-logs : Upload logs to log server]
2025-07-28 00:19:39.983809 | localhost | Output suppressed because no_log was given
2025-07-28 00:19:39.988045 |
2025-07-28 00:19:39.988220 | LOOP [upload-logs : Compress console log and json output]
2025-07-28 00:19:40.046331 | localhost | skipping: Conditional result was False
2025-07-28 00:19:40.051172 | localhost | skipping: Conditional result was False
2025-07-28 00:19:40.065082 |
2025-07-28 00:19:40.065368 | LOOP [upload-logs : Upload compressed console log and json output]
2025-07-28 00:19:40.112677 | localhost | skipping: Conditional result was False
2025-07-28 00:19:40.113284 |
2025-07-28 00:19:40.117884 | localhost | skipping: Conditional result was False
2025-07-28 00:19:40.130591 |
2025-07-28 00:19:40.130882 | LOOP [upload-logs : Upload console log and json output]