2025-07-23 00:00:11.543727 | Job console starting
2025-07-23 00:00:11.554964 | Updating git repos
2025-07-23 00:00:11.738825 | Cloning repos into workspace
2025-07-23 00:00:12.001953 | Restoring repo states
2025-07-23 00:00:12.040515 | Merging changes
2025-07-23 00:00:12.040532 | Checking out repos
2025-07-23 00:00:12.619173 | Preparing playbooks
2025-07-23 00:00:13.151957 | Running Ansible setup
2025-07-23 00:00:19.697372 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-07-23 00:00:21.475577 |
2025-07-23 00:00:21.475692 | PLAY [Base pre]
2025-07-23 00:00:21.525178 |
2025-07-23 00:00:21.525299 | TASK [Setup log path fact]
2025-07-23 00:00:21.572207 | orchestrator | ok
2025-07-23 00:00:21.648363 |
2025-07-23 00:00:21.648530 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-23 00:00:21.682393 | orchestrator | ok
2025-07-23 00:00:21.714970 |
2025-07-23 00:00:21.715095 | TASK [emit-job-header : Print job information]
2025-07-23 00:00:21.804958 | # Job Information
2025-07-23 00:00:21.805130 | Ansible Version: 2.16.14
2025-07-23 00:00:21.805166 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-07-23 00:00:21.805199 | Pipeline: periodic-midnight
2025-07-23 00:00:21.805221 | Executor: 521e9411259a
2025-07-23 00:00:21.805241 | Triggered by: https://github.com/osism/testbed
2025-07-23 00:00:21.805272 | Event ID: 6e9186d3d91248c180879bb2c097748a
2025-07-23 00:00:21.820398 |
2025-07-23 00:00:21.820516 | LOOP [emit-job-header : Print node information]
2025-07-23 00:00:22.036228 | orchestrator | ok:
2025-07-23 00:00:22.036458 | orchestrator | # Node Information
2025-07-23 00:00:22.036488 | orchestrator | Inventory Hostname: orchestrator
2025-07-23 00:00:22.036509 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-07-23 00:00:22.036527 | orchestrator | Username: zuul-testbed05
2025-07-23 00:00:22.036543 | orchestrator | Distro: Debian 12.11
2025-07-23 00:00:22.036562 | orchestrator | Provider: static-testbed
2025-07-23 00:00:22.036580 | orchestrator | Region:
2025-07-23 00:00:22.036597 | orchestrator | Label: testbed-orchestrator
2025-07-23 00:00:22.036621 | orchestrator | Product Name: OpenStack Nova
2025-07-23 00:00:22.036642 | orchestrator | Interface IP: 81.163.193.140
2025-07-23 00:00:22.049474 |
2025-07-23 00:00:22.049583 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-07-23 00:00:22.906699 | orchestrator -> localhost | changed
2025-07-23 00:00:22.913943 |
2025-07-23 00:00:22.914034 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-07-23 00:00:25.259672 | orchestrator -> localhost | changed
2025-07-23 00:00:25.280940 |
2025-07-23 00:00:25.281050 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-07-23 00:00:25.715812 | orchestrator -> localhost | ok
2025-07-23 00:00:25.722086 |
2025-07-23 00:00:25.722179 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-07-23 00:00:25.739473 | orchestrator | ok
2025-07-23 00:00:25.756913 | orchestrator | included: /var/lib/zuul/builds/a3993629ed3b48dd8b53a7af2a9a5d47/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-07-23 00:00:25.767837 |
2025-07-23 00:00:25.767920 | TASK [add-build-sshkey : Create Temp SSH key]
2025-07-23 00:00:28.637255 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-07-23 00:00:28.637438 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/a3993629ed3b48dd8b53a7af2a9a5d47/work/a3993629ed3b48dd8b53a7af2a9a5d47_id_rsa
2025-07-23 00:00:28.637470 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/a3993629ed3b48dd8b53a7af2a9a5d47/work/a3993629ed3b48dd8b53a7af2a9a5d47_id_rsa.pub
2025-07-23 00:00:28.637491 | orchestrator -> localhost | The key fingerprint is:
2025-07-23 00:00:28.637511 | orchestrator -> localhost | SHA256:wBmCBfn/jY8+aMn37tYfOQm7ZK8pP2OJa7GII9cj6og zuul-build-sshkey
2025-07-23 00:00:28.637531 | orchestrator -> localhost | The key's randomart image is:
2025-07-23 00:00:28.637562 | orchestrator -> localhost | +---[RSA 3072]----+
2025-07-23 00:00:28.637580 | orchestrator -> localhost | | .=o . |
2025-07-23 00:00:28.637599 | orchestrator -> localhost | | o o o |
2025-07-23 00:00:28.637616 | orchestrator -> localhost | | . + |
2025-07-23 00:00:28.637633 | orchestrator -> localhost | | . . |
2025-07-23 00:00:28.637649 | orchestrator -> localhost | | . S . |
2025-07-23 00:00:28.637671 | orchestrator -> localhost | | . . o o |
2025-07-23 00:00:28.637689 | orchestrator -> localhost | | . * + =+.= |
2025-07-23 00:00:28.637707 | orchestrator -> localhost | | . .. X O.Bo*+ o |
2025-07-23 00:00:28.637725 | orchestrator -> localhost | |E ..o= =oX*===o |
2025-07-23 00:00:28.637742 | orchestrator -> localhost | +----[SHA256]-----+
2025-07-23 00:00:28.637785 | orchestrator -> localhost | ok: Runtime: 0:00:01.764119
2025-07-23 00:00:28.643837 |
2025-07-23 00:00:28.643924 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-07-23 00:00:28.681494 | orchestrator | ok
2025-07-23 00:00:28.712591 | orchestrator | included: /var/lib/zuul/builds/a3993629ed3b48dd8b53a7af2a9a5d47/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-07-23 00:00:28.727528 |
2025-07-23 00:00:28.727624 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-07-23 00:00:28.761179 | orchestrator | skipping: Conditional result was False
2025-07-23 00:00:28.767887 |
2025-07-23 00:00:28.767986 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-07-23 00:00:29.596426 | orchestrator | changed
2025-07-23 00:00:29.601505 |
2025-07-23 00:00:29.601580 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-07-23 00:00:29.893017 | orchestrator | ok
2025-07-23 00:00:29.903091 |
2025-07-23 00:00:29.903184 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-07-23 00:00:30.453492 | orchestrator | ok
2025-07-23 00:00:30.458373 |
2025-07-23 00:00:30.458471 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-07-23 00:00:30.944330 | orchestrator | ok
2025-07-23 00:00:30.964015 |
2025-07-23 00:00:30.964121 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-07-23 00:00:30.999208 | orchestrator | skipping: Conditional result was False
2025-07-23 00:00:31.004803 |
2025-07-23 00:00:31.004894 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-07-23 00:00:31.881402 | orchestrator -> localhost | changed
2025-07-23 00:00:31.892570 |
2025-07-23 00:00:31.892658 | TASK [add-build-sshkey : Add back temp key]
2025-07-23 00:00:32.977331 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/a3993629ed3b48dd8b53a7af2a9a5d47/work/a3993629ed3b48dd8b53a7af2a9a5d47_id_rsa (zuul-build-sshkey)
2025-07-23 00:00:32.977558 | orchestrator -> localhost | ok: Runtime: 0:00:00.021472
2025-07-23 00:00:32.983520 |
2025-07-23 00:00:32.983620 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-07-23 00:00:33.506664 | orchestrator | ok
2025-07-23 00:00:33.511632 |
2025-07-23 00:00:33.511715 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-07-23 00:00:33.540307 | orchestrator | skipping: Conditional result was False
2025-07-23 00:00:33.632599 |
2025-07-23 00:00:33.632693 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-07-23 00:00:34.150678 | orchestrator | ok
2025-07-23 00:00:34.169990 |
2025-07-23 00:00:34.170091 | TASK [validate-host : Define zuul_info_dir fact]
2025-07-23 00:00:34.245957 | orchestrator | ok
2025-07-23 00:00:34.254775 |
2025-07-23 00:00:34.254878 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-07-23 00:00:34.879775 | orchestrator -> localhost | ok
2025-07-23 00:00:34.886024 |
2025-07-23 00:00:34.886110 | TASK [validate-host : Collect information about the host]
2025-07-23 00:00:36.098274 | orchestrator | ok
2025-07-23 00:00:36.123764 |
2025-07-23 00:00:36.123883 | TASK [validate-host : Sanitize hostname]
2025-07-23 00:00:36.207404 | orchestrator | ok
2025-07-23 00:00:36.212711 |
2025-07-23 00:00:36.212806 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-07-23 00:00:36.955127 | orchestrator -> localhost | changed
2025-07-23 00:00:36.961410 |
2025-07-23 00:00:36.961514 | TASK [validate-host : Collect information about zuul worker]
2025-07-23 00:00:37.399895 | orchestrator | ok
2025-07-23 00:00:37.407089 |
2025-07-23 00:00:37.407198 | TASK [validate-host : Write out all zuul information for each host]
2025-07-23 00:00:38.408310 | orchestrator -> localhost | changed
2025-07-23 00:00:38.419346 |
2025-07-23 00:00:38.419452 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-07-23 00:00:38.723354 | orchestrator | ok
2025-07-23 00:00:38.729077 |
2025-07-23 00:00:38.729185 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-07-23 00:01:16.913121 | orchestrator | changed:
2025-07-23 00:01:16.913385 | orchestrator | .d..t...... src/
2025-07-23 00:01:16.913426 | orchestrator | .d..t...... src/github.com/
2025-07-23 00:01:16.913454 | orchestrator | .d..t...... src/github.com/osism/
2025-07-23 00:01:16.913477 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-07-23 00:01:16.913499 | orchestrator | RedHat.yml
2025-07-23 00:01:16.927169 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-07-23 00:01:16.927362 | orchestrator | RedHat.yml
2025-07-23 00:01:16.927429 | orchestrator | = 1.53.0"...
2025-07-23 00:01:31.421537 | orchestrator | 00:01:31.421 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-07-23 00:01:31.450576 | orchestrator | 00:01:31.450 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-07-23 00:01:31.942154 | orchestrator | 00:01:31.941 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-07-23 00:01:32.813422 | orchestrator | 00:01:32.813 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-07-23 00:01:33.215323 | orchestrator | 00:01:33.215 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-07-23 00:01:33.936186 | orchestrator | 00:01:33.935 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-07-23 00:01:34.012572 | orchestrator | 00:01:34.012 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-07-23 00:01:34.502838 | orchestrator | 00:01:34.502 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-07-23 00:01:34.502917 | orchestrator | 00:01:34.502 STDOUT terraform: Providers are signed by their developers.
2025-07-23 00:01:34.502924 | orchestrator | 00:01:34.502 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-07-23 00:01:34.502954 | orchestrator | 00:01:34.502 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-07-23 00:01:34.503045 | orchestrator | 00:01:34.502 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-07-23 00:01:34.503107 | orchestrator | 00:01:34.503 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-07-23 00:01:34.503152 | orchestrator | 00:01:34.503 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-07-23 00:01:34.503169 | orchestrator | 00:01:34.503 STDOUT terraform: you run "tofu init" in the future.
2025-07-23 00:01:34.503864 | orchestrator | 00:01:34.503 STDOUT terraform: OpenTofu has been successfully initialized!
2025-07-23 00:01:34.503977 | orchestrator | 00:01:34.503 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-07-23 00:01:34.504004 | orchestrator | 00:01:34.503 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-07-23 00:01:34.504023 | orchestrator | 00:01:34.503 STDOUT terraform: should now work.
2025-07-23 00:01:34.504040 | orchestrator | 00:01:34.503 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-07-23 00:01:34.504064 | orchestrator | 00:01:34.503 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-07-23 00:01:34.504100 | orchestrator | 00:01:34.503 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-07-23 00:01:34.618374 | orchestrator | 00:01:34.616 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-07-23 00:01:34.618496 | orchestrator | 00:01:34.616 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-07-23 00:01:34.896555 | orchestrator | 00:01:34.896 STDOUT terraform: Created and switched to workspace "ci"!
2025-07-23 00:01:34.896617 | orchestrator | 00:01:34.896 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-07-23 00:01:34.896624 | orchestrator | 00:01:34.896 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-07-23 00:01:34.896629 | orchestrator | 00:01:34.896 STDOUT terraform: for this configuration.
2025-07-23 00:01:35.039959 | orchestrator | 00:01:35.038 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-07-23 00:01:35.040050 | orchestrator | 00:01:35.038 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-07-23 00:01:35.183111 | orchestrator | 00:01:35.182 STDOUT terraform: ci.auto.tfvars
2025-07-23 00:01:35.186034 | orchestrator | 00:01:35.185 STDOUT terraform: default_custom.tf
2025-07-23 00:01:35.357148 | orchestrator | 00:01:35.357 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
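The repeated Terragrunt warnings above suggest a straightforward migration; the sketch below is an assumption based solely on the warning text (the path comes from the log, and `TG_TF_PATH` is assumed to behave identically to the deprecated variable):

```shell
# Deprecated form (still honored by Terragrunt, but emits a WARN):
#   TERRAGRUNT_TFPATH=/home/zuul-testbed05/terraform terragrunt workspace new ci
# Replacement suggested by the warnings:
export TG_TF_PATH=/home/zuul-testbed05/terraform
# The deprecated shorthand subcommands move under `terragrunt run --`:
#   terragrunt run -- workspace new ci
#   terragrunt run -- fmt
echo "TG_TF_PATH=${TG_TF_PATH}"
```

Only the environment variable name and the `run --` wrapper change; the underlying `tofu` invocation stays the same.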
2025-07-23 00:01:36.537123 | orchestrator | 00:01:36.536 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-07-23 00:01:37.071652 | orchestrator | 00:01:37.071 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-07-23 00:01:37.323785 | orchestrator | 00:01:37.323 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-07-23 00:01:37.323875 | orchestrator | 00:01:37.323 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-07-23 00:01:37.323888 | orchestrator | 00:01:37.323 STDOUT terraform:   + create
2025-07-23 00:01:37.323897 | orchestrator | 00:01:37.323 STDOUT terraform:  <= read (data resources)
2025-07-23 00:01:37.323906 | orchestrator | 00:01:37.323 STDOUT terraform: OpenTofu will perform the following actions:
2025-07-23 00:01:37.323915 | orchestrator | 00:01:37.323 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-07-23 00:01:37.323922 | orchestrator | 00:01:37.323 STDOUT terraform:   # (config refers to values not yet known)
2025-07-23 00:01:37.323930 | orchestrator | 00:01:37.323 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-07-23 00:01:37.323962 | orchestrator | 00:01:37.323 STDOUT terraform:       + checksum    = (known after apply)
2025-07-23 00:01:37.323992 | orchestrator | 00:01:37.323 STDOUT terraform:       + created_at  = (known after apply)
2025-07-23 00:01:37.324020 | orchestrator | 00:01:37.323 STDOUT terraform:       + file        = (known after apply)
2025-07-23 00:01:37.324051 | orchestrator | 00:01:37.324 STDOUT terraform:       + id          = (known after apply)
2025-07-23 00:01:37.324082 | orchestrator | 00:01:37.324 STDOUT terraform:       + metadata    = (known after apply)
2025-07-23 00:01:37.324118 | orchestrator | 00:01:37.324 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-07-23 00:01:37.324158 | orchestrator | 00:01:37.324 STDOUT terraform:       + min_ram_mb  = (known after apply)
2025-07-23 00:01:37.324180 | orchestrator | 00:01:37.324 STDOUT terraform:       + most_recent = true
2025-07-23 00:01:37.324210 | orchestrator | 00:01:37.324 STDOUT terraform:       + name        = (known after apply)
2025-07-23 00:01:37.324240 | orchestrator | 00:01:37.324 STDOUT terraform:       + protected   = (known after apply)
2025-07-23 00:01:37.324271 | orchestrator | 00:01:37.324 STDOUT terraform:       + region      = (known after apply)
2025-07-23 00:01:37.324304 | orchestrator | 00:01:37.324 STDOUT terraform:       + schema      = (known after apply)
2025-07-23 00:01:37.324334 | orchestrator | 00:01:37.324 STDOUT terraform:       + size_bytes  = (known after apply)
2025-07-23 00:01:37.324360 | orchestrator | 00:01:37.324 STDOUT terraform:       + tags        = (known after apply)
2025-07-23 00:01:37.324389 | orchestrator | 00:01:37.324 STDOUT terraform:       + updated_at  = (known after apply)
2025-07-23 00:01:37.324403 | orchestrator | 00:01:37.324 STDOUT terraform:     }
2025-07-23 00:01:37.324480 | orchestrator | 00:01:37.324 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-07-23 00:01:37.324520 | orchestrator | 00:01:37.324 STDOUT terraform:   # (config refers to values not yet known)
2025-07-23 00:01:37.324556 | orchestrator | 00:01:37.324 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-07-23 00:01:37.324584 | orchestrator | 00:01:37.324 STDOUT terraform:       + checksum    = (known after apply)
2025-07-23 00:01:37.324612 | orchestrator | 00:01:37.324 STDOUT terraform:       + created_at  = (known after apply)
2025-07-23 00:01:37.324647 | orchestrator | 00:01:37.324 STDOUT terraform:       + file        = (known after apply)
2025-07-23 00:01:37.324670 | orchestrator | 00:01:37.324 STDOUT terraform:       + id          = (known after apply)
2025-07-23 00:01:37.324698 | orchestrator | 00:01:37.324 STDOUT terraform:       + metadata    = (known after apply)
2025-07-23 00:01:37.324725 | orchestrator | 00:01:37.324 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-07-23 00:01:37.324753 | orchestrator | 00:01:37.324 STDOUT terraform:       + min_ram_mb  = (known after apply)
2025-07-23 00:01:37.324772 | orchestrator | 00:01:37.324 STDOUT terraform:       + most_recent = true
2025-07-23 00:01:37.324802 | orchestrator | 00:01:37.324 STDOUT terraform:       + name        = (known after apply)
2025-07-23 00:01:37.324831 | orchestrator | 00:01:37.324 STDOUT terraform:       + protected   = (known after apply)
2025-07-23 00:01:37.324861 | orchestrator | 00:01:37.324 STDOUT terraform:       + region      = (known after apply)
2025-07-23 00:01:37.324889 | orchestrator | 00:01:37.324 STDOUT terraform:       + schema      = (known after apply)
2025-07-23 00:01:37.324917 | orchestrator | 00:01:37.324 STDOUT terraform:       + size_bytes  = (known after apply)
2025-07-23 00:01:37.324944 | orchestrator | 00:01:37.324 STDOUT terraform:       + tags        = (known after apply)
2025-07-23 00:01:37.324972 | orchestrator | 00:01:37.324 STDOUT terraform:       + updated_at  = (known after apply)
2025-07-23 00:01:37.324985 | orchestrator | 00:01:37.324 STDOUT terraform:     }
2025-07-23 00:01:37.325027 | orchestrator | 00:01:37.324 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-07-23 00:01:37.325055 | orchestrator | 00:01:37.325 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-07-23 00:01:37.325089 | orchestrator | 00:01:37.325 STDOUT terraform:       + content              = (known after apply)
2025-07-23 00:01:37.325125 | orchestrator | 00:01:37.325 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-23 00:01:37.325158 | orchestrator | 00:01:37.325 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-23 00:01:37.325193 | orchestrator | 00:01:37.325 STDOUT terraform:       + content_md5          = (known after apply)
2025-07-23 00:01:37.325227 | orchestrator | 00:01:37.325 STDOUT terraform:       + content_sha1         = (known after apply)
2025-07-23 00:01:37.325259 | orchestrator | 00:01:37.325 STDOUT terraform:       + content_sha256       = (known after apply)
2025-07-23 00:01:37.325293 | orchestrator | 00:01:37.325 STDOUT terraform:       + content_sha512       = (known after apply)
2025-07-23 00:01:37.325318 | orchestrator | 00:01:37.325 STDOUT terraform:       + directory_permission = "0777"
2025-07-23 00:01:37.325340 | orchestrator | 00:01:37.325 STDOUT terraform:       + file_permission      = "0644"
2025-07-23 00:01:37.325374 | orchestrator | 00:01:37.325 STDOUT terraform:       + filename             = ".MANAGER_ADDRESS.ci"
2025-07-23 00:01:37.325409 | orchestrator | 00:01:37.325 STDOUT terraform:       + id                   = (known after apply)
2025-07-23 00:01:37.325424 | orchestrator | 00:01:37.325 STDOUT terraform:     }
2025-07-23 00:01:37.325463 | orchestrator | 00:01:37.325 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-07-23 00:01:37.325485 | orchestrator | 00:01:37.325 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-07-23 00:01:37.325519 | orchestrator | 00:01:37.325 STDOUT terraform:       + content              = (known after apply)
2025-07-23 00:01:37.325554 | orchestrator | 00:01:37.325 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-23 00:01:37.325587 | orchestrator | 00:01:37.325 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-23 00:01:37.325621 | orchestrator | 00:01:37.325 STDOUT terraform:       + content_md5          = (known after apply)
2025-07-23 00:01:37.325662 | orchestrator | 00:01:37.325 STDOUT terraform:       + content_sha1         = (known after apply)
2025-07-23 00:01:37.325705 | orchestrator | 00:01:37.325 STDOUT terraform:       + content_sha256       = (known after apply)
2025-07-23 00:01:37.325742 | orchestrator | 00:01:37.325 STDOUT terraform:       + content_sha512       = (known after apply)
2025-07-23 00:01:37.325761 | orchestrator | 00:01:37.325 STDOUT terraform:       + directory_permission = "0777"
2025-07-23 00:01:37.325784 | orchestrator | 00:01:37.325 STDOUT terraform:       + file_permission      = "0644"
2025-07-23 00:01:37.325815 | orchestrator | 00:01:37.325 STDOUT terraform:       + filename             = ".id_rsa.ci.pub"
2025-07-23 00:01:37.325849 | orchestrator | 00:01:37.325 STDOUT terraform:       + id                   = (known after apply)
2025-07-23 00:01:37.325863 | orchestrator | 00:01:37.325 STDOUT terraform:     }
2025-07-23 00:01:37.325887 | orchestrator | 00:01:37.325 STDOUT terraform:   # local_file.inventory will be created
2025-07-23 00:01:37.325912 | orchestrator | 00:01:37.325 STDOUT terraform:   + resource "local_file" "inventory" {
2025-07-23 00:01:37.325948 | orchestrator | 00:01:37.325 STDOUT terraform:       + content              = (known after apply)
2025-07-23 00:01:37.325983 | orchestrator | 00:01:37.325 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-23 00:01:37.326032 | orchestrator | 00:01:37.325 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-23 00:01:37.326068 | orchestrator | 00:01:37.326 STDOUT terraform:       + content_md5          = (known after apply)
2025-07-23 00:01:37.326103 | orchestrator | 00:01:37.326 STDOUT terraform:       + content_sha1         = (known after apply)
2025-07-23 00:01:37.326137 | orchestrator | 00:01:37.326 STDOUT terraform:       + content_sha256       = (known after apply)
2025-07-23 00:01:37.326172 | orchestrator | 00:01:37.326 STDOUT terraform:       + content_sha512       = (known after apply)
2025-07-23 00:01:37.326197 | orchestrator | 00:01:37.326 STDOUT terraform:       + directory_permission = "0777"
2025-07-23 00:01:37.326220 | orchestrator | 00:01:37.326 STDOUT terraform:       + file_permission      = "0644"
2025-07-23 00:01:37.326249 | orchestrator | 00:01:37.326 STDOUT terraform:       + filename             = "inventory.ci"
2025-07-23 00:01:37.326288 | orchestrator | 00:01:37.326 STDOUT terraform:       + id                   = (known after apply)
2025-07-23 00:01:37.326294 | orchestrator | 00:01:37.326 STDOUT terraform:     }
2025-07-23 00:01:37.326325 | orchestrator | 00:01:37.326 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-07-23 00:01:37.326353 | orchestrator | 00:01:37.326 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-07-23 00:01:37.326385 | orchestrator | 00:01:37.326 STDOUT terraform:       + content              = (sensitive value)
2025-07-23 00:01:37.326419 | orchestrator | 00:01:37.326 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-23 00:01:37.326489 | orchestrator | 00:01:37.326 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-23 00:01:37.326513 | orchestrator | 00:01:37.326 STDOUT terraform:       + content_md5          = (known after apply)
2025-07-23 00:01:37.326546 | orchestrator | 00:01:37.326 STDOUT terraform:       + content_sha1         = (known after apply)
2025-07-23 00:01:37.326582 | orchestrator | 00:01:37.326 STDOUT terraform:       + content_sha256       = (known after apply)
2025-07-23 00:01:37.326616 | orchestrator | 00:01:37.326 STDOUT terraform:       + content_sha512       = (known after apply)
2025-07-23 00:01:37.326640 | orchestrator | 00:01:37.326 STDOUT terraform:       + directory_permission = "0700"
2025-07-23 00:01:37.326665 | orchestrator | 00:01:37.326 STDOUT terraform:       + file_permission      = "0600"
2025-07-23 00:01:37.326695 | orchestrator | 00:01:37.326 STDOUT terraform:       + filename             = ".id_rsa.ci"
2025-07-23 00:01:37.326736 | orchestrator | 00:01:37.326 STDOUT terraform:       + id                   = (known after apply)
2025-07-23 00:01:37.326743 | orchestrator | 00:01:37.326 STDOUT terraform:     }
2025-07-23 00:01:37.326774 | orchestrator | 00:01:37.326 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-07-23 00:01:37.326813 | orchestrator | 00:01:37.326 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-07-23 00:01:37.326845 | orchestrator | 00:01:37.326 STDOUT terraform:       + id = (known after apply)
2025-07-23 00:01:37.326866 | orchestrator | 00:01:37.326 STDOUT terraform:     }
2025-07-23 00:01:37.326916 | orchestrator | 00:01:37.326 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-07-23 00:01:37.326959 | orchestrator | 00:01:37.326 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-07-23 00:01:37.326994 | orchestrator | 00:01:37.326 STDOUT terraform:       + attachment           = (known after apply)
2025-07-23 00:01:37.327020 | orchestrator | 00:01:37.326 STDOUT terraform:       + availability_zone    = "nova"
2025-07-23 00:01:37.327056 | orchestrator | 00:01:37.327 STDOUT terraform:       + id                   = (known after apply)
2025-07-23 00:01:37.327091 | orchestrator | 00:01:37.327 STDOUT terraform:       + image_id             = (known after apply)
2025-07-23 00:01:37.327126 | orchestrator | 00:01:37.327 STDOUT terraform:       + metadata             = (known after apply)
2025-07-23 00:01:37.327169 | orchestrator | 00:01:37.327 STDOUT terraform:       + name                 = "testbed-volume-manager-base"
2025-07-23 00:01:37.327205 | orchestrator | 00:01:37.327 STDOUT terraform:       + region               = (known after apply)
2025-07-23 00:01:37.327227 | orchestrator | 00:01:37.327 STDOUT terraform:       + size                 = 80
2025-07-23 00:01:37.327252 | orchestrator | 00:01:37.327 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-23 00:01:37.327275 | orchestrator | 00:01:37.327 STDOUT terraform:       + volume_type          = "ssd"
2025-07-23 00:01:37.327281 | orchestrator | 00:01:37.327 STDOUT terraform:     }
2025-07-23 00:01:37.327379 | orchestrator | 00:01:37.327 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-07-23 00:01:37.327422 | orchestrator | 00:01:37.327 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-23 00:01:37.327487 | orchestrator | 00:01:37.327 STDOUT terraform:       + attachment           = (known after apply)
2025-07-23 00:01:37.327511 | orchestrator | 00:01:37.327 STDOUT terraform:       + availability_zone    = "nova"
2025-07-23 00:01:37.327547 | orchestrator | 00:01:37.327 STDOUT terraform:       + id                   = (known after apply)
2025-07-23 00:01:37.327582 | orchestrator | 00:01:37.327 STDOUT terraform:       + image_id             = (known after apply)
2025-07-23 00:01:37.327618 | orchestrator | 00:01:37.327 STDOUT terraform:       + metadata             = (known after apply)
2025-07-23 00:01:37.327662 | orchestrator | 00:01:37.327 STDOUT terraform:       + name                 = "testbed-volume-0-node-base"
2025-07-23 00:01:37.327698 | orchestrator | 00:01:37.327 STDOUT terraform:       + region               = (known after apply)
2025-07-23 00:01:37.327720 | orchestrator | 00:01:37.327 STDOUT terraform:       + size                 = 80
2025-07-23 00:01:37.327744 | orchestrator | 00:01:37.327 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-23 00:01:37.327768 | orchestrator | 00:01:37.327 STDOUT terraform:       + volume_type          = "ssd"
2025-07-23 00:01:37.327784 | orchestrator | 00:01:37.327 STDOUT terraform:     }
2025-07-23 00:01:37.327829 | orchestrator | 00:01:37.327 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-07-23 00:01:37.327876 | orchestrator | 00:01:37.327 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-23 00:01:37.327911 | orchestrator | 00:01:37.327 STDOUT terraform:       + attachment           = (known after apply)
2025-07-23 00:01:37.327934 | orchestrator | 00:01:37.327 STDOUT terraform:       + availability_zone    = "nova"
2025-07-23 00:01:37.327970 | orchestrator | 00:01:37.327 STDOUT terraform:       + id                   = (known after apply)
2025-07-23 00:01:37.328003 | orchestrator | 00:01:37.327 STDOUT terraform:       + image_id             = (known after apply)
2025-07-23 00:01:37.328037 | orchestrator | 00:01:37.328 STDOUT terraform:       + metadata             = (known after apply)
2025-07-23 00:01:37.328083 | orchestrator | 00:01:37.328 STDOUT terraform:       + name                 = "testbed-volume-1-node-base"
2025-07-23 00:01:37.328114 | orchestrator | 00:01:37.328 STDOUT terraform:       + region               = (known after apply)
2025-07-23 00:01:37.328136 | orchestrator | 00:01:37.328 STDOUT terraform:       + size                 = 80
2025-07-23 00:01:37.328161 | orchestrator | 00:01:37.328 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-23 00:01:37.328183 | orchestrator | 00:01:37.328 STDOUT terraform:       + volume_type          = "ssd"
2025-07-23 00:01:37.328189 | orchestrator | 00:01:37.328 STDOUT terraform:     }
2025-07-23 00:01:37.328238 | orchestrator | 00:01:37.328 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-07-23 00:01:37.328282 | orchestrator | 00:01:37.328 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-23 00:01:37.328318 | orchestrator | 00:01:37.328 STDOUT terraform:       + attachment           = (known after apply)
2025-07-23 00:01:37.328339 | orchestrator | 00:01:37.328 STDOUT terraform:       + availability_zone    = "nova"
2025-07-23 00:01:37.328374 | orchestrator | 00:01:37.328 STDOUT terraform:       + id                   = (known after apply)
2025-07-23 00:01:37.328409 | orchestrator | 00:01:37.328 STDOUT terraform:       + image_id             = (known after apply)
2025-07-23 00:01:37.328463 | orchestrator | 00:01:37.328 STDOUT terraform:       + metadata             = (known after apply)
2025-07-23 00:01:37.328517 | orchestrator | 00:01:37.328 STDOUT terraform:       + name                 = "testbed-volume-2-node-base"
2025-07-23 00:01:37.328551 | orchestrator | 00:01:37.328 STDOUT terraform:       + region               = (known after apply)
2025-07-23 00:01:37.328602 | orchestrator | 00:01:37.328 STDOUT terraform:       + size                 = 80
2025-07-23 00:01:37.328627 | orchestrator | 00:01:37.328 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-23 00:01:37.328652 | orchestrator | 00:01:37.328 STDOUT terraform:       + volume_type          = "ssd"
2025-07-23 00:01:37.328666 | orchestrator | 00:01:37.328 STDOUT terraform:     }
2025-07-23 00:01:37.328711 | orchestrator | 00:01:37.328 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-07-23 00:01:37.328755 | orchestrator | 00:01:37.328 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-23 00:01:37.328789 | orchestrator | 00:01:37.328 STDOUT terraform:       + attachment           = (known after apply)
2025-07-23 00:01:37.328827 | orchestrator | 00:01:37.328 STDOUT terraform:       + availability_zone    = "nova"
2025-07-23 00:01:37.328863 | orchestrator | 00:01:37.328 STDOUT terraform:       + id                   = (known after apply)
2025-07-23 00:01:37.328906 | orchestrator | 00:01:37.328 STDOUT terraform:       + image_id             = (known after apply)
2025-07-23 00:01:37.328941 | orchestrator | 00:01:37.328 STDOUT terraform:       + metadata             = (known after apply)
2025-07-23 00:01:37.328985 | orchestrator | 00:01:37.328 STDOUT terraform:       + name                 = "testbed-volume-3-node-base"
2025-07-23 00:01:37.329021 | orchestrator | 00:01:37.328 STDOUT terraform:       + region               = (known after apply)
2025-07-23 00:01:37.329040 | orchestrator | 00:01:37.329 STDOUT terraform:       + size                 = 80
2025-07-23 00:01:37.329063 | orchestrator | 00:01:37.329 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-23 00:01:37.329086 | orchestrator | 00:01:37.329 STDOUT terraform:       + volume_type          = "ssd"
2025-07-23 00:01:37.329102 | orchestrator | 00:01:37.329 STDOUT terraform:     }
2025-07-23 00:01:37.329146 | orchestrator | 00:01:37.329 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-07-23 00:01:37.329190 | orchestrator | 00:01:37.329 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-23 00:01:37.329223 | orchestrator | 00:01:37.329 STDOUT terraform:       + attachment           = (known after apply)
2025-07-23 00:01:37.329248 | orchestrator | 00:01:37.329 STDOUT terraform:       + availability_zone    = "nova"
2025-07-23 00:01:37.329284 | orchestrator | 00:01:37.329 STDOUT terraform:       + id                   = (known after apply)
2025-07-23 00:01:37.329318 | orchestrator | 00:01:37.329 STDOUT terraform:       + image_id             = (known after apply)
2025-07-23 00:01:37.329352 | orchestrator | 00:01:37.329 STDOUT terraform:       + metadata             = (known after apply)
2025-07-23 00:01:37.329393 | orchestrator | 00:01:37.329 STDOUT terraform:       + name                 = "testbed-volume-4-node-base"
2025-07-23 00:01:37.329428 | orchestrator | 00:01:37.329 STDOUT terraform:       + region               = (known after apply)
2025-07-23 00:01:37.329482 | orchestrator | 00:01:37.329 STDOUT terraform:       + size                 = 80
2025-07-23 00:01:37.329507 | orchestrator | 00:01:37.329 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-23 00:01:37.329532 | orchestrator | 00:01:37.329 STDOUT terraform:       + volume_type          = "ssd"
2025-07-23 00:01:37.329547 | orchestrator | 00:01:37.329 STDOUT terraform:     }
2025-07-23 00:01:37.329597 | orchestrator | 00:01:37.329 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-07-23 00:01:37.329640 | orchestrator | 00:01:37.329 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-23 00:01:37.329673 | orchestrator | 00:01:37.329 STDOUT terraform:       + attachment           = (known after apply)
2025-07-23 00:01:37.329698 | orchestrator | 00:01:37.329 STDOUT terraform:       + availability_zone    = "nova"
2025-07-23 00:01:37.329739 | orchestrator | 00:01:37.329 STDOUT terraform:       + id                   = (known after apply)
2025-07-23 00:01:37.329771 | orchestrator | 00:01:37.329 STDOUT terraform:       + image_id             = (known after apply)
2025-07-23 00:01:37.329805 | orchestrator | 00:01:37.329 STDOUT terraform:       + metadata             = (known after apply)
2025-07-23 00:01:37.329846 | orchestrator | 00:01:37.329 STDOUT terraform:       + name                 = "testbed-volume-5-node-base"
2025-07-23 00:01:37.329882 | orchestrator | 00:01:37.329 STDOUT terraform:       + region               = (known after apply)
2025-07-23 00:01:37.329900 | orchestrator | 00:01:37.329 STDOUT terraform:       + size                 = 80
2025-07-23 00:01:37.329923 | orchestrator | 00:01:37.329 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-23 00:01:37.329973 | orchestrator | 00:01:37.329 STDOUT terraform:       + volume_type          = "ssd"
2025-07-23 00:01:37.329979 | orchestrator | 00:01:37.329 STDOUT terraform:     }
2025-07-23 00:01:37.329999 | orchestrator | 00:01:37.329 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-07-23 00:01:37.330058 | orchestrator | 00:01:37.329 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-07-23 00:01:37.330093 | orchestrator | 00:01:37.330 STDOUT
terraform:  + attachment = (known after apply) 2025-07-23 00:01:37.330119 | orchestrator | 00:01:37.330 STDOUT terraform:  + availability_zone = "nova" 2025-07-23 00:01:37.330156 | orchestrator | 00:01:37.330 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.330193 | orchestrator | 00:01:37.330 STDOUT terraform:  + metadata = (known after apply) 2025-07-23 00:01:37.330230 | orchestrator | 00:01:37.330 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-07-23 00:01:37.330264 | orchestrator | 00:01:37.330 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.330286 | orchestrator | 00:01:37.330 STDOUT terraform:  + size = 20 2025-07-23 00:01:37.330311 | orchestrator | 00:01:37.330 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-23 00:01:37.330335 | orchestrator | 00:01:37.330 STDOUT terraform:  + volume_type = "ssd" 2025-07-23 00:01:37.330349 | orchestrator | 00:01:37.330 STDOUT terraform:  } 2025-07-23 00:01:37.330393 | orchestrator | 00:01:37.330 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-07-23 00:01:37.330435 | orchestrator | 00:01:37.330 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-23 00:01:37.330482 | orchestrator | 00:01:37.330 STDOUT terraform:  + attachment = (known after apply) 2025-07-23 00:01:37.330504 | orchestrator | 00:01:37.330 STDOUT terraform:  + availability_zone = "nova" 2025-07-23 00:01:37.330539 | orchestrator | 00:01:37.330 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.330574 | orchestrator | 00:01:37.330 STDOUT terraform:  + metadata = (known after apply) 2025-07-23 00:01:37.330610 | orchestrator | 00:01:37.330 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-07-23 00:01:37.330644 | orchestrator | 00:01:37.330 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.330666 | orchestrator | 00:01:37.330 STDOUT terraform:  + size = 20 2025-07-23 00:01:37.330691 | 
orchestrator | 00:01:37.330 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-23 00:01:37.330713 | orchestrator | 00:01:37.330 STDOUT terraform:  + volume_type = "ssd" 2025-07-23 00:01:37.330719 | orchestrator | 00:01:37.330 STDOUT terraform:  } 2025-07-23 00:01:37.330762 | orchestrator | 00:01:37.330 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-07-23 00:01:37.330803 | orchestrator | 00:01:37.330 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-23 00:01:37.330836 | orchestrator | 00:01:37.330 STDOUT terraform:  + attachment = (known after apply) 2025-07-23 00:01:37.330859 | orchestrator | 00:01:37.330 STDOUT terraform:  + availability_zone = "nova" 2025-07-23 00:01:37.330893 | orchestrator | 00:01:37.330 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.330927 | orchestrator | 00:01:37.330 STDOUT terraform:  + metadata = (known after apply) 2025-07-23 00:01:37.330972 | orchestrator | 00:01:37.330 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-07-23 00:01:37.331009 | orchestrator | 00:01:37.330 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.331028 | orchestrator | 00:01:37.331 STDOUT terraform:  + size = 20 2025-07-23 00:01:37.331053 | orchestrator | 00:01:37.331 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-23 00:01:37.331076 | orchestrator | 00:01:37.331 STDOUT terraform:  + volume_type = "ssd" 2025-07-23 00:01:37.331082 | orchestrator | 00:01:37.331 STDOUT terraform:  } 2025-07-23 00:01:37.331128 | orchestrator | 00:01:37.331 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-07-23 00:01:37.331168 | orchestrator | 00:01:37.331 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-23 00:01:37.331202 | orchestrator | 00:01:37.331 STDOUT terraform:  + attachment = (known after apply) 2025-07-23 00:01:37.331225 | orchestrator | 
00:01:37.331 STDOUT terraform:  + availability_zone = "nova" 2025-07-23 00:01:37.331260 | orchestrator | 00:01:37.331 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.331295 | orchestrator | 00:01:37.331 STDOUT terraform:  + metadata = (known after apply) 2025-07-23 00:01:37.331331 | orchestrator | 00:01:37.331 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-07-23 00:01:37.331368 | orchestrator | 00:01:37.331 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.331388 | orchestrator | 00:01:37.331 STDOUT terraform:  + size = 20 2025-07-23 00:01:37.331410 | orchestrator | 00:01:37.331 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-23 00:01:37.331433 | orchestrator | 00:01:37.331 STDOUT terraform:  + volume_type = "ssd" 2025-07-23 00:01:37.331439 | orchestrator | 00:01:37.331 STDOUT terraform:  } 2025-07-23 00:01:37.331497 | orchestrator | 00:01:37.331 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-07-23 00:01:37.331537 | orchestrator | 00:01:37.331 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-23 00:01:37.331571 | orchestrator | 00:01:37.331 STDOUT terraform:  + attachment = (known after apply) 2025-07-23 00:01:37.331593 | orchestrator | 00:01:37.331 STDOUT terraform:  + availability_zone = "nova" 2025-07-23 00:01:37.331630 | orchestrator | 00:01:37.331 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.331664 | orchestrator | 00:01:37.331 STDOUT terraform:  + metadata = (known after apply) 2025-07-23 00:01:37.331703 | orchestrator | 00:01:37.331 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-07-23 00:01:37.331735 | orchestrator | 00:01:37.331 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.331755 | orchestrator | 00:01:37.331 STDOUT terraform:  + size = 20 2025-07-23 00:01:37.331780 | orchestrator | 00:01:37.331 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-23 
00:01:37.331802 | orchestrator | 00:01:37.331 STDOUT terraform:  + volume_type = "ssd" 2025-07-23 00:01:37.331808 | orchestrator | 00:01:37.331 STDOUT terraform:  } 2025-07-23 00:01:37.331853 | orchestrator | 00:01:37.331 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-07-23 00:01:37.331893 | orchestrator | 00:01:37.331 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-23 00:01:37.331929 | orchestrator | 00:01:37.331 STDOUT terraform:  + attachment = (known after apply) 2025-07-23 00:01:37.331951 | orchestrator | 00:01:37.331 STDOUT terraform:  + availability_zone = "nova" 2025-07-23 00:01:37.331987 | orchestrator | 00:01:37.331 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.332020 | orchestrator | 00:01:37.331 STDOUT terraform:  + metadata = (known after apply) 2025-07-23 00:01:37.332057 | orchestrator | 00:01:37.332 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-07-23 00:01:37.332096 | orchestrator | 00:01:37.332 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.332112 | orchestrator | 00:01:37.332 STDOUT terraform:  + size = 20 2025-07-23 00:01:37.332136 | orchestrator | 00:01:37.332 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-23 00:01:37.332158 | orchestrator | 00:01:37.332 STDOUT terraform:  + volume_type = "ssd" 2025-07-23 00:01:37.332164 | orchestrator | 00:01:37.332 STDOUT terraform:  } 2025-07-23 00:01:37.332214 | orchestrator | 00:01:37.332 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-07-23 00:01:37.332262 | orchestrator | 00:01:37.332 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-23 00:01:37.332285 | orchestrator | 00:01:37.332 STDOUT terraform:  + attachment = (known after apply) 2025-07-23 00:01:37.332310 | orchestrator | 00:01:37.332 STDOUT terraform:  + availability_zone = "nova" 2025-07-23 00:01:37.332346 | 
orchestrator | 00:01:37.332 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.332379 | orchestrator | 00:01:37.332 STDOUT terraform:  + metadata = (known after apply) 2025-07-23 00:01:37.332416 | orchestrator | 00:01:37.332 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-07-23 00:01:37.332474 | orchestrator | 00:01:37.332 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.332494 | orchestrator | 00:01:37.332 STDOUT terraform:  + size = 20 2025-07-23 00:01:37.332516 | orchestrator | 00:01:37.332 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-23 00:01:37.332538 | orchestrator | 00:01:37.332 STDOUT terraform:  + volume_type = "ssd" 2025-07-23 00:01:37.332545 | orchestrator | 00:01:37.332 STDOUT terraform:  } 2025-07-23 00:01:37.332589 | orchestrator | 00:01:37.332 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-07-23 00:01:37.332629 | orchestrator | 00:01:37.332 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-23 00:01:37.332664 | orchestrator | 00:01:37.332 STDOUT terraform:  + attachment = (known after apply) 2025-07-23 00:01:37.332687 | orchestrator | 00:01:37.332 STDOUT terraform:  + availability_zone = "nova" 2025-07-23 00:01:37.332722 | orchestrator | 00:01:37.332 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.332755 | orchestrator | 00:01:37.332 STDOUT terraform:  + metadata = (known after apply) 2025-07-23 00:01:37.332793 | orchestrator | 00:01:37.332 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-07-23 00:01:37.332827 | orchestrator | 00:01:37.332 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.332851 | orchestrator | 00:01:37.332 STDOUT terraform:  + size = 20 2025-07-23 00:01:37.332869 | orchestrator | 00:01:37.332 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-23 00:01:37.332893 | orchestrator | 00:01:37.332 STDOUT terraform:  + volume_type = "ssd" 
2025-07-23 00:01:37.332899 | orchestrator | 00:01:37.332 STDOUT terraform:  } 2025-07-23 00:01:37.332944 | orchestrator | 00:01:37.332 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-07-23 00:01:37.332986 | orchestrator | 00:01:37.332 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-23 00:01:37.333019 | orchestrator | 00:01:37.332 STDOUT terraform:  + attachment = (known after apply) 2025-07-23 00:01:37.333043 | orchestrator | 00:01:37.333 STDOUT terraform:  + availability_zone = "nova" 2025-07-23 00:01:37.333078 | orchestrator | 00:01:37.333 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.333111 | orchestrator | 00:01:37.333 STDOUT terraform:  + metadata = (known after apply) 2025-07-23 00:01:37.333148 | orchestrator | 00:01:37.333 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-07-23 00:01:37.333182 | orchestrator | 00:01:37.333 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.333204 | orchestrator | 00:01:37.333 STDOUT terraform:  + size = 20 2025-07-23 00:01:37.333228 | orchestrator | 00:01:37.333 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-23 00:01:37.333252 | orchestrator | 00:01:37.333 STDOUT terraform:  + volume_type = "ssd" 2025-07-23 00:01:37.333258 | orchestrator | 00:01:37.333 STDOUT terraform:  } 2025-07-23 00:01:37.333301 | orchestrator | 00:01:37.333 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-07-23 00:01:37.333343 | orchestrator | 00:01:37.333 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-07-23 00:01:37.333376 | orchestrator | 00:01:37.333 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-23 00:01:37.333410 | orchestrator | 00:01:37.333 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-23 00:01:37.333486 | orchestrator | 00:01:37.333 STDOUT terraform:  + all_metadata = (known after apply) 
2025-07-23 00:01:37.333497 | orchestrator | 00:01:37.333 STDOUT terraform:  + all_tags = (known after apply) 2025-07-23 00:01:37.333506 | orchestrator | 00:01:37.333 STDOUT terraform:  + availability_zone = "nova" 2025-07-23 00:01:37.333519 | orchestrator | 00:01:37.333 STDOUT terraform:  + config_drive = true 2025-07-23 00:01:37.333548 | orchestrator | 00:01:37.333 STDOUT terraform:  + created = (known after apply) 2025-07-23 00:01:37.333581 | orchestrator | 00:01:37.333 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-23 00:01:37.333610 | orchestrator | 00:01:37.333 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-07-23 00:01:37.333633 | orchestrator | 00:01:37.333 STDOUT terraform:  + force_delete = false 2025-07-23 00:01:37.333666 | orchestrator | 00:01:37.333 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-23 00:01:37.333702 | orchestrator | 00:01:37.333 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.333737 | orchestrator | 00:01:37.333 STDOUT terraform:  + image_id = (known after apply) 2025-07-23 00:01:37.333768 | orchestrator | 00:01:37.333 STDOUT terraform:  + image_name = (known after apply) 2025-07-23 00:01:37.333791 | orchestrator | 00:01:37.333 STDOUT terraform:  + key_pair = "testbed" 2025-07-23 00:01:37.333822 | orchestrator | 00:01:37.333 STDOUT terraform:  + name = "testbed-manager" 2025-07-23 00:01:37.333845 | orchestrator | 00:01:37.333 STDOUT terraform:  + power_state = "active" 2025-07-23 00:01:37.333880 | orchestrator | 00:01:37.333 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.333913 | orchestrator | 00:01:37.333 STDOUT terraform:  + security_groups = (known after apply) 2025-07-23 00:01:37.333935 | orchestrator | 00:01:37.333 STDOUT terraform:  + stop_before_destroy = false 2025-07-23 00:01:37.333969 | orchestrator | 00:01:37.333 STDOUT terraform:  + updated = (known after apply) 2025-07-23 00:01:37.334038 | orchestrator | 00:01:37.333 STDOUT terraform:  + 
user_data = (sensitive value) 2025-07-23 00:01:37.334057 | orchestrator | 00:01:37.334 STDOUT terraform:  + block_device { 2025-07-23 00:01:37.334080 | orchestrator | 00:01:37.334 STDOUT terraform:  + boot_index = 0 2025-07-23 00:01:37.334109 | orchestrator | 00:01:37.334 STDOUT terraform:  + delete_on_termination = false 2025-07-23 00:01:37.334143 | orchestrator | 00:01:37.334 STDOUT terraform:  + destination_type = "volume" 2025-07-23 00:01:37.334165 | orchestrator | 00:01:37.334 STDOUT terraform:  + multiattach = false 2025-07-23 00:01:37.334193 | orchestrator | 00:01:37.334 STDOUT terraform:  + source_type = "volume" 2025-07-23 00:01:37.334230 | orchestrator | 00:01:37.334 STDOUT terraform:  + uuid = (known after apply) 2025-07-23 00:01:37.334244 | orchestrator | 00:01:37.334 STDOUT terraform:  } 2025-07-23 00:01:37.334258 | orchestrator | 00:01:37.334 STDOUT terraform:  + network { 2025-07-23 00:01:37.334279 | orchestrator | 00:01:37.334 STDOUT terraform:  + access_network = false 2025-07-23 00:01:37.334309 | orchestrator | 00:01:37.334 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-23 00:01:37.334341 | orchestrator | 00:01:37.334 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-23 00:01:37.334369 | orchestrator | 00:01:37.334 STDOUT terraform:  + mac = (known after apply) 2025-07-23 00:01:37.334400 | orchestrator | 00:01:37.334 STDOUT terraform:  + name = (known after apply) 2025-07-23 00:01:37.334431 | orchestrator | 00:01:37.334 STDOUT terraform:  + port = (known after apply) 2025-07-23 00:01:37.334486 | orchestrator | 00:01:37.334 STDOUT terraform:  + uuid = (known after apply) 2025-07-23 00:01:37.334493 | orchestrator | 00:01:37.334 STDOUT terraform:  } 2025-07-23 00:01:37.334510 | orchestrator | 00:01:37.334 STDOUT terraform:  } 2025-07-23 00:01:37.334551 | orchestrator | 00:01:37.334 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-07-23 00:01:37.334591 | orchestrator | 00:01:37.334 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-23 00:01:37.334624 | orchestrator | 00:01:37.334 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-23 00:01:37.334659 | orchestrator | 00:01:37.334 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-23 00:01:37.334691 | orchestrator | 00:01:37.334 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-23 00:01:37.334724 | orchestrator | 00:01:37.334 STDOUT terraform:  + all_tags = (known after apply) 2025-07-23 00:01:37.334747 | orchestrator | 00:01:37.334 STDOUT terraform:  + availability_zone = "nova" 2025-07-23 00:01:37.334766 | orchestrator | 00:01:37.334 STDOUT terraform:  + config_drive = true 2025-07-23 00:01:37.334801 | orchestrator | 00:01:37.334 STDOUT terraform:  + created = (known after apply) 2025-07-23 00:01:37.334834 | orchestrator | 00:01:37.334 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-23 00:01:37.334864 | orchestrator | 00:01:37.334 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-23 00:01:37.334886 | orchestrator | 00:01:37.334 STDOUT terraform:  + force_delete = false 2025-07-23 00:01:37.334918 | orchestrator | 00:01:37.334 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-23 00:01:37.334953 | orchestrator | 00:01:37.334 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.335002 | orchestrator | 00:01:37.334 STDOUT terraform:  + image_id = (known after apply) 2025-07-23 00:01:37.335021 | orchestrator | 00:01:37.334 STDOUT terraform:  + image_name = (known after apply) 2025-07-23 00:01:37.335044 | orchestrator | 00:01:37.335 STDOUT terraform:  + key_pair = "testbed" 2025-07-23 00:01:37.335075 | orchestrator | 00:01:37.335 STDOUT terraform:  + name = "testbed-node-0" 2025-07-23 00:01:37.335097 | orchestrator | 00:01:37.335 STDOUT terraform:  + power_state = "active" 2025-07-23 00:01:37.335131 | orchestrator | 00:01:37.335 STDOUT terraform:  + region = (known after 
apply) 2025-07-23 00:01:37.335164 | orchestrator | 00:01:37.335 STDOUT terraform:  + security_groups = (known after apply) 2025-07-23 00:01:37.335187 | orchestrator | 00:01:37.335 STDOUT terraform:  + stop_before_destroy = false 2025-07-23 00:01:37.335220 | orchestrator | 00:01:37.335 STDOUT terraform:  + updated = (known after apply) 2025-07-23 00:01:37.335267 | orchestrator | 00:01:37.335 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-23 00:01:37.335284 | orchestrator | 00:01:37.335 STDOUT terraform:  + block_device { 2025-07-23 00:01:37.335308 | orchestrator | 00:01:37.335 STDOUT terraform:  + boot_index = 0 2025-07-23 00:01:37.335335 | orchestrator | 00:01:37.335 STDOUT terraform:  + delete_on_termination = false 2025-07-23 00:01:37.335362 | orchestrator | 00:01:37.335 STDOUT terraform:  + destination_type = "volume" 2025-07-23 00:01:37.335391 | orchestrator | 00:01:37.335 STDOUT terraform:  + multiattach = false 2025-07-23 00:01:37.335420 | orchestrator | 00:01:37.335 STDOUT terraform:  + source_type = "volume" 2025-07-23 00:01:37.335466 | orchestrator | 00:01:37.335 STDOUT terraform:  + uuid = (known after apply) 2025-07-23 00:01:37.335474 | orchestrator | 00:01:37.335 STDOUT terraform:  } 2025-07-23 00:01:37.335489 | orchestrator | 00:01:37.335 STDOUT terraform:  + network { 2025-07-23 00:01:37.335511 | orchestrator | 00:01:37.335 STDOUT terraform:  + access_network = false 2025-07-23 00:01:37.335545 | orchestrator | 00:01:37.335 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-23 00:01:37.335570 | orchestrator | 00:01:37.335 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-23 00:01:37.335603 | orchestrator | 00:01:37.335 STDOUT terraform:  + mac = (known after apply) 2025-07-23 00:01:37.335634 | orchestrator | 00:01:37.335 STDOUT terraform:  + name = (known after apply) 2025-07-23 00:01:37.335673 | orchestrator | 00:01:37.335 STDOUT terraform:  + port = (known after apply) 2025-07-23 
00:01:37.335703 | orchestrator | 00:01:37.335 STDOUT terraform:  + uuid = (known after apply) 2025-07-23 00:01:37.335710 | orchestrator | 00:01:37.335 STDOUT terraform:  } 2025-07-23 00:01:37.335725 | orchestrator | 00:01:37.335 STDOUT terraform:  } 2025-07-23 00:01:37.335767 | orchestrator | 00:01:37.335 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-07-23 00:01:37.335806 | orchestrator | 00:01:37.335 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-23 00:01:37.335840 | orchestrator | 00:01:37.335 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-23 00:01:37.335873 | orchestrator | 00:01:37.335 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-23 00:01:37.335907 | orchestrator | 00:01:37.335 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-23 00:01:37.335941 | orchestrator | 00:01:37.335 STDOUT terraform:  + all_tags = (known after apply) 2025-07-23 00:01:37.335964 | orchestrator | 00:01:37.335 STDOUT terraform:  + availability_zone = "nova" 2025-07-23 00:01:37.335984 | orchestrator | 00:01:37.335 STDOUT terraform:  + config_drive = true 2025-07-23 00:01:37.336018 | orchestrator | 00:01:37.335 STDOUT terraform:  + created = (known after apply) 2025-07-23 00:01:37.336053 | orchestrator | 00:01:37.336 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-23 00:01:37.336081 | orchestrator | 00:01:37.336 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-23 00:01:37.336103 | orchestrator | 00:01:37.336 STDOUT terraform:  + force_delete = false 2025-07-23 00:01:37.336136 | orchestrator | 00:01:37.336 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-23 00:01:37.336171 | orchestrator | 00:01:37.336 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.336207 | orchestrator | 00:01:37.336 STDOUT terraform:  + image_id = (known after apply) 2025-07-23 00:01:37.336241 | orchestrator | 00:01:37.336 STDOUT 
terraform:  + image_name = (known after apply) 2025-07-23 00:01:37.336265 | orchestrator | 00:01:37.336 STDOUT terraform:  + key_pair = "testbed" 2025-07-23 00:01:37.336294 | orchestrator | 00:01:37.336 STDOUT terraform:  + name = "testbed-node-1" 2025-07-23 00:01:37.336317 | orchestrator | 00:01:37.336 STDOUT terraform:  + power_state = "active" 2025-07-23 00:01:37.336352 | orchestrator | 00:01:37.336 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.336385 | orchestrator | 00:01:37.336 STDOUT terraform:  + security_groups = (known after apply) 2025-07-23 00:01:37.336406 | orchestrator | 00:01:37.336 STDOUT terraform:  + stop_before_destroy = false 2025-07-23 00:01:37.336440 | orchestrator | 00:01:37.336 STDOUT terraform:  + updated = (known after apply) 2025-07-23 00:01:37.336511 | orchestrator | 00:01:37.336 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-23 00:01:37.336527 | orchestrator | 00:01:37.336 STDOUT terraform:  + block_device { 2025-07-23 00:01:37.336550 | orchestrator | 00:01:37.336 STDOUT terraform:  + boot_index = 0 2025-07-23 00:01:37.336577 | orchestrator | 00:01:37.336 STDOUT terraform:  + delete_on_termination = false 2025-07-23 00:01:37.336605 | orchestrator | 00:01:37.336 STDOUT terraform:  + destination_type = "volume" 2025-07-23 00:01:37.336634 | orchestrator | 00:01:37.336 STDOUT terraform:  + multiattach = false 2025-07-23 00:01:37.336663 | orchestrator | 00:01:37.336 STDOUT terraform:  + source_type = "volume" 2025-07-23 00:01:37.336701 | orchestrator | 00:01:37.336 STDOUT terraform:  + uuid = (known after apply) 2025-07-23 00:01:37.336708 | orchestrator | 00:01:37.336 STDOUT terraform:  } 2025-07-23 00:01:37.336725 | orchestrator | 00:01:37.336 STDOUT terraform:  + network { 2025-07-23 00:01:37.336744 | orchestrator | 00:01:37.336 STDOUT terraform:  + access_network = false 2025-07-23 00:01:37.336774 | orchestrator | 00:01:37.336 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-07-23 00:01:37.336804 | orchestrator | 00:01:37.336 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-23 00:01:37.336835 | orchestrator | 00:01:37.336 STDOUT terraform:  + mac = (known after apply) 2025-07-23 00:01:37.336865 | orchestrator | 00:01:37.336 STDOUT terraform:  + name = (known after apply) 2025-07-23 00:01:37.336896 | orchestrator | 00:01:37.336 STDOUT terraform:  + port = (known after apply) 2025-07-23 00:01:37.336926 | orchestrator | 00:01:37.336 STDOUT terraform:  + uuid = (known after apply) 2025-07-23 00:01:37.336933 | orchestrator | 00:01:37.336 STDOUT terraform:  } 2025-07-23 00:01:37.336948 | orchestrator | 00:01:37.336 STDOUT terraform:  } 2025-07-23 00:01:37.337066 | orchestrator | 00:01:37.337 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-07-23 00:01:37.337105 | orchestrator | 00:01:37.337 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-23 00:01:37.337139 | orchestrator | 00:01:37.337 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-23 00:01:37.337174 | orchestrator | 00:01:37.337 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-23 00:01:37.337207 | orchestrator | 00:01:37.337 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-23 00:01:37.337248 | orchestrator | 00:01:37.337 STDOUT terraform:  + all_tags = (known after apply) 2025-07-23 00:01:37.337264 | orchestrator | 00:01:37.337 STDOUT terraform:  + availability_zone = "nova" 2025-07-23 00:01:37.337284 | orchestrator | 00:01:37.337 STDOUT terraform:  + config_drive = true 2025-07-23 00:01:37.337317 | orchestrator | 00:01:37.337 STDOUT terraform:  + created = (known after apply) 2025-07-23 00:01:37.337350 | orchestrator | 00:01:37.337 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-23 00:01:37.337381 | orchestrator | 00:01:37.337 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-23 00:01:37.337403 | orchestrator | 00:01:37.337 
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-23 00:01:37.348573 | orchestrator | 00:01:37.348 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-23 00:01:37.348605 | orchestrator | 00:01:37.348 STDOUT terraform:  + all_tags = (known after apply) 2025-07-23 00:01:37.348636 | orchestrator | 00:01:37.348 STDOUT terraform:  + device_id = (known after apply) 2025-07-23 00:01:37.348671 | orchestrator | 00:01:37.348 STDOUT terraform:  + device_owner = (known after apply) 2025-07-23 00:01:37.348704 | orchestrator | 00:01:37.348 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-23 00:01:37.348734 | orchestrator | 00:01:37.348 STDOUT terraform:  + dns_name = (known after apply) 2025-07-23 00:01:37.348797 | orchestrator | 00:01:37.348 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.348804 | orchestrator | 00:01:37.348 STDOUT terraform:  + mac_address = (known after apply) 2025-07-23 00:01:37.348841 | orchestrator | 00:01:37.348 STDOUT terraform:  + network_id = (known after apply) 2025-07-23 00:01:37.348864 | orchestrator | 00:01:37.348 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-23 00:01:37.348906 | orchestrator | 00:01:37.348 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-23 00:01:37.348932 | orchestrator | 00:01:37.348 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.348965 | orchestrator | 00:01:37.348 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-23 00:01:37.348998 | orchestrator | 00:01:37.348 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-23 00:01:37.349019 | orchestrator | 00:01:37.348 STDOUT terraform:  + allowed_address_pairs { 2025-07-23 00:01:37.349086 | orchestrator | 00:01:37.349 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-23 00:01:37.349091 | orchestrator | 00:01:37.349 STDOUT terraform:  } 2025-07-23 00:01:37.349095 | orchestrator | 00:01:37.349 STDOUT terraform:  
+ allowed_address_pairs { 2025-07-23 00:01:37.349100 | orchestrator | 00:01:37.349 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-23 00:01:37.349104 | orchestrator | 00:01:37.349 STDOUT terraform:  } 2025-07-23 00:01:37.349110 | orchestrator | 00:01:37.349 STDOUT terraform:  + allowed_address_pairs { 2025-07-23 00:01:37.349143 | orchestrator | 00:01:37.349 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-23 00:01:37.349153 | orchestrator | 00:01:37.349 STDOUT terraform:  } 2025-07-23 00:01:37.349169 | orchestrator | 00:01:37.349 STDOUT terraform:  + allowed_address_pairs { 2025-07-23 00:01:37.349234 | orchestrator | 00:01:37.349 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-23 00:01:37.349239 | orchestrator | 00:01:37.349 STDOUT terraform:  } 2025-07-23 00:01:37.349243 | orchestrator | 00:01:37.349 STDOUT terraform:  + binding (known after apply) 2025-07-23 00:01:37.349247 | orchestrator | 00:01:37.349 STDOUT terraform:  + fixed_ip { 2025-07-23 00:01:37.349253 | orchestrator | 00:01:37.349 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-07-23 00:01:37.349277 | orchestrator | 00:01:37.349 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-23 00:01:37.349365 | orchestrator | 00:01:37.349 STDOUT terraform:  } 2025-07-23 00:01:37.349371 | orchestrator | 00:01:37.349 STDOUT terraform:  } 2025-07-23 00:01:37.349375 | orchestrator | 00:01:37.349 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-07-23 00:01:37.349515 | orchestrator | 00:01:37.349 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-23 00:01:37.349635 | orchestrator | 00:01:37.349 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-23 00:01:37.349645 | orchestrator | 00:01:37.349 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-23 00:01:37.349672 | orchestrator | 00:01:37.349 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-07-23 00:01:37.349709 | orchestrator | 00:01:37.349 STDOUT terraform:  + all_tags = (known after apply) 2025-07-23 00:01:37.349753 | orchestrator | 00:01:37.349 STDOUT terraform:  + device_id = (known after apply) 2025-07-23 00:01:37.349826 | orchestrator | 00:01:37.349 STDOUT terraform:  + device_owner = (known after apply) 2025-07-23 00:01:37.349834 | orchestrator | 00:01:37.349 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-23 00:01:37.349850 | orchestrator | 00:01:37.349 STDOUT terraform:  + dns_name = (known after apply) 2025-07-23 00:01:37.349898 | orchestrator | 00:01:37.349 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.349935 | orchestrator | 00:01:37.349 STDOUT terraform:  + mac_address = (known after apply) 2025-07-23 00:01:37.349968 | orchestrator | 00:01:37.349 STDOUT terraform:  + network_id = (known after apply) 2025-07-23 00:01:37.350059 | orchestrator | 00:01:37.349 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-23 00:01:37.350065 | orchestrator | 00:01:37.349 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-23 00:01:37.350101 | orchestrator | 00:01:37.350 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.350107 | orchestrator | 00:01:37.350 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-23 00:01:37.350188 | orchestrator | 00:01:37.350 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-23 00:01:37.350194 | orchestrator | 00:01:37.350 STDOUT terraform:  + allowed_address_pairs { 2025-07-23 00:01:37.350210 | orchestrator | 00:01:37.350 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-23 00:01:37.350220 | orchestrator | 00:01:37.350 STDOUT terraform:  } 2025-07-23 00:01:37.350248 | orchestrator | 00:01:37.350 STDOUT terraform:  + allowed_address_pairs { 2025-07-23 00:01:37.350282 | orchestrator | 00:01:37.350 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-23 00:01:37.350288 | 
orchestrator | 00:01:37.350 STDOUT terraform:  } 2025-07-23 00:01:37.350344 | orchestrator | 00:01:37.350 STDOUT terraform:  + allowed_address_pairs { 2025-07-23 00:01:37.350352 | orchestrator | 00:01:37.350 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-23 00:01:37.350356 | orchestrator | 00:01:37.350 STDOUT terraform:  } 2025-07-23 00:01:37.350361 | orchestrator | 00:01:37.350 STDOUT terraform:  + allowed_address_pairs { 2025-07-23 00:01:37.350422 | orchestrator | 00:01:37.350 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-23 00:01:37.350430 | orchestrator | 00:01:37.350 STDOUT terraform:  } 2025-07-23 00:01:37.350434 | orchestrator | 00:01:37.350 STDOUT terraform:  + binding (known after apply) 2025-07-23 00:01:37.350439 | orchestrator | 00:01:37.350 STDOUT terraform:  + fixed_ip { 2025-07-23 00:01:37.350505 | orchestrator | 00:01:37.350 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-07-23 00:01:37.350513 | orchestrator | 00:01:37.350 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-23 00:01:37.350519 | orchestrator | 00:01:37.350 STDOUT terraform:  } 2025-07-23 00:01:37.350525 | orchestrator | 00:01:37.350 STDOUT terraform:  } 2025-07-23 00:01:37.350611 | orchestrator | 00:01:37.350 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-07-23 00:01:37.350616 | orchestrator | 00:01:37.350 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-23 00:01:37.350648 | orchestrator | 00:01:37.350 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-23 00:01:37.350706 | orchestrator | 00:01:37.350 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-23 00:01:37.350713 | orchestrator | 00:01:37.350 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-23 00:01:37.350748 | orchestrator | 00:01:37.350 STDOUT terraform:  + all_tags = (known after apply) 2025-07-23 00:01:37.350807 | orchestrator | 
00:01:37.350 STDOUT terraform:  + device_id = (known after apply) 2025-07-23 00:01:37.350838 | orchestrator | 00:01:37.350 STDOUT terraform:  + device_owner = (known after apply) 2025-07-23 00:01:37.350865 | orchestrator | 00:01:37.350 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-23 00:01:37.350908 | orchestrator | 00:01:37.350 STDOUT terraform:  + dns_name = (known after apply) 2025-07-23 00:01:37.350936 | orchestrator | 00:01:37.350 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.350973 | orchestrator | 00:01:37.350 STDOUT terraform:  + mac_address = (known after apply) 2025-07-23 00:01:37.350997 | orchestrator | 00:01:37.350 STDOUT terraform:  + network_id = (known after apply) 2025-07-23 00:01:37.351034 | orchestrator | 00:01:37.350 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-23 00:01:37.351070 | orchestrator | 00:01:37.351 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-23 00:01:37.351104 | orchestrator | 00:01:37.351 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.351153 | orchestrator | 00:01:37.351 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-23 00:01:37.351217 | orchestrator | 00:01:37.351 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-23 00:01:37.351222 | orchestrator | 00:01:37.351 STDOUT terraform:  + allowed_address_pairs { 2025-07-23 00:01:37.351242 | orchestrator | 00:01:37.351 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-23 00:01:37.351248 | orchestrator | 00:01:37.351 STDOUT terraform:  } 2025-07-23 00:01:37.351284 | orchestrator | 00:01:37.351 STDOUT terraform:  + allowed_address_pairs { 2025-07-23 00:01:37.351331 | orchestrator | 00:01:37.351 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-23 00:01:37.351340 | orchestrator | 00:01:37.351 STDOUT terraform:  } 2025-07-23 00:01:37.351345 | orchestrator | 00:01:37.351 STDOUT terraform:  + allowed_address_pairs { 2025-07-23 
00:01:37.351373 | orchestrator | 00:01:37.351 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-23 00:01:37.351379 | orchestrator | 00:01:37.351 STDOUT terraform:  } 2025-07-23 00:01:37.351426 | orchestrator | 00:01:37.351 STDOUT terraform:  + allowed_address_pairs { 2025-07-23 00:01:37.351469 | orchestrator | 00:01:37.351 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-23 00:01:37.351475 | orchestrator | 00:01:37.351 STDOUT terraform:  } 2025-07-23 00:01:37.351554 | orchestrator | 00:01:37.351 STDOUT terraform:  + binding (known after apply) 2025-07-23 00:01:37.351559 | orchestrator | 00:01:37.351 STDOUT terraform:  + fixed_ip { 2025-07-23 00:01:37.351563 | orchestrator | 00:01:37.351 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-07-23 00:01:37.351567 | orchestrator | 00:01:37.351 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-23 00:01:37.351570 | orchestrator | 00:01:37.351 STDOUT terraform:  } 2025-07-23 00:01:37.351575 | orchestrator | 00:01:37.351 STDOUT terraform:  } 2025-07-23 00:01:37.351605 | orchestrator | 00:01:37.351 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-07-23 00:01:37.351686 | orchestrator | 00:01:37.351 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-23 00:01:37.351693 | orchestrator | 00:01:37.351 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-23 00:01:37.351755 | orchestrator | 00:01:37.351 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-23 00:01:37.351764 | orchestrator | 00:01:37.351 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-23 00:01:37.351802 | orchestrator | 00:01:37.351 STDOUT terraform:  + all_tags = (known after apply) 2025-07-23 00:01:37.351839 | orchestrator | 00:01:37.351 STDOUT terraform:  + device_id = (known after apply) 2025-07-23 00:01:37.351868 | orchestrator | 00:01:37.351 STDOUT terraform:  + device_owner = (known after 
apply) 2025-07-23 00:01:37.351905 | orchestrator | 00:01:37.351 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-23 00:01:37.351932 | orchestrator | 00:01:37.351 STDOUT terraform:  + dns_name = (known after apply) 2025-07-23 00:01:37.351972 | orchestrator | 00:01:37.351 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.352045 | orchestrator | 00:01:37.351 STDOUT terraform:  + mac_address = (known after apply) 2025-07-23 00:01:37.352051 | orchestrator | 00:01:37.351 STDOUT terraform:  + network_id = (known after apply) 2025-07-23 00:01:37.352070 | orchestrator | 00:01:37.352 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-23 00:01:37.352107 | orchestrator | 00:01:37.352 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-23 00:01:37.352146 | orchestrator | 00:01:37.352 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.352199 | orchestrator | 00:01:37.352 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-23 00:01:37.352205 | orchestrator | 00:01:37.352 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-23 00:01:37.352235 | orchestrator | 00:01:37.352 STDOUT terraform:  + allowed_address_pairs { 2025-07-23 00:01:37.352242 | orchestrator | 00:01:37.352 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-23 00:01:37.352261 | orchestrator | 00:01:37.352 STDOUT terraform:  } 2025-07-23 00:01:37.352284 | orchestrator | 00:01:37.352 STDOUT terraform:  + allowed_address_pairs { 2025-07-23 00:01:37.352301 | orchestrator | 00:01:37.352 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-23 00:01:37.352349 | orchestrator | 00:01:37.352 STDOUT terraform:  } 2025-07-23 00:01:37.352357 | orchestrator | 00:01:37.352 STDOUT terraform:  + allowed_address_pairs { 2025-07-23 00:01:37.352362 | orchestrator | 00:01:37.352 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-23 00:01:37.352366 | orchestrator | 00:01:37.352 STDOUT terraform:  } 
2025-07-23 00:01:37.352382 | orchestrator | 00:01:37.352 STDOUT terraform:  + allowed_address_pairs { 2025-07-23 00:01:37.352426 | orchestrator | 00:01:37.352 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-23 00:01:37.352431 | orchestrator | 00:01:37.352 STDOUT terraform:  } 2025-07-23 00:01:37.352436 | orchestrator | 00:01:37.352 STDOUT terraform:  + binding (known after apply) 2025-07-23 00:01:37.352476 | orchestrator | 00:01:37.352 STDOUT terraform:  + fixed_ip { 2025-07-23 00:01:37.352483 | orchestrator | 00:01:37.352 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-07-23 00:01:37.352524 | orchestrator | 00:01:37.352 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-23 00:01:37.352532 | orchestrator | 00:01:37.352 STDOUT terraform:  } 2025-07-23 00:01:37.352537 | orchestrator | 00:01:37.352 STDOUT terraform:  } 2025-07-23 00:01:37.352577 | orchestrator | 00:01:37.352 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-07-23 00:01:37.352622 | orchestrator | 00:01:37.352 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-07-23 00:01:37.352666 | orchestrator | 00:01:37.352 STDOUT terraform:  + force_destroy = false 2025-07-23 00:01:37.352673 | orchestrator | 00:01:37.352 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.352682 | orchestrator | 00:01:37.352 STDOUT terraform:  + port_id = (known after apply) 2025-07-23 00:01:37.352725 | orchestrator | 00:01:37.352 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.352775 | orchestrator | 00:01:37.352 STDOUT terraform:  + router_id = (known after apply) 2025-07-23 00:01:37.352780 | orchestrator | 00:01:37.352 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-23 00:01:37.352785 | orchestrator | 00:01:37.352 STDOUT terraform:  } 2025-07-23 00:01:37.352821 | orchestrator | 00:01:37.352 STDOUT terraform:  # openstack_networking_router_v2.router will be 
created 2025-07-23 00:01:37.352846 | orchestrator | 00:01:37.352 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-07-23 00:01:37.352886 | orchestrator | 00:01:37.352 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-23 00:01:37.352927 | orchestrator | 00:01:37.352 STDOUT terraform:  + all_tags = (known after apply) 2025-07-23 00:01:37.352936 | orchestrator | 00:01:37.352 STDOUT terraform:  + availability_zone_hints = [ 2025-07-23 00:01:37.352943 | orchestrator | 00:01:37.352 STDOUT terraform:  + "nova", 2025-07-23 00:01:37.353039 | orchestrator | 00:01:37.352 STDOUT terraform:  ] 2025-07-23 00:01:37.353044 | orchestrator | 00:01:37.352 STDOUT terraform:  + distributed = (known after apply) 2025-07-23 00:01:37.353048 | orchestrator | 00:01:37.352 STDOUT terraform:  + enable_snat = (known after apply) 2025-07-23 00:01:37.353065 | orchestrator | 00:01:37.353 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-07-23 00:01:37.353103 | orchestrator | 00:01:37.353 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-07-23 00:01:37.353134 | orchestrator | 00:01:37.353 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.353184 | orchestrator | 00:01:37.353 STDOUT terraform:  + name = "testbed" 2025-07-23 00:01:37.353193 | orchestrator | 00:01:37.353 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.353227 | orchestrator | 00:01:37.353 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-23 00:01:37.353315 | orchestrator | 00:01:37.353 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-07-23 00:01:37.353324 | orchestrator | 00:01:37.353 STDOUT terraform:  } 2025-07-23 00:01:37.353328 | orchestrator | 00:01:37.353 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-07-23 00:01:37.353374 | orchestrator | 00:01:37.353 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-07-23 00:01:37.353407 | orchestrator | 00:01:37.353 STDOUT terraform:  + description = "ssh" 2025-07-23 00:01:37.353414 | orchestrator | 00:01:37.353 STDOUT terraform:  + direction = "ingress" 2025-07-23 00:01:37.353469 | orchestrator | 00:01:37.353 STDOUT terraform:  + ethertype = "IPv4" 2025-07-23 00:01:37.353511 | orchestrator | 00:01:37.353 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.353518 | orchestrator | 00:01:37.353 STDOUT terraform:  + port_range_max = 22 2025-07-23 00:01:37.353545 | orchestrator | 00:01:37.353 STDOUT terraform:  + port_range_min = 22 2025-07-23 00:01:37.353568 | orchestrator | 00:01:37.353 STDOUT terraform:  + protocol = "tcp" 2025-07-23 00:01:37.353606 | orchestrator | 00:01:37.353 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.353636 | orchestrator | 00:01:37.353 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-23 00:01:37.353674 | orchestrator | 00:01:37.353 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-23 00:01:37.353701 | orchestrator | 00:01:37.353 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-23 00:01:37.353734 | orchestrator | 00:01:37.353 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-23 00:01:37.353765 | orchestrator | 00:01:37.353 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-23 00:01:37.353771 | orchestrator | 00:01:37.353 STDOUT terraform:  } 2025-07-23 00:01:37.353826 | orchestrator | 00:01:37.353 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-07-23 00:01:37.353880 | orchestrator | 00:01:37.353 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-07-23 00:01:37.353901 | orchestrator | 00:01:37.353 STDOUT terraform:  + description = "wireguard" 2025-07-23 00:01:37.353934 | orchestrator 
| 00:01:37.353 STDOUT terraform:  + direction = "ingress" 2025-07-23 00:01:37.353962 | orchestrator | 00:01:37.353 STDOUT terraform:  + ethertype = "IPv4" 2025-07-23 00:01:37.353993 | orchestrator | 00:01:37.353 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.354035 | orchestrator | 00:01:37.353 STDOUT terraform:  + port_range_max = 51820 2025-07-23 00:01:37.358275 | orchestrator | 00:01:37.354 STDOUT terraform:  + port_range_min = 51820 2025-07-23 00:01:37.358330 | orchestrator | 00:01:37.358 STDOUT terraform:  + protocol = "udp" 2025-07-23 00:01:37.358336 | orchestrator | 00:01:37.358 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.358340 | orchestrator | 00:01:37.358 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-23 00:01:37.358349 | orchestrator | 00:01:37.358 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-23 00:01:37.358353 | orchestrator | 00:01:37.358 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-23 00:01:37.358364 | orchestrator | 00:01:37.358 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-23 00:01:37.358368 | orchestrator | 00:01:37.358 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-23 00:01:37.358373 | orchestrator | 00:01:37.358 STDOUT terraform:  } 2025-07-23 00:01:37.358415 | orchestrator | 00:01:37.358 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-07-23 00:01:37.358501 | orchestrator | 00:01:37.358 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-07-23 00:01:37.358532 | orchestrator | 00:01:37.358 STDOUT terraform:  + direction = "ingress" 2025-07-23 00:01:37.358559 | orchestrator | 00:01:37.358 STDOUT terraform:  + ethertype = "IPv4" 2025-07-23 00:01:37.358598 | orchestrator | 00:01:37.358 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.358624 | orchestrator | 
00:01:37.358 STDOUT terraform:  + protocol = "tcp" 2025-07-23 00:01:37.358663 | orchestrator | 00:01:37.358 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.358696 | orchestrator | 00:01:37.358 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-23 00:01:37.358730 | orchestrator | 00:01:37.358 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-23 00:01:37.358764 | orchestrator | 00:01:37.358 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-07-23 00:01:37.358798 | orchestrator | 00:01:37.358 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-23 00:01:37.358833 | orchestrator | 00:01:37.358 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-23 00:01:37.358847 | orchestrator | 00:01:37.358 STDOUT terraform:  } 2025-07-23 00:01:37.358897 | orchestrator | 00:01:37.358 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-07-23 00:01:37.358947 | orchestrator | 00:01:37.358 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-07-23 00:01:37.358977 | orchestrator | 00:01:37.358 STDOUT terraform:  + direction = "ingress" 2025-07-23 00:01:37.359001 | orchestrator | 00:01:37.358 STDOUT terraform:  + ethertype = "IPv4" 2025-07-23 00:01:37.359037 | orchestrator | 00:01:37.358 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.359063 | orchestrator | 00:01:37.359 STDOUT terraform:  + protocol = "udp" 2025-07-23 00:01:37.359102 | orchestrator | 00:01:37.359 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.359135 | orchestrator | 00:01:37.359 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-23 00:01:37.359170 | orchestrator | 00:01:37.359 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-23 00:01:37.359204 | orchestrator | 00:01:37.359 STDOUT terraform:  + remote_ip_prefix = 
"192.168.16.0/20" 2025-07-23 00:01:37.359238 | orchestrator | 00:01:37.359 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-23 00:01:37.359273 | orchestrator | 00:01:37.359 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-23 00:01:37.359279 | orchestrator | 00:01:37.359 STDOUT terraform:  } 2025-07-23 00:01:37.359332 | orchestrator | 00:01:37.359 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-07-23 00:01:37.359382 | orchestrator | 00:01:37.359 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-07-23 00:01:37.359410 | orchestrator | 00:01:37.359 STDOUT terraform:  + direction = "ingress" 2025-07-23 00:01:37.359464 | orchestrator | 00:01:37.359 STDOUT terraform:  + ethertype = "IPv4" 2025-07-23 00:01:37.359503 | orchestrator | 00:01:37.359 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.359529 | orchestrator | 00:01:37.359 STDOUT terraform:  + protocol = "icmp" 2025-07-23 00:01:37.359566 | orchestrator | 00:01:37.359 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.362184 | orchestrator | 00:01:37.362 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-23 00:01:37.362216 | orchestrator | 00:01:37.362 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-23 00:01:37.362221 | orchestrator | 00:01:37.362 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-23 00:01:37.362279 | orchestrator | 00:01:37.362 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-23 00:01:37.362287 | orchestrator | 00:01:37.362 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-23 00:01:37.362293 | orchestrator | 00:01:37.362 STDOUT terraform:  } 2025-07-23 00:01:37.362366 | orchestrator | 00:01:37.362 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-07-23 00:01:37.362399 | 
orchestrator | 00:01:37.362 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-07-23 00:01:37.362428 | orchestrator | 00:01:37.362 STDOUT terraform:  + direction = "ingress" 2025-07-23 00:01:37.362470 | orchestrator | 00:01:37.362 STDOUT terraform:  + ethertype = "IPv4" 2025-07-23 00:01:37.362538 | orchestrator | 00:01:37.362 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.362545 | orchestrator | 00:01:37.362 STDOUT terraform:  + protocol = "tcp" 2025-07-23 00:01:37.362584 | orchestrator | 00:01:37.362 STDOUT terraform:  + region = (known after apply) 2025-07-23 00:01:37.362620 | orchestrator | 00:01:37.362 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-23 00:01:37.362656 | orchestrator | 00:01:37.362 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-23 00:01:37.362685 | orchestrator | 00:01:37.362 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-23 00:01:37.362720 | orchestrator | 00:01:37.362 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-23 00:01:37.362755 | orchestrator | 00:01:37.362 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-23 00:01:37.362762 | orchestrator | 00:01:37.362 STDOUT terraform:  } 2025-07-23 00:01:37.362813 | orchestrator | 00:01:37.362 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-07-23 00:01:37.362861 | orchestrator | 00:01:37.362 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-07-23 00:01:37.362891 | orchestrator | 00:01:37.362 STDOUT terraform:  + direction = "ingress" 2025-07-23 00:01:37.362917 | orchestrator | 00:01:37.362 STDOUT terraform:  + ethertype = "IPv4" 2025-07-23 00:01:37.362953 | orchestrator | 00:01:37.362 STDOUT terraform:  + id = (known after apply) 2025-07-23 00:01:37.362977 | orchestrator | 00:01:37.362 STDOUT terraform:  + protocol = "udp" 
2025-07-23 00:01:37.363012 | orchestrator | 00:01:37.362 STDOUT terraform:  + region = (known after apply)
2025-07-23 00:01:37.363651 | orchestrator | 00:01:37.363 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-23 00:01:37.363727 | orchestrator | 00:01:37.363 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-23 00:01:37.363741 | orchestrator | 00:01:37.363 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-23 00:01:37.363792 | orchestrator | 00:01:37.363 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-23 00:01:37.363821 | orchestrator | 00:01:37.363 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-23 00:01:37.363828 | orchestrator | 00:01:37.363 STDOUT terraform:  }
2025-07-23 00:01:37.363893 | orchestrator | 00:01:37.363 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-07-23 00:01:37.363938 | orchestrator | 00:01:37.363 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-07-23 00:01:37.363966 | orchestrator | 00:01:37.363 STDOUT terraform:  + direction = "ingress"
2025-07-23 00:01:37.363994 | orchestrator | 00:01:37.363 STDOUT terraform:  + ethertype = "IPv4"
2025-07-23 00:01:37.364031 | orchestrator | 00:01:37.363 STDOUT terraform:  + id = (known after apply)
2025-07-23 00:01:37.364056 | orchestrator | 00:01:37.364 STDOUT terraform:  + protocol = "icmp"
2025-07-23 00:01:37.364093 | orchestrator | 00:01:37.364 STDOUT terraform:  + region = (known after apply)
2025-07-23 00:01:37.364128 | orchestrator | 00:01:37.364 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-23 00:01:37.364163 | orchestrator | 00:01:37.364 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-23 00:01:37.364191 | orchestrator | 00:01:37.364 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-23 00:01:37.364227 | orchestrator | 00:01:37.364 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-23 00:01:37.364262 | orchestrator | 00:01:37.364 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-23 00:01:37.364268 | orchestrator | 00:01:37.364 STDOUT terraform:  }
2025-07-23 00:01:37.364321 | orchestrator | 00:01:37.364 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-07-23 00:01:37.364368 | orchestrator | 00:01:37.364 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-07-23 00:01:37.364391 | orchestrator | 00:01:37.364 STDOUT terraform:  + description = "vrrp"
2025-07-23 00:01:37.364420 | orchestrator | 00:01:37.364 STDOUT terraform:  + direction = "ingress"
2025-07-23 00:01:37.364456 | orchestrator | 00:01:37.364 STDOUT terraform:  + ethertype = "IPv4"
2025-07-23 00:01:37.364490 | orchestrator | 00:01:37.364 STDOUT terraform:  + id = (known after apply)
2025-07-23 00:01:37.364514 | orchestrator | 00:01:37.364 STDOUT terraform:  + protocol = "112"
2025-07-23 00:01:37.364549 | orchestrator | 00:01:37.364 STDOUT terraform:  + region = (known after apply)
2025-07-23 00:01:37.364584 | orchestrator | 00:01:37.364 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-23 00:01:37.364619 | orchestrator | 00:01:37.364 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-23 00:01:37.364649 | orchestrator | 00:01:37.364 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-23 00:01:37.364684 | orchestrator | 00:01:37.364 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-23 00:01:37.364719 | orchestrator | 00:01:37.364 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-23 00:01:37.364725 | orchestrator | 00:01:37.364 STDOUT terraform:  }
2025-07-23 00:01:37.364774 | orchestrator | 00:01:37.364 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-07-23 00:01:37.364822 | orchestrator | 00:01:37.364 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-07-23 00:01:37.364850 | orchestrator | 00:01:37.364 STDOUT terraform:  + all_tags = (known after apply)
2025-07-23 00:01:37.364883 | orchestrator | 00:01:37.364 STDOUT terraform:  + description = "management security group"
2025-07-23 00:01:37.364912 | orchestrator | 00:01:37.364 STDOUT terraform:  + id = (known after apply)
2025-07-23 00:01:37.364940 | orchestrator | 00:01:37.364 STDOUT terraform:  + name = "testbed-management"
2025-07-23 00:01:37.364969 | orchestrator | 00:01:37.364 STDOUT terraform:  + region = (known after apply)
2025-07-23 00:01:37.364995 | orchestrator | 00:01:37.364 STDOUT terraform:  + stateful = (known after apply)
2025-07-23 00:01:37.365022 | orchestrator | 00:01:37.364 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-23 00:01:37.365028 | orchestrator | 00:01:37.365 STDOUT terraform:  }
2025-07-23 00:01:37.365075 | orchestrator | 00:01:37.365 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-07-23 00:01:37.365121 | orchestrator | 00:01:37.365 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-07-23 00:01:37.365151 | orchestrator | 00:01:37.365 STDOUT terraform:  + all_tags = (known after apply)
2025-07-23 00:01:37.365176 | orchestrator | 00:01:37.365 STDOUT terraform:  + description = "node security group"
2025-07-23 00:01:37.365203 | orchestrator | 00:01:37.365 STDOUT terraform:  + id = (known after apply)
2025-07-23 00:01:37.365241 | orchestrator | 00:01:37.365 STDOUT terraform:  + name = "testbed-node"
2025-07-23 00:01:37.365268 | orchestrator | 00:01:37.365 STDOUT terraform:  + region = (known after apply)
2025-07-23 00:01:37.365296 | orchestrator | 00:01:37.365 STDOUT terraform:  + stateful = (known after apply)
2025-07-23 00:01:37.365323 | orchestrator | 00:01:37.365 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-23 00:01:37.365346 | orchestrator | 00:01:37.365 STDOUT terraform:  }
2025-07-23 00:01:37.365384 | orchestrator | 00:01:37.365 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-07-23 00:01:37.365427 | orchestrator | 00:01:37.365 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-07-23 00:01:37.365603 | orchestrator | 00:01:37.365 STDOUT terraform:  + all_tags = (known after apply)
2025-07-23 00:01:37.365675 | orchestrator | 00:01:37.365 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-07-23 00:01:37.365693 | orchestrator | 00:01:37.365 STDOUT terraform:  + dns_nameservers = [
2025-07-23 00:01:37.365711 | orchestrator | 00:01:37.365 STDOUT terraform:  + "8.8.8.8",
2025-07-23 00:01:37.365750 | orchestrator | 00:01:37.365 STDOUT terraform:  + "9.9.9.9",
2025-07-23 00:01:37.365782 | orchestrator | 00:01:37.365 STDOUT terraform:  ]
2025-07-23 00:01:37.365803 | orchestrator | 00:01:37.365 STDOUT terraform:  + enable_dhcp = true
2025-07-23 00:01:37.365813 | orchestrator | 00:01:37.365 STDOUT terraform:  + gateway_ip = (known after apply)
2025-07-23 00:01:37.365823 | orchestrator | 00:01:37.365 STDOUT terraform:  + id = (known after apply)
2025-07-23 00:01:37.365832 | orchestrator | 00:01:37.365 STDOUT terraform:  + ip_version = 4
2025-07-23 00:01:37.365842 | orchestrator | 00:01:37.365 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-07-23 00:01:37.365852 | orchestrator | 00:01:37.365 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-07-23 00:01:37.365861 | orchestrator | 00:01:37.365 STDOUT terraform:  + name = "subnet-testbed-management"
2025-07-23 00:01:37.365871 | orchestrator | 00:01:37.365 STDOUT terraform:  + network_id = (known after apply)
2025-07-23 00:01:37.365880 | orchestrator | 00:01:37.365 STDOUT terraform:  + no_gateway = false
2025-07-23 00:01:37.365890 | orchestrator | 00:01:37.365 STDOUT terraform:  + region = (known after apply)
2025-07-23 00:01:37.365903 | orchestrator | 00:01:37.365 STDOUT terraform:  + service_types = (known after apply)
2025-07-23 00:01:37.365913 | orchestrator | 00:01:37.365 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-23 00:01:37.365922 | orchestrator | 00:01:37.365 STDOUT terraform:  + allocation_pool {
2025-07-23 00:01:37.365932 | orchestrator | 00:01:37.365 STDOUT terraform:  + end = "192.168.31.250"
2025-07-23 00:01:37.365942 | orchestrator | 00:01:37.365 STDOUT terraform:  + start = "192.168.31.200
2025-07-23 00:01:37.365955 | orchestrator | 00:01:37.365 STDOUT terraform: "
2025-07-23 00:01:37.365966 | orchestrator | 00:01:37.365 STDOUT terraform:  }
2025-07-23 00:01:37.365975 | orchestrator | 00:01:37.365 STDOUT terraform:  }
2025-07-23 00:01:37.365988 | orchestrator | 00:01:37.365 STDOUT terraform:  # terraform_data.image will be created
2025-07-23 00:01:37.365999 | orchestrator | 00:01:37.365 STDOUT terraform:  + resource "terraform_data" "image" {
2025-07-23 00:01:37.366053 | orchestrator | 00:01:37.365 STDOUT terraform:  + id = (known after apply)
2025-07-23 00:01:37.366072 | orchestrator | 00:01:37.365 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-07-23 00:01:37.366084 | orchestrator | 00:01:37.366 STDOUT terraform:  + output = (known after apply)
2025-07-23 00:01:37.366098 | orchestrator | 00:01:37.366 STDOUT terraform:  }
2025-07-23 00:01:37.366112 | orchestrator | 00:01:37.366 STDOUT terraform:  # terraform_data.image_node will be created
2025-07-23 00:01:37.366127 | orchestrator | 00:01:37.366 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-07-23 00:01:37.366172 | orchestrator | 00:01:37.366 STDOUT terraform:  + id = (known after apply)
2025-07-23 00:01:37.366188 | orchestrator | 00:01:37.366 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-07-23 00:01:37.366204 | orchestrator | 00:01:37.366 STDOUT terraform:  + output = (known after apply)
2025-07-23 00:01:37.366215 | orchestrator | 00:01:37.366 STDOUT terraform:  }
2025-07-23 00:01:37.366288 | orchestrator | 00:01:37.366 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-07-23 00:01:37.366311 | orchestrator | 00:01:37.366 STDOUT terraform: Changes to Outputs:
2025-07-23 00:01:37.366326 | orchestrator | 00:01:37.366 STDOUT terraform:  + manager_address = (sensitive value)
2025-07-23 00:01:37.366386 | orchestrator | 00:01:37.366 STDOUT terraform:  + private_key = (sensitive value)
2025-07-23 00:01:37.610425 | orchestrator | 00:01:37.607 STDOUT terraform: terraform_data.image_node: Creating...
2025-07-23 00:01:37.614366 | orchestrator | 00:01:37.613 STDOUT terraform: terraform_data.image: Creating...
2025-07-23 00:01:37.616259 | orchestrator | 00:01:37.614 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=dda33efa-eb8c-f813-f7e6-059853d961e4]
2025-07-23 00:01:37.618255 | orchestrator | 00:01:37.617 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=52144f00-bc18-7f07-cdcd-e9f074114de0]
2025-07-23 00:01:37.622151 | orchestrator | 00:01:37.622 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-07-23 00:01:37.639335 | orchestrator | 00:01:37.639 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-07-23 00:01:37.641421 | orchestrator | 00:01:37.641 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-07-23 00:01:37.641956 | orchestrator | 00:01:37.641 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-07-23 00:01:37.642123 | orchestrator | 00:01:37.642 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-07-23 00:01:37.642935 | orchestrator | 00:01:37.642 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-07-23 00:01:37.642974 | orchestrator | 00:01:37.642 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-07-23 00:01:37.649324 | orchestrator | 00:01:37.649 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-07-23 00:01:37.653992 | orchestrator | 00:01:37.653 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-07-23 00:01:37.656555 | orchestrator | 00:01:37.656 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-07-23 00:01:38.074719 | orchestrator | 00:01:38.073 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-07-23 00:01:38.083597 | orchestrator | 00:01:38.082 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-07-23 00:01:38.086695 | orchestrator | 00:01:38.086 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-07-23 00:01:38.100357 | orchestrator | 00:01:38.099 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-07-23 00:01:38.118139 | orchestrator | 00:01:38.117 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-07-23 00:01:38.125488 | orchestrator | 00:01:38.125 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-07-23 00:01:38.695690 | orchestrator | 00:01:38.695 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=efa91307-510a-4611-9254-96617d0ca6f0]
2025-07-23 00:01:38.708784 | orchestrator | 00:01:38.708 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-07-23 00:01:41.276739 | orchestrator | 00:01:41.276 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=26dd0fc0-7c8e-41b2-a72b-af248b39188e]
2025-07-23 00:01:41.283621 | orchestrator | 00:01:41.283 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-07-23 00:01:41.295373 | orchestrator | 00:01:41.294 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=ab57e2a3-6ec4-4ecd-8292-234effa6f9fc]
2025-07-23 00:01:41.299811 | orchestrator | 00:01:41.299 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=59086376-bc1e-4740-baf5-6fec432996bb]
2025-07-23 00:01:41.301143 | orchestrator | 00:01:41.300 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-07-23 00:01:41.302550 | orchestrator | 00:01:41.302 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=729cb6eb-a506-48fa-b7c2-0ab06cb3eb1b]
2025-07-23 00:01:41.306588 | orchestrator | 00:01:41.306 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-07-23 00:01:41.307373 | orchestrator | 00:01:41.307 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-07-23 00:01:41.325076 | orchestrator | 00:01:41.324 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=4a90e8bb-8d76-4e2b-8007-8b59bd326993]
2025-07-23 00:01:41.329636 | orchestrator | 00:01:41.329 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-07-23 00:01:41.356238 | orchestrator | 00:01:41.355 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=60fac7fd-4ea9-459a-a020-76cc130b6845]
2025-07-23 00:01:41.368390 | orchestrator | 00:01:41.368 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-07-23 00:01:41.380318 | orchestrator | 00:01:41.379 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=b736a107-6399-4bb9-9134-de071aaccb97]
2025-07-23 00:01:41.391207 | orchestrator | 00:01:41.389 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-07-23 00:01:41.392900 | orchestrator | 00:01:41.392 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=be21d2d3-5feb-4237-8a28-0ff3c2fe396b]
2025-07-23 00:01:41.396174 | orchestrator | 00:01:41.395 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=6c590b62af148b4762e68fe579547b49175d38a6]
2025-07-23 00:01:41.400769 | orchestrator | 00:01:41.400 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=3979711b-a3f7-4470-bf6b-892c60fd0047]
2025-07-23 00:01:41.403621 | orchestrator | 00:01:41.403 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-07-23 00:01:41.408409 | orchestrator | 00:01:41.408 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-07-23 00:01:41.414136 | orchestrator | 00:01:41.413 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=4f1ca915a7b3678a8816a5985f1d34a3226f3b00]
2025-07-23 00:01:42.048041 | orchestrator | 00:01:42.047 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=bfef7294-fa9d-4a17-ab47-3db72beb2ec0]
2025-07-23 00:01:42.339301 | orchestrator | 00:01:42.339 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=48e0b0df-9cf0-4b39-8df5-03313d4dca85]
2025-07-23 00:01:42.346170 | orchestrator | 00:01:42.345 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-07-23 00:01:44.689969 | orchestrator | 00:01:44.689 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=cdb6ad58-82a8-4872-ab4b-e46549b15bb3]
2025-07-23 00:01:44.690160 | orchestrator | 00:01:44.689 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=6a3531b8-19fe-4b9b-8271-0c3c04179a68]
2025-07-23 00:01:44.721532 | orchestrator | 00:01:44.721 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=33a0d0ee-36fe-4c17-8a2d-666237961aac]
2025-07-23 00:01:44.747717 | orchestrator | 00:01:44.747 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=c6b90fdf-83d9-4c7d-996a-e9b28e44a796]
2025-07-23 00:01:44.749902 | orchestrator | 00:01:44.749 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=e532ba55-c38e-4621-a13c-29a7966308b6]
2025-07-23 00:01:44.781069 | orchestrator | 00:01:44.780 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=ab3fb645-9e23-4546-9669-684c5617b75f]
2025-07-23 00:01:45.588183 | orchestrator | 00:01:45.587 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 4s [id=775c3d01-5e96-4940-9507-eb8d2163e172]
2025-07-23 00:01:45.602185 | orchestrator | 00:01:45.601 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-07-23 00:01:45.603976 | orchestrator | 00:01:45.603 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-07-23 00:01:45.604634 | orchestrator | 00:01:45.604 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-07-23 00:01:45.928325 | orchestrator | 00:01:45.927 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=bf98adbd-f152-49bd-ae3e-ed15304d6b03]
2025-07-23 00:01:45.935026 | orchestrator | 00:01:45.934 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-07-23 00:01:45.935185 | orchestrator | 00:01:45.935 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-07-23 00:01:45.937176 | orchestrator | 00:01:45.937 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-07-23 00:01:45.941315 | orchestrator | 00:01:45.941 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-07-23 00:01:45.941366 | orchestrator | 00:01:45.941 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-07-23 00:01:45.942509 | orchestrator | 00:01:45.942 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-07-23 00:01:46.140863 | orchestrator | 00:01:46.140 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=fc6b660c-a2f1-4d4e-96a5-13454b0db054]
2025-07-23 00:01:46.155180 | orchestrator | 00:01:46.154 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-07-23 00:01:46.157358 | orchestrator | 00:01:46.155 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-07-23 00:01:46.157419 | orchestrator | 00:01:46.156 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-07-23 00:01:46.259246 | orchestrator | 00:01:46.258 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=6d353301-b4ac-493c-827c-c4563b07a1ae]
2025-07-23 00:01:46.274383 | orchestrator | 00:01:46.274 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-07-23 00:01:46.316475 | orchestrator | 00:01:46.316 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=14a0a921-aee3-45f9-ab85-2485cab50e5a]
2025-07-23 00:01:46.327971 | orchestrator | 00:01:46.327 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-07-23 00:01:46.460620 | orchestrator | 00:01:46.460 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=c4b068aa-2b91-43e6-82d7-20bf1b6463a1]
2025-07-23 00:01:46.467748 | orchestrator | 00:01:46.467 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-07-23 00:01:46.643630 | orchestrator | 00:01:46.643 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=5968a090-416e-4550-9f02-4b0768005665]
2025-07-23 00:01:46.661826 | orchestrator | 00:01:46.661 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-07-23 00:01:46.811931 | orchestrator | 00:01:46.811 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=5eacb3cc-bd72-4ed0-a1f3-32d3caed7769]
2025-07-23 00:01:46.828932 | orchestrator | 00:01:46.828 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=6f103ddf-b9ee-4846-8360-886daecdd6b6]
2025-07-23 00:01:46.829712 | orchestrator | 00:01:46.829 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-07-23 00:01:46.842777 | orchestrator | 00:01:46.842 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-07-23 00:01:46.958716 | orchestrator | 00:01:46.958 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=8f11eba0-337b-470b-9927-f4644db8953a]
2025-07-23 00:01:46.971378 | orchestrator | 00:01:46.971 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-07-23 00:01:46.988516 | orchestrator | 00:01:46.988 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=0ca415bc-f103-4c23-9fbe-4c80a75904c2]
2025-07-23 00:01:47.091882 | orchestrator | 00:01:47.091 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=5f8424f7-c0e2-41d3-bc1b-328dfd7c8efd]
2025-07-23 00:01:47.098234 | orchestrator | 00:01:47.098 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=fdcb1854-86ff-40c3-947a-083d295e2854]
2025-07-23 00:01:47.148636 | orchestrator | 00:01:47.148 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=b700a5a8-6365-4737-9f35-7a5c2215c263]
2025-07-23 00:01:47.482204 | orchestrator | 00:01:47.481 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=47a50888-449f-4aad-8b9c-3891cb1c2c00]
2025-07-23 00:01:47.554627 | orchestrator | 00:01:47.554 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=903884a7-9ef8-402f-a808-339c560181db]
2025-07-23 00:01:47.623784 | orchestrator | 00:01:47.623 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=a8bb3380-49d6-41ec-a8f9-b6a2d68929be]
2025-07-23 00:01:47.650796 | orchestrator | 00:01:47.650 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=33000d2c-354d-4535-b86b-d759e2817ce2]
2025-07-23 00:01:48.358759 | orchestrator | 00:01:48.358 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=c9014b1f-2ac2-4972-baa9-6e88bef822b8]
2025-07-23 00:01:48.365403 | orchestrator | 00:01:48.365 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-07-23 00:01:48.652115 | orchestrator | 00:01:48.651 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=817163fb-9b7d-4329-ad69-9c7a329c1ff3]
2025-07-23 00:01:48.679162 | orchestrator | 00:01:48.678 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-07-23 00:01:48.688408 | orchestrator | 00:01:48.688 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-07-23 00:01:48.691011 | orchestrator | 00:01:48.690 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-07-23 00:01:48.710413 | orchestrator | 00:01:48.709 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-07-23 00:01:48.710524 | orchestrator | 00:01:48.710 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-07-23 00:01:48.715081 | orchestrator | 00:01:48.714 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-07-23 00:01:50.332069 | orchestrator | 00:01:50.331 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=441244da-fc3e-4142-a9e9-0ac8dd1ffd68]
2025-07-23 00:01:50.342560 | orchestrator | 00:01:50.342 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-07-23 00:01:50.348151 | orchestrator | 00:01:50.348 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-07-23 00:01:50.348284 | orchestrator | 00:01:50.348 STDOUT terraform: local_file.inventory: Creating...
2025-07-23 00:01:50.352213 | orchestrator | 00:01:50.352 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=ca37ba2853cd747de235b8122f8a98245f5b6b15]
2025-07-23 00:01:50.352603 | orchestrator | 00:01:50.352 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=21c732b15a427bedc43cdd24080c4f564f714cdd]
2025-07-23 00:01:51.711087 | orchestrator | 00:01:51.710 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=441244da-fc3e-4142-a9e9-0ac8dd1ffd68]
2025-07-23 00:01:58.680991 | orchestrator | 00:01:58.680 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-07-23 00:01:58.693295 | orchestrator | 00:01:58.693 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-07-23 00:01:58.694619 | orchestrator | 00:01:58.694 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-07-23 00:01:58.711047 | orchestrator | 00:01:58.710 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-07-23 00:01:58.711234 | orchestrator | 00:01:58.710 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-07-23 00:01:58.718216 | orchestrator | 00:01:58.717 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-07-23 00:02:08.681128 | orchestrator | 00:02:08.680 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-07-23 00:02:08.694477 | orchestrator | 00:02:08.694 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-07-23 00:02:08.694575 | orchestrator | 00:02:08.694 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-07-23 00:02:08.711568 | orchestrator | 00:02:08.711 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-07-23 00:02:08.711824 | orchestrator | 00:02:08.711 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-07-23 00:02:08.719092 | orchestrator | 00:02:08.718 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-07-23 00:02:18.682806 | orchestrator | 00:02:18.682 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-07-23 00:02:18.694888 | orchestrator | 00:02:18.694 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-07-23 00:02:18.694957 | orchestrator | 00:02:18.694 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-07-23 00:02:18.712117 | orchestrator | 00:02:18.711 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-07-23 00:02:18.712194 | orchestrator | 00:02:18.712 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-07-23 00:02:18.719412 | orchestrator | 00:02:18.719 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-07-23 00:02:19.232733 | orchestrator | 00:02:19.232 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=ac0c0a1d-e8fb-4796-b203-bbcf402eacb0]
2025-07-23 00:02:19.365079 | orchestrator | 00:02:19.364 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=f139afef-3c57-4997-92d3-9a000076f81d]
2025-07-23 00:02:19.384029 | orchestrator | 00:02:19.383 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 30s [id=649e5c17-2f09-4b78-b628-6ce0f8be34d3]
2025-07-23 00:02:19.529386 | orchestrator | 00:02:19.528 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=baeb406e-5914-46ad-a560-022cb3f1e47b]
2025-07-23 00:02:28.698266 | orchestrator | 00:02:28.697 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2025-07-23 00:02:28.712375 | orchestrator | 00:02:28.712 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2025-07-23 00:02:29.669109 | orchestrator | 00:02:29.668 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=2b34fb85-e087-45e1-87c6-c27149ef39ee]
2025-07-23 00:02:29.803539 | orchestrator | 00:02:29.803 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=ec570297-1684-4ccf-99d9-df50fc8b2c41]
2025-07-23 00:02:29.823764 | orchestrator | 00:02:29.823 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-07-23 00:02:29.829854 | orchestrator | 00:02:29.828 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=3837907884867299050]
2025-07-23 00:02:29.829916 | orchestrator | 00:02:29.829 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-07-23 00:02:29.829943 | orchestrator | 00:02:29.829 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-07-23 00:02:29.829948 | orchestrator | 00:02:29.829 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-07-23 00:02:29.830515 | orchestrator | 00:02:29.830 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-07-23 00:02:29.847425 | orchestrator | 00:02:29.847 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-07-23 00:02:29.852430 | orchestrator | 00:02:29.852 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-07-23 00:02:29.862435 | orchestrator | 00:02:29.862 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-07-23 00:02:29.865224 | orchestrator | 00:02:29.865 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-07-23 00:02:29.866766 | orchestrator | 00:02:29.866 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-07-23 00:02:29.878262 | orchestrator | 00:02:29.878 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-07-23 00:02:33.237572 | orchestrator | 00:02:33.237 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=649e5c17-2f09-4b78-b628-6ce0f8be34d3/729cb6eb-a506-48fa-b7c2-0ab06cb3eb1b]
2025-07-23 00:02:33.239421 | orchestrator | 00:02:33.238 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=baeb406e-5914-46ad-a560-022cb3f1e47b/b736a107-6399-4bb9-9134-de071aaccb97]
2025-07-23 00:02:33.273191 | orchestrator | 00:02:33.272 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=649e5c17-2f09-4b78-b628-6ce0f8be34d3/59086376-bc1e-4740-baf5-6fec432996bb]
2025-07-23 00:02:33.275635 | orchestrator | 00:02:33.275 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=2b34fb85-e087-45e1-87c6-c27149ef39ee/4a90e8bb-8d76-4e2b-8007-8b59bd326993]
2025-07-23 00:02:33.308662 | orchestrator | 00:02:33.308 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=baeb406e-5914-46ad-a560-022cb3f1e47b/ab57e2a3-6ec4-4ecd-8292-234effa6f9fc]
2025-07-23 00:02:33.323174 | orchestrator | 00:02:33.322 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=2b34fb85-e087-45e1-87c6-c27149ef39ee/26dd0fc0-7c8e-41b2-a72b-af248b39188e]
2025-07-23 00:02:39.411138 | orchestrator | 00:02:39.410 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=baeb406e-5914-46ad-a560-022cb3f1e47b/60fac7fd-4ea9-459a-a020-76cc130b6845]
2025-07-23 00:02:39.433819 | orchestrator | 00:02:39.433 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=649e5c17-2f09-4b78-b628-6ce0f8be34d3/be21d2d3-5feb-4237-8a28-0ff3c2fe396b]
2025-07-23 00:02:39.442062 | orchestrator | 00:02:39.441 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=2b34fb85-e087-45e1-87c6-c27149ef39ee/3979711b-a3f7-4470-bf6b-892c60fd0047]
2025-07-23 00:02:39.880967 | orchestrator | 00:02:39.880 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-07-23 00:02:49.882061 | orchestrator | 00:02:49.881 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-07-23 00:02:50.324433 | orchestrator | 00:02:50.324 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=d042d08d-bad4-4872-a3c8-bcd138a7cad2]
2025-07-23 00:02:50.349667 | orchestrator | 00:02:50.349 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-07-23 00:02:50.349756 | orchestrator | 00:02:50.349 STDOUT terraform: Outputs:
2025-07-23 00:02:50.349782 | orchestrator | 00:02:50.349 STDOUT terraform: manager_address =
2025-07-23 00:02:50.349795 | orchestrator | 00:02:50.349 STDOUT terraform: private_key =
2025-07-23 00:02:50.609760 | orchestrator | ok: Runtime: 0:01:19.378146
2025-07-23 00:02:50.631692 |
2025-07-23 00:02:50.631788 | TASK [Create infrastructure (stable)]
2025-07-23 00:02:51.160694 | orchestrator | skipping: Conditional result was False
2025-07-23 00:02:51.178963 |
2025-07-23 00:02:51.179116 | TASK [Fetch manager address]
2025-07-23 00:02:51.612624 | orchestrator | ok
2025-07-23 00:02:51.622530 |
2025-07-23 00:02:51.622637 | TASK [Set manager_host address]
2025-07-23 00:02:51.689563 | orchestrator | ok
2025-07-23 00:02:51.698119 |
2025-07-23 00:02:51.698220 | LOOP [Update ansible collections]
2025-07-23 00:02:52.825954 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-07-23 00:02:52.826375 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-23 00:02:52.826453 | orchestrator | Starting galaxy collection install process
2025-07-23 00:02:52.826503 | orchestrator | Process install dependency map
2025-07-23 00:02:52.826546 | orchestrator | Starting collection install process
2025-07-23 00:02:52.826583 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2025-07-23 00:02:52.826638 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2025-07-23 00:02:52.826683 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-07-23 00:02:52.826762 | orchestrator | ok: Item: commons Runtime: 0:00:00.819865
2025-07-23 00:02:54.309800 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-07-23 00:02:54.309938 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-23 00:02:54.309983 | orchestrator | Starting galaxy collection install process
2025-07-23 00:02:54.310016 | orchestrator | Process install dependency map
2025-07-23 00:02:54.310047 | orchestrator | Starting collection install process
2025-07-23 00:02:54.310076 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2025-07-23 00:02:54.310104 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2025-07-23 00:02:54.310131 | orchestrator | osism.services:999.0.0 was installed successfully
2025-07-23 00:02:54.310173 | orchestrator | ok: Item: services Runtime: 0:00:01.222269
2025-07-23 00:02:54.328080 |
2025-07-23 00:02:54.328199 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-07-23 00:03:04.878690 | orchestrator | ok
2025-07-23 00:03:04.886888 |
2025-07-23 00:03:04.887006 | TASK [Wait a little longer for the manager so that 
everything is ready] 2025-07-23 00:04:04.933069 | orchestrator | ok 2025-07-23 00:04:04.943630 | 2025-07-23 00:04:04.943762 | TASK [Fetch manager ssh hostkey] 2025-07-23 00:04:06.525091 | orchestrator | Output suppressed because no_log was given 2025-07-23 00:04:06.540719 | 2025-07-23 00:04:06.540880 | TASK [Get ssh keypair from terraform environment] 2025-07-23 00:04:07.080607 | orchestrator | ok: Runtime: 0:00:00.008421 2025-07-23 00:04:07.095847 | 2025-07-23 00:04:07.096015 | TASK [Point out that the following task takes some time and does not give any output] 2025-07-23 00:04:07.145728 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-07-23 00:04:07.156817 | 2025-07-23 00:04:07.156954 | TASK [Run manager part 0] 2025-07-23 00:04:08.180225 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-23 00:04:08.232277 | orchestrator | 2025-07-23 00:04:08.232418 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-07-23 00:04:08.232430 | orchestrator | 2025-07-23 00:04:08.232461 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-07-23 00:04:10.042574 | orchestrator | ok: [testbed-manager] 2025-07-23 00:04:10.042619 | orchestrator | 2025-07-23 00:04:10.042645 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-07-23 00:04:10.042657 | orchestrator | 2025-07-23 00:04:10.042670 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-23 00:04:11.966051 | orchestrator | ok: [testbed-manager] 2025-07-23 00:04:11.966103 | orchestrator | 2025-07-23 00:04:11.966113 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-07-23 00:04:12.666113 | 
orchestrator | ok: [testbed-manager] 2025-07-23 00:04:12.666157 | orchestrator | 2025-07-23 00:04:12.666164 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-07-23 00:04:12.714489 | orchestrator | skipping: [testbed-manager] 2025-07-23 00:04:12.714524 | orchestrator | 2025-07-23 00:04:12.714533 | orchestrator | TASK [Update package cache] **************************************************** 2025-07-23 00:04:12.751424 | orchestrator | skipping: [testbed-manager] 2025-07-23 00:04:12.751439 | orchestrator | 2025-07-23 00:04:12.751463 | orchestrator | TASK [Install required packages] *********************************************** 2025-07-23 00:04:12.782910 | orchestrator | skipping: [testbed-manager] 2025-07-23 00:04:12.782934 | orchestrator | 2025-07-23 00:04:12.782940 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-07-23 00:04:12.809221 | orchestrator | skipping: [testbed-manager] 2025-07-23 00:04:12.809238 | orchestrator | 2025-07-23 00:04:12.809243 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-07-23 00:04:12.832607 | orchestrator | skipping: [testbed-manager] 2025-07-23 00:04:12.832626 | orchestrator | 2025-07-23 00:04:12.832633 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-07-23 00:04:12.859629 | orchestrator | skipping: [testbed-manager] 2025-07-23 00:04:12.859658 | orchestrator | 2025-07-23 00:04:12.859664 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-07-23 00:04:12.892164 | orchestrator | skipping: [testbed-manager] 2025-07-23 00:04:12.892196 | orchestrator | 2025-07-23 00:04:12.892204 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-07-23 00:04:13.682287 | orchestrator | changed: [testbed-manager] 2025-07-23 00:04:13.682328 | 
orchestrator | 2025-07-23 00:04:13.682334 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-07-23 00:06:33.227881 | orchestrator | changed: [testbed-manager] 2025-07-23 00:06:33.227961 | orchestrator | 2025-07-23 00:06:33.227979 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-07-23 00:07:52.005433 | orchestrator | changed: [testbed-manager] 2025-07-23 00:07:52.005562 | orchestrator | 2025-07-23 00:07:52.005581 | orchestrator | TASK [Install required packages] *********************************************** 2025-07-23 00:08:14.822426 | orchestrator | changed: [testbed-manager] 2025-07-23 00:08:14.822551 | orchestrator | 2025-07-23 00:08:14.822572 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-07-23 00:08:23.645855 | orchestrator | changed: [testbed-manager] 2025-07-23 00:08:23.645971 | orchestrator | 2025-07-23 00:08:23.645986 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-07-23 00:08:23.695719 | orchestrator | ok: [testbed-manager] 2025-07-23 00:08:23.695778 | orchestrator | 2025-07-23 00:08:23.695786 | orchestrator | TASK [Get current user] ******************************************************** 2025-07-23 00:08:24.541925 | orchestrator | ok: [testbed-manager] 2025-07-23 00:08:24.541967 | orchestrator | 2025-07-23 00:08:24.541977 | orchestrator | TASK [Create venv directory] *************************************************** 2025-07-23 00:08:25.306605 | orchestrator | changed: [testbed-manager] 2025-07-23 00:08:25.306648 | orchestrator | 2025-07-23 00:08:25.306657 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-07-23 00:08:31.781702 | orchestrator | changed: [testbed-manager] 2025-07-23 00:08:31.781801 | orchestrator | 2025-07-23 00:08:31.781843 | orchestrator | TASK [Install ansible-core in 
venv] ******************************************** 2025-07-23 00:08:37.962430 | orchestrator | changed: [testbed-manager] 2025-07-23 00:08:37.962520 | orchestrator | 2025-07-23 00:08:37.962537 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-07-23 00:08:40.542633 | orchestrator | changed: [testbed-manager] 2025-07-23 00:08:40.542722 | orchestrator | 2025-07-23 00:08:40.542740 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-07-23 00:08:42.329984 | orchestrator | changed: [testbed-manager] 2025-07-23 00:08:42.330227 | orchestrator | 2025-07-23 00:08:42.330261 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-07-23 00:08:43.443870 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-07-23 00:08:43.443920 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-07-23 00:08:43.443928 | orchestrator | 2025-07-23 00:08:43.443936 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-07-23 00:08:43.495105 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-07-23 00:08:43.495223 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-07-23 00:08:43.495243 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-07-23 00:08:43.495258 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-07-23 00:08:48.190138 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-07-23 00:08:48.190235 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-07-23 00:08:48.190252 | orchestrator | 2025-07-23 00:08:48.190266 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-07-23 00:08:48.768149 | orchestrator | changed: [testbed-manager] 2025-07-23 00:08:48.768273 | orchestrator | 2025-07-23 00:08:48.768290 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-07-23 00:09:09.360816 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-07-23 00:09:09.360912 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-07-23 00:09:09.360931 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-07-23 00:09:09.360944 | orchestrator | 2025-07-23 00:09:09.360957 | orchestrator | TASK [Install local collections] *********************************************** 2025-07-23 00:09:11.773504 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-07-23 00:09:11.773604 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-07-23 00:09:11.773619 | orchestrator | 2025-07-23 00:09:11.773633 | orchestrator | PLAY [Create operator user] **************************************************** 2025-07-23 00:09:11.773645 | orchestrator | 2025-07-23 00:09:11.773657 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-23 00:09:13.152633 | orchestrator | ok: [testbed-manager] 2025-07-23 00:09:13.152727 | orchestrator | 2025-07-23 00:09:13.152746 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-07-23 00:09:13.203494 | orchestrator | ok: [testbed-manager] 2025-07-23 00:09:13.203557 | 
orchestrator | 2025-07-23 00:09:13.203567 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-07-23 00:09:13.271956 | orchestrator | ok: [testbed-manager] 2025-07-23 00:09:13.272010 | orchestrator | 2025-07-23 00:09:13.272017 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-07-23 00:09:14.073156 | orchestrator | changed: [testbed-manager] 2025-07-23 00:09:14.073245 | orchestrator | 2025-07-23 00:09:14.073261 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-07-23 00:09:14.893700 | orchestrator | changed: [testbed-manager] 2025-07-23 00:09:14.894650 | orchestrator | 2025-07-23 00:09:14.894710 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-07-23 00:09:16.363568 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-07-23 00:09:16.363657 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-07-23 00:09:16.363673 | orchestrator | 2025-07-23 00:09:16.363701 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-07-23 00:09:17.817138 | orchestrator | changed: [testbed-manager] 2025-07-23 00:09:17.817230 | orchestrator | 2025-07-23 00:09:17.817246 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-07-23 00:09:19.585980 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-07-23 00:09:19.586126 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-07-23 00:09:19.586140 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-07-23 00:09:19.586148 | orchestrator | 2025-07-23 00:09:19.586156 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-07-23 00:09:19.647758 | orchestrator | skipping: 
[testbed-manager] 2025-07-23 00:09:19.647802 | orchestrator | 2025-07-23 00:09:19.647811 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-07-23 00:09:20.236556 | orchestrator | changed: [testbed-manager] 2025-07-23 00:09:20.237120 | orchestrator | 2025-07-23 00:09:20.237144 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-07-23 00:09:20.311478 | orchestrator | skipping: [testbed-manager] 2025-07-23 00:09:20.311544 | orchestrator | 2025-07-23 00:09:20.311558 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-07-23 00:09:21.187022 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-23 00:09:21.187114 | orchestrator | changed: [testbed-manager] 2025-07-23 00:09:21.187278 | orchestrator | 2025-07-23 00:09:21.187297 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-07-23 00:09:21.222784 | orchestrator | skipping: [testbed-manager] 2025-07-23 00:09:21.222869 | orchestrator | 2025-07-23 00:09:21.222902 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-07-23 00:09:21.256230 | orchestrator | skipping: [testbed-manager] 2025-07-23 00:09:21.256304 | orchestrator | 2025-07-23 00:09:21.256320 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-07-23 00:09:21.286211 | orchestrator | skipping: [testbed-manager] 2025-07-23 00:09:21.286280 | orchestrator | 2025-07-23 00:09:21.286296 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-07-23 00:09:21.328839 | orchestrator | skipping: [testbed-manager] 2025-07-23 00:09:21.328891 | orchestrator | 2025-07-23 00:09:21.328903 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-07-23 00:09:22.085221 | orchestrator 
| ok: [testbed-manager] 2025-07-23 00:09:22.085278 | orchestrator | 2025-07-23 00:09:22.085289 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-07-23 00:09:22.085298 | orchestrator | 2025-07-23 00:09:22.085306 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-23 00:09:23.524544 | orchestrator | ok: [testbed-manager] 2025-07-23 00:09:23.524577 | orchestrator | 2025-07-23 00:09:23.524582 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-07-23 00:09:24.517776 | orchestrator | changed: [testbed-manager] 2025-07-23 00:09:24.517869 | orchestrator | 2025-07-23 00:09:24.517887 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-23 00:09:24.517901 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-07-23 00:09:24.517913 | orchestrator | 2025-07-23 00:09:24.883039 | orchestrator | ok: Runtime: 0:05:17.171542 2025-07-23 00:09:24.900412 | 2025-07-23 00:09:24.900568 | TASK [Point out that the log in on the manager is now possible] 2025-07-23 00:09:24.950474 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-07-23 00:09:24.960924 | 2025-07-23 00:09:24.961070 | TASK [Point out that the following task takes some time and does not give any output] 2025-07-23 00:09:25.011700 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-07-23 00:09:25.021905 | 2025-07-23 00:09:25.022050 | TASK [Run manager part 1 + 2] 2025-07-23 00:09:25.905020 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-23 00:09:25.958511 | orchestrator | 2025-07-23 00:09:25.958562 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-07-23 00:09:25.958569 | orchestrator | 2025-07-23 00:09:25.958582 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-23 00:09:28.564676 | orchestrator | ok: [testbed-manager] 2025-07-23 00:09:28.564733 | orchestrator | 2025-07-23 00:09:28.564763 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-07-23 00:09:28.599517 | orchestrator | skipping: [testbed-manager] 2025-07-23 00:09:28.599574 | orchestrator | 2025-07-23 00:09:28.599585 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-07-23 00:09:28.647306 | orchestrator | ok: [testbed-manager] 2025-07-23 00:09:28.647360 | orchestrator | 2025-07-23 00:09:28.647375 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-07-23 00:09:28.687057 | orchestrator | ok: [testbed-manager] 2025-07-23 00:09:28.687114 | orchestrator | 2025-07-23 00:09:28.687124 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-07-23 00:09:28.785968 | orchestrator | ok: [testbed-manager] 2025-07-23 00:09:28.786039 | orchestrator | 2025-07-23 00:09:28.786050 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-23 00:09:28.847602 | orchestrator | ok: [testbed-manager] 2025-07-23 00:09:28.847660 | orchestrator | 2025-07-23 00:09:28.847670 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-23 00:09:28.892940 | 
orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-07-23 00:09:28.892980 | orchestrator | 2025-07-23 00:09:28.892985 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-07-23 00:09:29.619814 | orchestrator | ok: [testbed-manager] 2025-07-23 00:09:29.619883 | orchestrator | 2025-07-23 00:09:29.619900 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-23 00:09:29.672672 | orchestrator | skipping: [testbed-manager] 2025-07-23 00:09:29.672727 | orchestrator | 2025-07-23 00:09:29.672735 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-23 00:09:31.152167 | orchestrator | changed: [testbed-manager] 2025-07-23 00:09:31.152224 | orchestrator | 2025-07-23 00:09:31.152235 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-07-23 00:09:31.735626 | orchestrator | ok: [testbed-manager] 2025-07-23 00:09:31.735733 | orchestrator | 2025-07-23 00:09:31.735755 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-23 00:09:32.909887 | orchestrator | changed: [testbed-manager] 2025-07-23 00:09:32.909984 | orchestrator | 2025-07-23 00:09:32.910003 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-07-23 00:09:50.053796 | orchestrator | changed: [testbed-manager] 2025-07-23 00:09:50.053862 | orchestrator | 2025-07-23 00:09:50.053876 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-07-23 00:09:50.747540 | orchestrator | ok: [testbed-manager] 2025-07-23 00:09:50.747688 | orchestrator | 2025-07-23 00:09:50.747706 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-07-23 00:09:50.805275 | orchestrator | skipping: [testbed-manager] 2025-07-23 00:09:50.805369 | orchestrator | 2025-07-23 00:09:50.805386 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-07-23 00:09:51.781825 | orchestrator | changed: [testbed-manager] 2025-07-23 00:09:51.781913 | orchestrator | 2025-07-23 00:09:51.781928 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-07-23 00:09:52.796435 | orchestrator | changed: [testbed-manager] 2025-07-23 00:09:52.796553 | orchestrator | 2025-07-23 00:09:52.796570 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-07-23 00:09:53.380241 | orchestrator | changed: [testbed-manager] 2025-07-23 00:09:53.380316 | orchestrator | 2025-07-23 00:09:53.380328 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-07-23 00:09:53.413859 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-07-23 00:09:53.413924 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-07-23 00:09:53.413930 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-07-23 00:09:53.413935 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-07-23 00:09:56.268750 | orchestrator | changed: [testbed-manager] 2025-07-23 00:09:56.268852 | orchestrator | 2025-07-23 00:09:56.268868 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-07-23 00:10:06.151222 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-07-23 00:10:06.151327 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-07-23 00:10:06.151345 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-07-23 00:10:06.151358 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-07-23 00:10:06.151377 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-07-23 00:10:06.151388 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-07-23 00:10:06.151400 | orchestrator | 2025-07-23 00:10:06.151412 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-07-23 00:10:07.234439 | orchestrator | changed: [testbed-manager] 2025-07-23 00:10:07.234549 | orchestrator | 2025-07-23 00:10:07.234567 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-07-23 00:10:07.275781 | orchestrator | skipping: [testbed-manager] 2025-07-23 00:10:07.275886 | orchestrator | 2025-07-23 00:10:07.275912 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-07-23 00:10:10.285115 | orchestrator | changed: [testbed-manager] 2025-07-23 00:10:10.285692 | orchestrator | 2025-07-23 00:10:10.285716 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-07-23 00:10:10.322902 | orchestrator | skipping: [testbed-manager] 2025-07-23 00:10:10.322972 | orchestrator | 2025-07-23 00:10:10.322984 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-07-23 00:11:53.416597 | orchestrator | changed: [testbed-manager] 2025-07-23 
00:11:53.416695 | orchestrator | 2025-07-23 00:11:53.416716 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-07-23 00:11:54.605980 | orchestrator | ok: [testbed-manager] 2025-07-23 00:11:54.606093 | orchestrator | 2025-07-23 00:11:54.606112 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-23 00:11:54.606126 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-07-23 00:11:54.606138 | orchestrator | 2025-07-23 00:11:55.173635 | orchestrator | ok: Runtime: 0:02:29.353092 2025-07-23 00:11:55.191020 | 2025-07-23 00:11:55.191164 | TASK [Reboot manager] 2025-07-23 00:11:56.725911 | orchestrator | ok: Runtime: 0:00:01.018659 2025-07-23 00:11:56.742689 | 2025-07-23 00:11:56.742869 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-07-23 00:12:13.142277 | orchestrator | ok 2025-07-23 00:12:13.150306 | 2025-07-23 00:12:13.150423 | TASK [Wait a little longer for the manager so that everything is ready] 2025-07-23 00:13:13.191174 | orchestrator | ok 2025-07-23 00:13:13.202435 | 2025-07-23 00:13:13.202583 | TASK [Deploy manager + bootstrap nodes] 2025-07-23 00:13:16.229987 | orchestrator | 2025-07-23 00:13:16.230136 | orchestrator | # DEPLOY MANAGER 2025-07-23 00:13:16.230152 | orchestrator | 2025-07-23 00:13:16.230184 | orchestrator | + set -e 2025-07-23 00:13:16.230193 | orchestrator | + echo 2025-07-23 00:13:16.230202 | orchestrator | + echo '# DEPLOY MANAGER' 2025-07-23 00:13:16.230212 | orchestrator | + echo 2025-07-23 00:13:16.230241 | orchestrator | + cat /opt/manager-vars.sh 2025-07-23 00:13:16.234174 | orchestrator | export NUMBER_OF_NODES=6 2025-07-23 00:13:16.234223 | orchestrator | 2025-07-23 00:13:16.234231 | orchestrator | export CEPH_VERSION=reef 2025-07-23 00:13:16.234239 | orchestrator | export CONFIGURATION_VERSION=main 2025-07-23 00:13:16.234247 | orchestrator 
| export MANAGER_VERSION=latest 2025-07-23 00:13:16.234264 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-07-23 00:13:16.234270 | orchestrator | 2025-07-23 00:13:16.234292 | orchestrator | export ARA=false 2025-07-23 00:13:16.234306 | orchestrator | export DEPLOY_MODE=manager 2025-07-23 00:13:16.234317 | orchestrator | export TEMPEST=true 2025-07-23 00:13:16.234323 | orchestrator | export IS_ZUUL=true 2025-07-23 00:13:16.234330 | orchestrator | 2025-07-23 00:13:16.234340 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.166 2025-07-23 00:13:16.234347 | orchestrator | export EXTERNAL_API=false 2025-07-23 00:13:16.234354 | orchestrator | 2025-07-23 00:13:16.234360 | orchestrator | export IMAGE_USER=ubuntu 2025-07-23 00:13:16.234369 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-07-23 00:13:16.234375 | orchestrator | 2025-07-23 00:13:16.234381 | orchestrator | export CEPH_STACK=ceph-ansible 2025-07-23 00:13:16.234433 | orchestrator | 2025-07-23 00:13:16.234451 | orchestrator | + echo 2025-07-23 00:13:16.234459 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-23 00:13:16.235618 | orchestrator | ++ export INTERACTIVE=false 2025-07-23 00:13:16.235694 | orchestrator | ++ INTERACTIVE=false 2025-07-23 00:13:16.235711 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-23 00:13:16.235726 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-23 00:13:16.235910 | orchestrator | + source /opt/manager-vars.sh 2025-07-23 00:13:16.235927 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-23 00:13:16.235939 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-23 00:13:16.235949 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-23 00:13:16.235959 | orchestrator | ++ CEPH_VERSION=reef 2025-07-23 00:13:16.235969 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-23 00:13:16.235980 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-23 00:13:16.235991 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-23 00:13:16.236001 | 
orchestrator | ++ MANAGER_VERSION=latest
2025-07-23 00:13:16.236010 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-23 00:13:16.236034 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-23 00:13:16.236045 | orchestrator | ++ export ARA=false
2025-07-23 00:13:16.236055 | orchestrator | ++ ARA=false
2025-07-23 00:13:16.236065 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-23 00:13:16.236075 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-23 00:13:16.236084 | orchestrator | ++ export TEMPEST=true
2025-07-23 00:13:16.236094 | orchestrator | ++ TEMPEST=true
2025-07-23 00:13:16.236104 | orchestrator | ++ export IS_ZUUL=true
2025-07-23 00:13:16.236113 | orchestrator | ++ IS_ZUUL=true
2025-07-23 00:13:16.236123 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.166
2025-07-23 00:13:16.236133 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.166
2025-07-23 00:13:16.236143 | orchestrator | ++ export EXTERNAL_API=false
2025-07-23 00:13:16.236152 | orchestrator | ++ EXTERNAL_API=false
2025-07-23 00:13:16.236162 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-23 00:13:16.236172 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-23 00:13:16.236182 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-23 00:13:16.236191 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-23 00:13:16.236201 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-23 00:13:16.236211 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-23 00:13:16.236263 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-07-23 00:13:16.294807 | orchestrator | + docker version
2025-07-23 00:13:16.588925 | orchestrator | Client: Docker Engine - Community
2025-07-23 00:13:16.589031 | orchestrator | Version: 27.5.1
2025-07-23 00:13:16.589047 | orchestrator | API version: 1.47
2025-07-23 00:13:16.589061 | orchestrator | Go version: go1.22.11
2025-07-23 00:13:16.589072 | orchestrator | Git commit: 9f9e405
2025-07-23 00:13:16.589084 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-23 00:13:16.589097 | orchestrator | OS/Arch: linux/amd64
2025-07-23 00:13:16.589108 | orchestrator | Context: default
2025-07-23 00:13:16.589119 | orchestrator |
2025-07-23 00:13:16.589131 | orchestrator | Server: Docker Engine - Community
2025-07-23 00:13:16.589142 | orchestrator | Engine:
2025-07-23 00:13:16.589160 | orchestrator | Version: 27.5.1
2025-07-23 00:13:16.589172 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-07-23 00:13:16.589212 | orchestrator | Go version: go1.22.11
2025-07-23 00:13:16.589224 | orchestrator | Git commit: 4c9b3b0
2025-07-23 00:13:16.589236 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-23 00:13:16.589247 | orchestrator | OS/Arch: linux/amd64
2025-07-23 00:13:16.589258 | orchestrator | Experimental: false
2025-07-23 00:13:16.589269 | orchestrator | containerd:
2025-07-23 00:13:16.589362 | orchestrator | Version: 1.7.27
2025-07-23 00:13:16.589378 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-07-23 00:13:16.589390 | orchestrator | runc:
2025-07-23 00:13:16.589402 | orchestrator | Version: 1.2.5
2025-07-23 00:13:16.589413 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-07-23 00:13:16.589425 | orchestrator | docker-init:
2025-07-23 00:13:16.589436 | orchestrator | Version: 0.19.0
2025-07-23 00:13:16.589448 | orchestrator | GitCommit: de40ad0
2025-07-23 00:13:16.592815 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-07-23 00:13:16.603097 | orchestrator | + set -e
2025-07-23 00:13:16.603153 | orchestrator | + source /opt/manager-vars.sh
2025-07-23 00:13:16.603166 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-23 00:13:16.603178 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-23 00:13:16.603189 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-23 00:13:16.603200 | orchestrator | ++ CEPH_VERSION=reef
2025-07-23 00:13:16.603212 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-23 00:13:16.603223 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-23 00:13:16.603234 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-23 00:13:16.603245 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-23 00:13:16.603256 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-23 00:13:16.603267 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-23 00:13:16.603278 | orchestrator | ++ export ARA=false
2025-07-23 00:13:16.603289 | orchestrator | ++ ARA=false
2025-07-23 00:13:16.603299 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-23 00:13:16.603311 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-23 00:13:16.603321 | orchestrator | ++ export TEMPEST=true
2025-07-23 00:13:16.603332 | orchestrator | ++ TEMPEST=true
2025-07-23 00:13:16.603370 | orchestrator | ++ export IS_ZUUL=true
2025-07-23 00:13:16.603382 | orchestrator | ++ IS_ZUUL=true
2025-07-23 00:13:16.603393 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.166
2025-07-23 00:13:16.603404 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.166
2025-07-23 00:13:16.603415 | orchestrator | ++ export EXTERNAL_API=false
2025-07-23 00:13:16.603426 | orchestrator | ++ EXTERNAL_API=false
2025-07-23 00:13:16.603437 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-23 00:13:16.603448 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-23 00:13:16.603458 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-23 00:13:16.603469 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-23 00:13:16.603685 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-23 00:13:16.603716 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-23 00:13:16.603735 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-23 00:13:16.603753 | orchestrator | ++ export INTERACTIVE=false
2025-07-23 00:13:16.603772 | orchestrator | ++ INTERACTIVE=false
2025-07-23 00:13:16.603790 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-23 00:13:16.603815 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-23 00:13:16.603834 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-23 00:13:16.603970 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-07-23 00:13:16.603997 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-07-23 00:13:16.609282 | orchestrator | + set -e
2025-07-23 00:13:16.609313 | orchestrator | + VERSION=reef
2025-07-23 00:13:16.610151 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-07-23 00:13:16.616176 | orchestrator | + [[ -n ceph_version: reef ]]
2025-07-23 00:13:16.616210 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-07-23 00:13:16.621521 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-07-23 00:13:16.626438 | orchestrator | + set -e
2025-07-23 00:13:16.626472 | orchestrator | + VERSION=2024.2
2025-07-23 00:13:16.627456 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-07-23 00:13:16.631825 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-07-23 00:13:16.631859 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-07-23 00:13:16.636376 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-07-23 00:13:16.637374 | orchestrator | ++ semver latest 7.0.0
2025-07-23 00:13:16.701804 | orchestrator | + [[ -1 -ge 0 ]]
2025-07-23 00:13:16.701902 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-07-23 00:13:16.701916 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-07-23 00:13:16.701929 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-07-23 00:13:16.799598 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-23 00:13:16.802809 | orchestrator | + source /opt/venv/bin/activate
2025-07-23 00:13:16.803814 | orchestrator | ++
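The set-ceph-version.sh and set-openstack-version.sh steps traced above follow a simple grep-then-sed pattern for pinning a `key: value` line in a configuration file. A minimal sketch of that pattern, using a throwaway file in place of /opt/configuration/environments/manager/configuration.yml; the script body is reconstructed from the xtrace output, not copied from the repository:

```shell
#!/usr/bin/env bash
# Reconstruction of the set-*-version.sh idiom seen in the trace:
# rewrite an existing "<key>: <value>" line in place with sed.
set -e

CONF="$(mktemp)"                # stand-in for environments/manager/configuration.yml
printf 'ceph_version: quincy\n' > "$CONF"

VERSION=reef
# Guard mirrors the traced "[[ -n ceph_version: reef ]]":
# only rewrite the line if the key already exists in the file.
if [[ -n "$(grep '^ceph_version:' "$CONF")" ]]; then
    sed -i "s/ceph_version: .*/ceph_version: ${VERSION}/g" "$CONF"
fi

cat "$CONF"   # ceph_version: reef
```

Note that `sed -i` without a backup suffix is GNU sed syntax, which matches the Debian/Ubuntu hosts used in this job.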
deactivate nondestructive
2025-07-23 00:13:16.803837 | orchestrator | ++ '[' -n '' ']'
2025-07-23 00:13:16.803844 | orchestrator | ++ '[' -n '' ']'
2025-07-23 00:13:16.803851 | orchestrator | ++ hash -r
2025-07-23 00:13:16.803857 | orchestrator | ++ '[' -n '' ']'
2025-07-23 00:13:16.803864 | orchestrator | ++ unset VIRTUAL_ENV
2025-07-23 00:13:16.803871 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-07-23 00:13:16.804004 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-07-23 00:13:16.804017 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-07-23 00:13:16.804025 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-07-23 00:13:16.804032 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-07-23 00:13:16.804039 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-07-23 00:13:16.804046 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-23 00:13:16.804053 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-23 00:13:16.804060 | orchestrator | ++ export PATH
2025-07-23 00:13:16.804066 | orchestrator | ++ '[' -n '' ']'
2025-07-23 00:13:16.804131 | orchestrator | ++ '[' -z '' ']'
2025-07-23 00:13:16.804139 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-07-23 00:13:16.804146 | orchestrator | ++ PS1='(venv) '
2025-07-23 00:13:16.804153 | orchestrator | ++ export PS1
2025-07-23 00:13:16.804159 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-07-23 00:13:16.804166 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-07-23 00:13:16.804172 | orchestrator | ++ hash -r
2025-07-23 00:13:16.804303 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-07-23 00:13:18.143442 | orchestrator |
2025-07-23 00:13:18.143558 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-07-23 00:13:18.143568 | orchestrator |
2025-07-23 00:13:18.143573 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-23 00:13:18.727496 | orchestrator | ok: [testbed-manager]
2025-07-23 00:13:18.727604 | orchestrator |
2025-07-23 00:13:18.727612 | orchestrator | TASK [Copy fact files] *********************************************************
2025-07-23 00:13:19.731536 | orchestrator | changed: [testbed-manager]
2025-07-23 00:13:19.731609 | orchestrator |
2025-07-23 00:13:19.731616 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-07-23 00:13:19.731622 | orchestrator |
2025-07-23 00:13:19.731626 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-23 00:13:22.224186 | orchestrator | ok: [testbed-manager]
2025-07-23 00:13:22.224264 | orchestrator |
2025-07-23 00:13:22.224271 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-07-23 00:13:22.284676 | orchestrator | ok: [testbed-manager]
2025-07-23 00:13:22.284730 | orchestrator |
2025-07-23 00:13:22.284737 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-07-23 00:13:22.758670 | orchestrator | changed: [testbed-manager]
2025-07-23 00:13:22.758739 | orchestrator |
2025-07-23 00:13:22.758747 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-07-23 00:13:22.797322 | orchestrator | skipping: [testbed-manager]
2025-07-23 00:13:22.797353 | orchestrator |
2025-07-23 00:13:22.797359 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-07-23 00:13:23.150571 | orchestrator | changed: [testbed-manager]
2025-07-23 00:13:23.150661 | orchestrator |
2025-07-23 00:13:23.150674 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-07-23 00:13:23.206492 | orchestrator | skipping: [testbed-manager]
2025-07-23 00:13:23.206595 | orchestrator |
2025-07-23 00:13:23.206604 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-07-23 00:13:23.537398 | orchestrator | ok: [testbed-manager]
2025-07-23 00:13:23.537470 | orchestrator |
2025-07-23 00:13:23.537476 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-07-23 00:13:23.651377 | orchestrator | skipping: [testbed-manager]
2025-07-23 00:13:23.651457 | orchestrator |
2025-07-23 00:13:23.651467 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-07-23 00:13:23.651475 | orchestrator |
2025-07-23 00:13:23.651483 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-23 00:13:25.532656 | orchestrator | ok: [testbed-manager]
2025-07-23 00:13:25.532783 | orchestrator |
2025-07-23 00:13:25.532812 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-07-23 00:13:25.655196 | orchestrator | included: osism.services.traefik for testbed-manager
2025-07-23 00:13:25.655302 | orchestrator |
2025-07-23 00:13:25.655319 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-07-23 00:13:25.716027 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-07-23 00:13:25.716148 | orchestrator |
2025-07-23 00:13:25.716176 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-07-23 00:13:26.827912 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-07-23 00:13:26.828006 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-07-23 00:13:26.828018 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-07-23 00:13:26.828026 | orchestrator |
2025-07-23 00:13:26.828035 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-07-23 00:13:28.667746 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-07-23 00:13:28.667856 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-07-23 00:13:28.667875 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-07-23 00:13:28.667888 | orchestrator |
2025-07-23 00:13:28.667901 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-07-23 00:13:29.343246 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-23 00:13:29.343351 | orchestrator | changed: [testbed-manager]
2025-07-23 00:13:29.343366 | orchestrator |
2025-07-23 00:13:29.343378 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-07-23 00:13:30.020933 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-23 00:13:30.021037 | orchestrator | changed: [testbed-manager]
2025-07-23 00:13:30.021054 | orchestrator |
2025-07-23 00:13:30.021067 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-07-23 00:13:30.087645 | orchestrator | skipping: [testbed-manager]
2025-07-23 00:13:30.087742 | orchestrator |
2025-07-23 00:13:30.087757 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-07-23 00:13:30.467920 | orchestrator | ok: [testbed-manager]
2025-07-23 00:13:30.468027 | orchestrator |
2025-07-23 00:13:30.468044 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-07-23 00:13:30.547050 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-07-23 00:13:30.547150 | orchestrator |
2025-07-23 00:13:30.547167 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-07-23 00:13:31.648670 | orchestrator | changed: [testbed-manager]
2025-07-23 00:13:31.648791 | orchestrator |
2025-07-23 00:13:31.648810 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-07-23 00:13:32.520989 | orchestrator | changed: [testbed-manager]
2025-07-23 00:13:32.521073 | orchestrator |
2025-07-23 00:13:32.521081 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-07-23 00:13:45.619484 | orchestrator | changed: [testbed-manager]
2025-07-23 00:13:45.619680 | orchestrator |
2025-07-23 00:13:45.619710 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-07-23 00:13:45.660784 | orchestrator | skipping: [testbed-manager]
2025-07-23 00:13:45.660879 | orchestrator |
2025-07-23 00:13:45.660896 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-07-23 00:13:45.660909 | orchestrator |
2025-07-23 00:13:45.660921 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-23 00:13:48.650320 | orchestrator | ok: [testbed-manager]
2025-07-23 00:13:48.650458 | orchestrator |
2025-07-23 00:13:48.650513 | orchestrator | TASK [Apply manager role] ******************************************************
2025-07-23 00:13:48.755149 | orchestrator | included: osism.services.manager for testbed-manager
2025-07-23 00:13:48.755271 | orchestrator |
2025-07-23 00:13:48.755286 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-07-23 00:13:48.815284 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-07-23 00:13:48.815419 | orchestrator |
2025-07-23 00:13:48.815444 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-07-23 00:13:52.955380 | orchestrator | ok: [testbed-manager]
2025-07-23 00:13:52.955510 | orchestrator |
2025-07-23 00:13:52.955589 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-07-23 00:13:53.012415 | orchestrator | ok: [testbed-manager]
2025-07-23 00:13:53.012554 | orchestrator |
2025-07-23 00:13:53.012582 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-07-23 00:13:53.145809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-07-23 00:13:53.145954 | orchestrator |
2025-07-23 00:13:53.145980 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-07-23 00:13:56.062436 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-07-23 00:13:56.062630 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-07-23 00:13:56.062647 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-07-23 00:13:56.062660 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-07-23 00:13:56.062671 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-07-23 00:13:56.062682 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-07-23 00:13:56.062693 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-07-23 00:13:56.062704 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-07-23 00:13:56.062715 | orchestrator |
2025-07-23 00:13:56.062727 | orchestrator | TASK
[osism.services.manager : Copy all environment file] **********************
2025-07-23 00:13:56.739903 | orchestrator | changed: [testbed-manager]
2025-07-23 00:13:56.740040 | orchestrator |
2025-07-23 00:13:56.740068 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-07-23 00:13:57.416430 | orchestrator | changed: [testbed-manager]
2025-07-23 00:13:57.416585 | orchestrator |
2025-07-23 00:13:57.416605 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-07-23 00:13:57.482812 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-07-23 00:13:57.482911 | orchestrator |
2025-07-23 00:13:57.482926 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-07-23 00:13:58.744638 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-07-23 00:13:58.744744 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-07-23 00:13:58.744760 | orchestrator |
2025-07-23 00:13:58.744773 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-07-23 00:13:59.417308 | orchestrator | changed: [testbed-manager]
2025-07-23 00:13:59.417416 | orchestrator |
2025-07-23 00:13:59.417433 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-07-23 00:13:59.485695 | orchestrator | skipping: [testbed-manager]
2025-07-23 00:13:59.485786 | orchestrator |
2025-07-23 00:13:59.485802 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-07-23 00:13:59.552031 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-07-23 00:13:59.552125 | orchestrator |
2025-07-23 00:13:59.552141 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-07-23 00:14:00.928125 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-23 00:14:00.928256 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-23 00:14:00.928283 | orchestrator | changed: [testbed-manager]
2025-07-23 00:14:00.928304 | orchestrator |
2025-07-23 00:14:00.928323 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-07-23 00:14:01.582965 | orchestrator | changed: [testbed-manager]
2025-07-23 00:14:01.583077 | orchestrator |
2025-07-23 00:14:01.583095 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-07-23 00:14:01.640744 | orchestrator | skipping: [testbed-manager]
2025-07-23 00:14:01.640850 | orchestrator |
2025-07-23 00:14:01.640873 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-07-23 00:14:01.745648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-07-23 00:14:01.745739 | orchestrator |
2025-07-23 00:14:01.745753 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-07-23 00:14:02.309156 | orchestrator | changed: [testbed-manager]
2025-07-23 00:14:02.309264 | orchestrator |
2025-07-23 00:14:02.309282 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-07-23 00:14:02.740831 | orchestrator | changed: [testbed-manager]
2025-07-23 00:14:02.740934 | orchestrator |
2025-07-23 00:14:02.740952 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-07-23 00:14:04.013051 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-07-23 00:14:04.013158 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-07-23 00:14:04.013175 | orchestrator |
2025-07-23 00:14:04.013188 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-07-23 00:14:04.653503 | orchestrator | changed: [testbed-manager]
2025-07-23 00:14:04.653665 | orchestrator |
2025-07-23 00:14:04.653682 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-07-23 00:14:05.068997 | orchestrator | ok: [testbed-manager]
2025-07-23 00:14:05.069104 | orchestrator |
2025-07-23 00:14:05.069114 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-07-23 00:14:05.428741 | orchestrator | changed: [testbed-manager]
2025-07-23 00:14:05.428871 | orchestrator |
2025-07-23 00:14:05.428886 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-07-23 00:14:05.469815 | orchestrator | skipping: [testbed-manager]
2025-07-23 00:14:05.469897 | orchestrator |
2025-07-23 00:14:05.469910 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-07-23 00:14:05.539700 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-07-23 00:14:05.539785 | orchestrator |
2025-07-23 00:14:05.539798 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-07-23 00:14:05.582521 | orchestrator | ok: [testbed-manager]
2025-07-23 00:14:05.582616 | orchestrator |
2025-07-23 00:14:05.582629 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-07-23 00:14:07.718088 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-07-23 00:14:07.718226 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-07-23 00:14:07.718246 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-07-23 00:14:07.718259 | orchestrator |
2025-07-23 00:14:07.718272 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-07-23 00:14:08.409196 | orchestrator | changed: [testbed-manager]
2025-07-23 00:14:08.409300 | orchestrator |
2025-07-23 00:14:08.409318 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-07-23 00:14:09.146103 | orchestrator | changed: [testbed-manager]
2025-07-23 00:14:09.146212 | orchestrator |
2025-07-23 00:14:09.146229 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-07-23 00:14:09.875578 | orchestrator | changed: [testbed-manager]
2025-07-23 00:14:09.875680 | orchestrator |
2025-07-23 00:14:09.875697 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-07-23 00:14:09.939907 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-07-23 00:14:09.939999 | orchestrator |
2025-07-23 00:14:09.940012 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-07-23 00:14:09.980614 | orchestrator | ok: [testbed-manager]
2025-07-23 00:14:09.980687 | orchestrator |
2025-07-23 00:14:09.980700 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-07-23 00:14:10.700234 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-07-23 00:14:10.700340 | orchestrator |
2025-07-23 00:14:10.700358 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-07-23 00:14:10.788926 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-07-23 00:14:10.789020 | orchestrator |
2025-07-23 00:14:10.789035 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-07-23 00:14:11.526228 | orchestrator | changed: [testbed-manager]
2025-07-23 00:14:11.526335 | orchestrator |
2025-07-23 00:14:11.526352 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-07-23 00:14:12.176673 | orchestrator | ok: [testbed-manager]
2025-07-23 00:14:12.176779 | orchestrator |
2025-07-23 00:14:12.176796 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-07-23 00:14:12.228967 | orchestrator | skipping: [testbed-manager]
2025-07-23 00:14:12.229058 | orchestrator |
2025-07-23 00:14:12.229075 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-07-23 00:14:12.285814 | orchestrator | ok: [testbed-manager]
2025-07-23 00:14:12.285908 | orchestrator |
2025-07-23 00:14:12.285924 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-07-23 00:14:13.119205 | orchestrator | changed: [testbed-manager]
2025-07-23 00:14:13.119428 | orchestrator |
2025-07-23 00:14:13.119449 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-07-23 00:15:23.113016 | orchestrator | changed: [testbed-manager]
2025-07-23 00:15:23.113141 | orchestrator |
2025-07-23 00:15:23.113176 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-07-23 00:15:24.129857 | orchestrator | ok: [testbed-manager]
2025-07-23 00:15:24.129965 | orchestrator |
2025-07-23 00:15:24.129985 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-07-23 00:15:24.183092 | orchestrator | skipping: [testbed-manager]
2025-07-23 00:15:24.183199 | orchestrator |
2025-07-23 00:15:24.183218 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-07-23 00:15:26.873496 | orchestrator | changed: [testbed-manager]
2025-07-23 00:15:26.873707 | orchestrator |
2025-07-23 00:15:26.873728 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-07-23 00:15:26.925415 | orchestrator | ok: [testbed-manager]
2025-07-23 00:15:26.925511 | orchestrator |
2025-07-23 00:15:26.925572 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-23 00:15:26.925596 | orchestrator |
2025-07-23 00:15:26.925614 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-07-23 00:15:26.979056 | orchestrator | skipping: [testbed-manager]
2025-07-23 00:15:26.979150 | orchestrator |
2025-07-23 00:15:26.979164 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-07-23 00:16:27.031956 | orchestrator | Pausing for 60 seconds
2025-07-23 00:16:27.032082 | orchestrator | changed: [testbed-manager]
2025-07-23 00:16:27.032099 | orchestrator |
2025-07-23 00:16:27.032112 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-07-23 00:16:31.197749 | orchestrator | changed: [testbed-manager]
2025-07-23 00:16:31.197844 | orchestrator |
2025-07-23 00:16:31.197861 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-07-23 00:17:13.058642 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-07-23 00:17:13.058766 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-07-23 00:17:13.058782 | orchestrator | changed: [testbed-manager]
2025-07-23 00:17:13.058797 | orchestrator |
2025-07-23 00:17:13.058810 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-07-23 00:17:22.845236 | orchestrator | changed: [testbed-manager]
2025-07-23 00:17:22.845368 | orchestrator |
2025-07-23 00:17:22.845388 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-07-23 00:17:22.938398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-07-23 00:17:22.938516 | orchestrator |
2025-07-23 00:17:22.938582 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-23 00:17:22.938605 | orchestrator |
2025-07-23 00:17:22.938623 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-07-23 00:17:22.990107 | orchestrator | skipping: [testbed-manager]
2025-07-23 00:17:22.990188 | orchestrator |
2025-07-23 00:17:22.990206 | orchestrator | PLAY RECAP *********************************************************************
2025-07-23 00:17:22.990220 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-07-23 00:17:22.990231 | orchestrator |
2025-07-23 00:17:23.057017 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-23 00:17:23.057145 | orchestrator | + deactivate
2025-07-23 00:17:23.057163 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-07-23 00:17:23.057178 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-23 00:17:23.057189 | orchestrator | + export PATH
2025-07-23 00:17:23.057201 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-07-23 00:17:23.057213 | orchestrator | + '[' -n '' ']'
2025-07-23 00:17:23.057224 | orchestrator | + hash -r
2025-07-23 00:17:23.057235 | orchestrator | + '[' -n '' ']'
2025-07-23 00:17:23.057246 | orchestrator | + unset VIRTUAL_ENV
2025-07-23 00:17:23.057257 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-07-23 00:17:23.057291 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-07-23 00:17:23.057303 | orchestrator | + unset -f deactivate
2025-07-23 00:17:23.057315 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-07-23 00:17:23.064373 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-07-23 00:17:23.064401 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-07-23 00:17:23.064413 | orchestrator | + local max_attempts=60
2025-07-23 00:17:23.064424 | orchestrator | + local name=ceph-ansible
2025-07-23 00:17:23.064436 | orchestrator | + local attempt_num=1
2025-07-23 00:17:23.065389 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-23 00:17:23.097693 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-23 00:17:23.097843 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-07-23 00:17:23.097862 | orchestrator | + local max_attempts=60
2025-07-23 00:17:23.097875 | orchestrator | + local name=kolla-ansible
2025-07-23 00:17:23.097887 | orchestrator | + local attempt_num=1
2025-07-23 00:17:23.098751 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-07-23 00:17:23.134434 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-23 00:17:23.134510 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-07-23 00:17:23.134554 | orchestrator | + local max_attempts=60
2025-07-23 00:17:23.134569 | orchestrator | + local name=osism-ansible
2025-07-23 00:17:23.134579 | orchestrator | + local attempt_num=1
2025-07-23 00:17:23.135090 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-07-23 00:17:23.167918 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-23 00:17:23.168001 | orchestrator | + [[ true == \t\r\u\e ]]
2025-07-23 00:17:23.168017 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-07-23 00:17:23.834698 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-07-23 00:17:24.021716 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-07-23 00:17:24.021832 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-07-23 00:17:24.021849 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-07-23 00:17:24.021860 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-07-23 00:17:24.021872 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-07-23 00:17:24.021913 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-07-23 00:17:24.021924 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-07-23 00:17:24.021934 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy)
2025-07-23 00:17:24.021944 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-07-23
00:17:24.021953 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-07-23 00:17:24.021963 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-07-23 00:17:24.021973 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-07-23 00:17:24.021982 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-07-23 00:17:24.021992 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-07-23 00:17:24.022002 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-07-23 00:17:24.028598 | orchestrator | ++ semver latest 7.0.0 2025-07-23 00:17:24.086329 | orchestrator | + [[ -1 -ge 0 ]] 2025-07-23 00:17:24.086419 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-23 00:17:24.086435 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-07-23 00:17:24.091500 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-07-23 00:17:36.097618 | orchestrator | 2025-07-23 00:17:36 | INFO  | Task a02c67c6-657a-4e8d-9938-d3793ad06475 (resolvconf) was prepared for execution. 2025-07-23 00:17:36.097731 | orchestrator | 2025-07-23 00:17:36 | INFO  | It takes a moment until task a02c67c6-657a-4e8d-9938-d3793ad06475 (resolvconf) has been started and output is visible here. 
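The `set -x` trace above shows `wait_for_container_healthy` polling `docker inspect` for each manager container before continuing. The script itself is not included in the log, so the following is only a hedged reconstruction from the trace: the function and variable names match the trace, but the poll interval and the failure message are assumptions.

```shell
# Sketch of the wait_for_container_healthy helper seen in the trace above.
# The trace invokes docker as /usr/bin/docker; a plain `docker` is used here.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until it reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container ${name} not healthy after ${max_attempts} attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 1   # poll interval: an assumption, not shown in the log
    done
}
```

In the run above every container was already `(healthy)`, so each call returned after a single `docker inspect`.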
2025-07-23 00:17:55.528011 | orchestrator |
2025-07-23 00:17:55.528147 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-07-23 00:17:55.528173 | orchestrator |
2025-07-23 00:17:55.528194 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-23 00:17:55.528215 | orchestrator | Wednesday 23 July 2025 00:17:42 +0000 (0:00:00.156) 0:00:00.156 ********
2025-07-23 00:17:55.528231 | orchestrator | ok: [testbed-manager]
2025-07-23 00:17:55.528251 | orchestrator |
2025-07-23 00:17:55.528268 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-07-23 00:17:55.528289 | orchestrator | Wednesday 23 July 2025 00:17:46 +0000 (0:00:04.262) 0:00:04.419 ********
2025-07-23 00:17:55.528308 | orchestrator | skipping: [testbed-manager]
2025-07-23 00:17:55.528329 | orchestrator |
2025-07-23 00:17:55.528429 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-07-23 00:17:55.528443 | orchestrator | Wednesday 23 July 2025 00:17:46 +0000 (0:00:00.066) 0:00:04.485 ********
2025-07-23 00:17:55.528481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-07-23 00:17:55.528564 | orchestrator |
2025-07-23 00:17:55.528585 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-07-23 00:17:55.528606 | orchestrator | Wednesday 23 July 2025 00:17:46 +0000 (0:00:00.086) 0:00:04.572 ********
2025-07-23 00:17:55.528628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-07-23 00:17:55.528647 | orchestrator |
2025-07-23 00:17:55.528665 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-07-23 00:17:55.528678 | orchestrator | Wednesday 23 July 2025 00:17:46 +0000 (0:00:00.095) 0:00:04.668 ********
2025-07-23 00:17:55.528691 | orchestrator | ok: [testbed-manager]
2025-07-23 00:17:55.528704 | orchestrator |
2025-07-23 00:17:55.528717 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-07-23 00:17:55.528728 | orchestrator | Wednesday 23 July 2025 00:17:48 +0000 (0:00:01.576) 0:00:06.244 ********
2025-07-23 00:17:55.528739 | orchestrator | skipping: [testbed-manager]
2025-07-23 00:17:55.528750 | orchestrator |
2025-07-23 00:17:55.528764 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-07-23 00:17:55.528784 | orchestrator | Wednesday 23 July 2025 00:17:48 +0000 (0:00:00.055) 0:00:06.300 ********
2025-07-23 00:17:55.528803 | orchestrator | ok: [testbed-manager]
2025-07-23 00:17:55.528822 | orchestrator |
2025-07-23 00:17:55.528841 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-07-23 00:17:55.528861 | orchestrator | Wednesday 23 July 2025 00:17:49 +0000 (0:00:00.756) 0:00:07.057 ********
2025-07-23 00:17:55.528881 | orchestrator | skipping: [testbed-manager]
2025-07-23 00:17:55.528899 | orchestrator |
2025-07-23 00:17:55.528918 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-07-23 00:17:55.528932 | orchestrator | Wednesday 23 July 2025 00:17:49 +0000 (0:00:00.085) 0:00:07.142 ********
2025-07-23 00:17:55.528943 | orchestrator | changed: [testbed-manager]
2025-07-23 00:17:55.528954 | orchestrator |
2025-07-23 00:17:55.528965 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-07-23 00:17:55.528976 | orchestrator | Wednesday 23 July 2025 00:17:50 +0000 (0:00:00.979) 0:00:08.121 ********
2025-07-23 00:17:55.528987 | orchestrator | changed: [testbed-manager]
2025-07-23 00:17:55.528997 | orchestrator |
2025-07-23 00:17:55.529010 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-07-23 00:17:55.529029 | orchestrator | Wednesday 23 July 2025 00:17:51 +0000 (0:00:01.673) 0:00:09.795 ********
2025-07-23 00:17:55.529048 | orchestrator | ok: [testbed-manager]
2025-07-23 00:17:55.529067 | orchestrator |
2025-07-23 00:17:55.529087 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-07-23 00:17:55.529105 | orchestrator | Wednesday 23 July 2025 00:17:53 +0000 (0:00:01.400) 0:00:11.195 ********
2025-07-23 00:17:55.529197 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-07-23 00:17:55.529218 | orchestrator |
2025-07-23 00:17:55.529253 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-07-23 00:17:55.529266 | orchestrator | Wednesday 23 July 2025 00:17:53 +0000 (0:00:00.085) 0:00:11.281 ********
2025-07-23 00:17:55.529277 | orchestrator | changed: [testbed-manager]
2025-07-23 00:17:55.529288 | orchestrator |
2025-07-23 00:17:55.529299 | orchestrator | PLAY RECAP *********************************************************************
2025-07-23 00:17:55.529311 | orchestrator | testbed-manager : ok=10 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2025-07-23 00:17:55.529322 | orchestrator |
2025-07-23 00:17:55.529333 | orchestrator |
2025-07-23 00:17:55.529344 | orchestrator | TASKS RECAP ********************************************************************
2025-07-23 00:17:55.529367 | orchestrator | Wednesday 23 July 2025 00:17:54 +0000 (0:00:01.649) 0:00:12.930 ********
2025-07-23 00:17:55.529378 | orchestrator | ===============================================================================
2025-07-23 00:17:55.529390 | orchestrator | Gathering Facts --------------------------------------------------------- 4.26s
2025-07-23 00:17:55.529400 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.67s
2025-07-23 00:17:55.529411 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.65s
2025-07-23 00:17:55.529422 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.58s
2025-07-23 00:17:55.529433 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.40s
2025-07-23 00:17:55.529445 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.98s
2025-07-23 00:17:55.529550 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.76s
2025-07-23 00:17:55.529567 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.10s
2025-07-23 00:17:55.529579 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2025-07-23 00:17:55.529590 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2025-07-23 00:17:55.529601 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2025-07-23 00:17:55.529612 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-07-23 00:17:55.529623 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-07-23 00:17:55.802100 | orchestrator | + osism apply sshconfig
2025-07-23 00:18:07.737093 | orchestrator | 2025-07-23 00:18:07 | INFO | Task b5b7c9d3-d240-4853-8ec2-1f18effe0b47 (sshconfig) was prepared for execution.
2025-07-23 00:18:07.737208 | orchestrator | 2025-07-23 00:18:07 | INFO | It takes a moment until task b5b7c9d3-d240-4853-8ec2-1f18effe0b47 (sshconfig) has been started and output is visible here.
2025-07-23 00:18:24.740565 | orchestrator |
2025-07-23 00:18:24.740683 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-07-23 00:18:24.740699 | orchestrator |
2025-07-23 00:18:24.740711 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-07-23 00:18:24.740723 | orchestrator | Wednesday 23 July 2025 00:18:13 +0000 (0:00:00.150) 0:00:00.150 ********
2025-07-23 00:18:24.740735 | orchestrator | ok: [testbed-manager]
2025-07-23 00:18:24.740748 | orchestrator |
2025-07-23 00:18:24.740759 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-07-23 00:18:24.740770 | orchestrator | Wednesday 23 July 2025 00:18:14 +0000 (0:00:00.759) 0:00:00.910 ********
2025-07-23 00:18:24.740781 | orchestrator | changed: [testbed-manager]
2025-07-23 00:18:24.740793 | orchestrator |
2025-07-23 00:18:24.740804 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-07-23 00:18:24.740816 | orchestrator | Wednesday 23 July 2025 00:18:15 +0000 (0:00:00.946) 0:00:01.856 ********
2025-07-23 00:18:24.740827 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-07-23 00:18:24.740838 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-07-23 00:18:24.740849 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-07-23 00:18:24.740861 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-07-23 00:18:24.740871 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-07-23 00:18:24.740883 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-07-23 00:18:24.740917 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-07-23 00:18:24.740929 | orchestrator |
2025-07-23 00:18:24.740940 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-07-23 00:18:24.740951 | orchestrator | Wednesday 23 July 2025 00:18:23 +0000 (0:00:07.874) 0:00:09.731 ********
2025-07-23 00:18:24.740986 | orchestrator | skipping: [testbed-manager]
2025-07-23 00:18:24.740998 | orchestrator |
2025-07-23 00:18:24.741009 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-07-23 00:18:24.741020 | orchestrator | Wednesday 23 July 2025 00:18:23 +0000 (0:00:00.061) 0:00:09.793 ********
2025-07-23 00:18:24.741031 | orchestrator | changed: [testbed-manager]
2025-07-23 00:18:24.741042 | orchestrator |
2025-07-23 00:18:24.741053 | orchestrator | PLAY RECAP *********************************************************************
2025-07-23 00:18:24.741066 | orchestrator | testbed-manager : ok=4 changed=3 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
2025-07-23 00:18:24.741080 | orchestrator |
2025-07-23 00:18:24.741092 | orchestrator |
2025-07-23 00:18:24.741105 | orchestrator | TASKS RECAP ********************************************************************
2025-07-23 00:18:24.741118 | orchestrator | Wednesday 23 July 2025 00:18:24 +0000 (0:00:00.736) 0:00:10.530 ********
2025-07-23 00:18:24.741130 | orchestrator | ===============================================================================
2025-07-23 00:18:24.741143 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 7.87s
2025-07-23 00:18:24.741156 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.95s
2025-07-23 00:18:24.741168 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.76s
2025-07-23 00:18:24.741180 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.74s
2025-07-23 00:18:24.741193 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s
2025-07-23 00:18:25.007837 | orchestrator | + osism apply known-hosts
2025-07-23 00:18:36.956361 | orchestrator | 2025-07-23 00:18:36 | INFO | Task 3d30f15d-e5ac-47f5-8fe4-e2b3496fa44f (known-hosts) was prepared for execution.
2025-07-23 00:18:36.956526 | orchestrator | 2025-07-23 00:18:36 | INFO | It takes a moment until task 3d30f15d-e5ac-47f5-8fe4-e2b3496fa44f (known-hosts) has been started and output is visible here.
2025-07-23 00:18:51.072669 | orchestrator | 2025-07-23 00:18:51 | INFO | Task 5a9c3eec-42ce-4e4b-845e-f391242b8b2f (known-hosts) was prepared for execution.
2025-07-23 00:18:51.072781 | orchestrator | 2025-07-23 00:18:51 | INFO | It takes a moment until task 5a9c3eec-42ce-4e4b-845e-f391242b8b2f (known-hosts) has been started and output is visible here.
2025-07-23 00:19:03.354399 | orchestrator |
2025-07-23 00:19:03.354509 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-07-23 00:19:03.354526 | orchestrator |
2025-07-23 00:19:03.354538 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-07-23 00:19:03.354552 | orchestrator | Wednesday 23 July 2025 00:18:43 +0000 (0:00:00.152) 0:00:00.152 ********
2025-07-23 00:19:03.354564 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-07-23 00:19:03.354576 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-07-23 00:19:03.354587 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-07-23 00:19:03.354598 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-07-23 00:19:03.354609 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-07-23 00:19:03.354620 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-07-23 00:19:03.354631 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-07-23 00:19:03.354642 | orchestrator |
2025-07-23 00:19:03.354653 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-07-23 00:19:03.354666 | orchestrator | Wednesday 23 July 2025 00:18:50 +0000 (0:00:07.197) 0:00:07.349 ********
2025-07-23 00:19:03.354678 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-07-23 00:19:03.354691 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-07-23 00:19:03.354727 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-07-23 00:19:03.354749 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-07-23 00:19:03.354761 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-07-23 00:19:03.354772 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-07-23 00:19:03.354783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-07-23 00:19:03.354794 | orchestrator |
2025-07-23 00:19:03.354806 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-23 00:19:03.354817 | orchestrator | Wednesday 23 July 2025 00:18:50 +0000 (0:00:00.175) 0:00:07.525 ********
2025-07-23 00:19:03.354829 | orchestrator | [ERROR]: Task failed: Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result.
2025-07-23 00:19:03.354842 | orchestrator |
2025-07-23 00:19:03.354854 | orchestrator | Task failed.
2025-07-23 00:19:03.354867 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:2:3
2025-07-23 00:19:03.354881 | orchestrator |
2025-07-23 00:19:03.354894 | orchestrator | 1 ---
2025-07-23 00:19:03.354907 | orchestrator | 2 - name: Write scanned known_hosts entries
2025-07-23 00:19:03.354920 | orchestrator |   ^ column 3
2025-07-23 00:19:03.354933 | orchestrator |
2025-07-23 00:19:03.354945 | orchestrator | <<< caused by >>>
2025-07-23 00:19:03.354958 | orchestrator |
2025-07-23 00:19:03.354972 | orchestrator | Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result.
2025-07-23 00:19:03.354985 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:12:7
2025-07-23 00:19:03.354998 | orchestrator |
2025-07-23 00:19:03.355011 | orchestrator | 10 when:
2025-07-23 00:19:03.355024 | orchestrator | 11 - item['stdout_lines'] is defined
2025-07-23 00:19:03.355037 | orchestrator | 12 - item['stdout_lines'] | length
2025-07-23 00:19:03.355050 | orchestrator |    ^ column 7
2025-07-23 00:19:03.355062 | orchestrator |
2025-07-23 00:19:03.355075 | orchestrator | Broken conditionals can be temporarily allowed with the `ALLOW_BROKEN_CONDITIONALS` configuration option.
2025-07-23 00:19:03.355088 | orchestrator |
2025-07-23 00:19:03.355102 | orchestrator | failed: [testbed-manager] (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLahydocmrqF+H6j+2NalV6p2ljzLSnccbZpKTcqjBv3Z69iRCeB/Pq+tYlXd4zxpFaIwqn+fs55JZd/rc/7zB8=) => changed=false
2025-07-23 00:19:03.355117 | orchestrator |   ansible_loop_var: inner_item
2025-07-23 00:19:03.355130 | orchestrator |   inner_item: testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLahydocmrqF+H6j+2NalV6p2ljzLSnccbZpKTcqjBv3Z69iRCeB/Pq+tYlXd4zxpFaIwqn+fs55JZd/rc/7zB8=
2025-07-23 00:19:03.355144 | orchestrator |   msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-23 00:19:03.355180 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsKavYsjm016NJmTbhBlVb88ssPHh3P3ihg6vBmOEzGj4jbKCJblK/O5qKXkNhjAzlgRuGxvLSvghWK8b2ZVzRuLG5D4zo2N5IeowfUoXtwld1tW8IpB4NSiOmA3WII25/EB/9o+cFb1gvf5Sgj/eB8VQYoK6LKGfiPRs4B7NPc/dsR8zXdTU6IYaiSy4oyXYjXp0zq5RyPT8GswFCUKpYBxjAR2wnfRYdZD35cv3UaIapiO/Bm0EtwIatq0t2uMU3HeFp86uDvwM74R7N9Vjhpc6kiMs1YwsX0C0wQhwuLZCaZ6dSqwx3yFeV+0ScxvVznXcOeOql9ub7XQ9SUrVage/kaaeZDKZEP9M7G3EVqlu8UIoZBk4kM0ajDCQ7KH20EH1aVolRrHXQfnDbAsaeWneAUkruSjc2ZYHE0n5XYwgNzvvxZ1JQ0zQJAXfNQmClwuRK8oiUwiYV4AWwwzC4950zfXiWcuXDCenVRbzE0ynCPrkfSWhFE2UyGZbmvyk=) => changed=false
2025-07-23 00:19:03.355208 | orchestrator |   ansible_loop_var: inner_item
2025-07-23 00:19:03.355223 | orchestrator |   inner_item: testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsKavYsjm016NJmTbhBlVb88ssPHh3P3ihg6vBmOEzGj4jbKCJblK/O5qKXkNhjAzlgRuGxvLSvghWK8b2ZVzRuLG5D4zo2N5IeowfUoXtwld1tW8IpB4NSiOmA3WII25/EB/9o+cFb1gvf5Sgj/eB8VQYoK6LKGfiPRs4B7NPc/dsR8zXdTU6IYaiSy4oyXYjXp0zq5RyPT8GswFCUKpYBxjAR2wnfRYdZD35cv3UaIapiO/Bm0EtwIatq0t2uMU3HeFp86uDvwM74R7N9Vjhpc6kiMs1YwsX0C0wQhwuLZCaZ6dSqwx3yFeV+0ScxvVznXcOeOql9ub7XQ9SUrVage/kaaeZDKZEP9M7G3EVqlu8UIoZBk4kM0ajDCQ7KH20EH1aVolRrHXQfnDbAsaeWneAUkruSjc2ZYHE0n5XYwgNzvvxZ1JQ0zQJAXfNQmClwuRK8oiUwiYV4AWwwzC4950zfXiWcuXDCenVRbzE0ynCPrkfSWhFE2UyGZbmvyk=
2025-07-23 00:19:03.355237 | orchestrator |   msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-23 00:19:03.355312 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICU7tgFbJAxGM+ycqdRBNviKoa+/Mkpl4xBAFwg9oP4P) => changed=false
2025-07-23 00:19:03.355325 | orchestrator |   ansible_loop_var: inner_item
2025-07-23 00:19:03.355352 | orchestrator |   inner_item: testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICU7tgFbJAxGM+ycqdRBNviKoa+/Mkpl4xBAFwg9oP4P
2025-07-23 00:19:03.355365 | orchestrator |   msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-23 00:19:03.355376 | orchestrator |
2025-07-23 00:19:03.355387 | orchestrator | PLAY RECAP *********************************************************************
2025-07-23 00:19:03.355399 | orchestrator | testbed-manager : ok=8 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
2025-07-23 00:19:03.355410 | orchestrator |
2025-07-23 00:19:03.355421 | orchestrator |
2025-07-23 00:19:03.355432 | orchestrator | TASKS RECAP ********************************************************************
2025-07-23 00:19:03.355443 | orchestrator | Wednesday 23 July 2025 00:18:50 +0000 (0:00:00.103) 0:00:07.629 ********
2025-07-23 00:19:03.355454 | orchestrator | ===============================================================================
2025-07-23 00:19:03.355465 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 7.20s
2025-07-23 00:19:03.355476 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s
2025-07-23 00:19:03.355487 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.10s
2025-07-23 00:19:03.355498 | orchestrator |
2025-07-23 00:19:03.355509 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-07-23 00:19:03.355520 | orchestrator |
2025-07-23 00:19:03.355530 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-07-23 00:19:03.355541 | orchestrator | Wednesday 23 July 2025 00:18:56 +0000 (0:00:00.125) 0:00:00.125 ********
2025-07-23 00:19:03.355552 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-07-23 00:19:03.355562 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-07-23 00:19:03.355573 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-07-23 00:19:03.355584 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-07-23 00:19:03.355595 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-07-23 00:19:03.355605 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-07-23 00:19:03.355616 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-07-23 00:19:03.355627 | orchestrator |
2025-07-23 00:19:03.355638 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-07-23 00:19:03.355648 | orchestrator | Wednesday 23 July 2025 00:19:03 +0000 (0:00:06.335) 0:00:06.461 ********
2025-07-23 00:19:03.355664 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-07-23 00:19:03.355676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-07-23 00:19:03.355687 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-07-23 00:19:03.355706 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-07-23 00:19:03.961236 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-07-23 00:19:03.961369 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-07-23 00:19:03.961387 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-07-23 00:19:03.961400 | orchestrator |
2025-07-23 00:19:03.961413 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-23 00:19:03.961425 | orchestrator | Wednesday 23 July 2025 00:19:03 +0000 (0:00:00.198) 0:00:06.659 ********
2025-07-23 00:19:03.961437 | orchestrator | [ERROR]: Task failed: Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result.
2025-07-23 00:19:03.961450 | orchestrator |
2025-07-23 00:19:03.961462 | orchestrator | Task failed.
2025-07-23 00:19:03.961474 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:2:3
2025-07-23 00:19:03.961486 | orchestrator |
2025-07-23 00:19:03.961497 | orchestrator | 1 ---
2025-07-23 00:19:03.961508 | orchestrator | 2 - name: Write scanned known_hosts entries
2025-07-23 00:19:03.961520 | orchestrator |   ^ column 3
2025-07-23 00:19:03.961531 | orchestrator |
2025-07-23 00:19:03.961542 | orchestrator | <<< caused by >>>
2025-07-23 00:19:03.961553 | orchestrator |
2025-07-23 00:19:03.961566 | orchestrator | Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result.
2025-07-23 00:19:03.961577 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:12:7
2025-07-23 00:19:03.961588 | orchestrator |
2025-07-23 00:19:03.961599 | orchestrator | 10 when:
2025-07-23 00:19:03.961611 | orchestrator | 11 - item['stdout_lines'] is defined
2025-07-23 00:19:03.961623 | orchestrator | 12 - item['stdout_lines'] | length
2025-07-23 00:19:03.961634 | orchestrator |    ^ column 7
2025-07-23 00:19:03.961645 | orchestrator |
2025-07-23 00:19:03.961675 | orchestrator | Broken conditionals can be temporarily allowed with the `ALLOW_BROKEN_CONDITIONALS` configuration option.
2025-07-23 00:19:03.961687 | orchestrator |
2025-07-23 00:19:03.961699 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICU7tgFbJAxGM+ycqdRBNviKoa+/Mkpl4xBAFwg9oP4P) => changed=false
2025-07-23 00:19:03.961711 | orchestrator |   ansible_loop_var: inner_item
2025-07-23 00:19:03.961722 | orchestrator |   inner_item: testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICU7tgFbJAxGM+ycqdRBNviKoa+/Mkpl4xBAFwg9oP4P
2025-07-23 00:19:03.961733 | orchestrator |   msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-23 00:19:03.961771 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsKavYsjm016NJmTbhBlVb88ssPHh3P3ihg6vBmOEzGj4jbKCJblK/O5qKXkNhjAzlgRuGxvLSvghWK8b2ZVzRuLG5D4zo2N5IeowfUoXtwld1tW8IpB4NSiOmA3WII25/EB/9o+cFb1gvf5Sgj/eB8VQYoK6LKGfiPRs4B7NPc/dsR8zXdTU6IYaiSy4oyXYjXp0zq5RyPT8GswFCUKpYBxjAR2wnfRYdZD35cv3UaIapiO/Bm0EtwIatq0t2uMU3HeFp86uDvwM74R7N9Vjhpc6kiMs1YwsX0C0wQhwuLZCaZ6dSqwx3yFeV+0ScxvVznXcOeOql9ub7XQ9SUrVage/kaaeZDKZEP9M7G3EVqlu8UIoZBk4kM0ajDCQ7KH20EH1aVolRrHXQfnDbAsaeWneAUkruSjc2ZYHE0n5XYwgNzvvxZ1JQ0zQJAXfNQmClwuRK8oiUwiYV4AWwwzC4950zfXiWcuXDCenVRbzE0ynCPrkfSWhFE2UyGZbmvyk=) => changed=false
2025-07-23 00:19:03.961788 | orchestrator |   ansible_loop_var: inner_item
2025-07-23 00:19:03.961802 | orchestrator |   inner_item: testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsKavYsjm016NJmTbhBlVb88ssPHh3P3ihg6vBmOEzGj4jbKCJblK/O5qKXkNhjAzlgRuGxvLSvghWK8b2ZVzRuLG5D4zo2N5IeowfUoXtwld1tW8IpB4NSiOmA3WII25/EB/9o+cFb1gvf5Sgj/eB8VQYoK6LKGfiPRs4B7NPc/dsR8zXdTU6IYaiSy4oyXYjXp0zq5RyPT8GswFCUKpYBxjAR2wnfRYdZD35cv3UaIapiO/Bm0EtwIatq0t2uMU3HeFp86uDvwM74R7N9Vjhpc6kiMs1YwsX0C0wQhwuLZCaZ6dSqwx3yFeV+0ScxvVznXcOeOql9ub7XQ9SUrVage/kaaeZDKZEP9M7G3EVqlu8UIoZBk4kM0ajDCQ7KH20EH1aVolRrHXQfnDbAsaeWneAUkruSjc2ZYHE0n5XYwgNzvvxZ1JQ0zQJAXfNQmClwuRK8oiUwiYV4AWwwzC4950zfXiWcuXDCenVRbzE0ynCPrkfSWhFE2UyGZbmvyk=
2025-07-23 00:19:03.961816 | orchestrator |   msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-23 00:19:03.961829 | orchestrator | failed: [testbed-manager] (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLahydocmrqF+H6j+2NalV6p2ljzLSnccbZpKTcqjBv3Z69iRCeB/Pq+tYlXd4zxpFaIwqn+fs55JZd/rc/7zB8=) => changed=false
2025-07-23 00:19:03.961844 | orchestrator |   ansible_loop_var: inner_item
2025-07-23 00:19:03.961872 | orchestrator |   inner_item: testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLahydocmrqF+H6j+2NalV6p2ljzLSnccbZpKTcqjBv3Z69iRCeB/Pq+tYlXd4zxpFaIwqn+fs55JZd/rc/7zB8=
2025-07-23 00:19:03.961886 | orchestrator |   msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-23 00:19:03.961899 | orchestrator |
2025-07-23 00:19:03.961912 | orchestrator | PLAY RECAP *********************************************************************
2025-07-23 00:19:03.961924 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-07-23 00:19:03.961937 | orchestrator |
2025-07-23 00:19:03.961950 | orchestrator |
2025-07-23 00:19:03.961963 | orchestrator | TASKS RECAP ********************************************************************
2025-07-23 00:19:03.961976 | orchestrator | Wednesday 23 July 2025 00:19:03 +0000 (0:00:00.099) 0:00:06.759 ********
2025-07-23 00:19:03.961989 | orchestrator | ===============================================================================
2025-07-23 00:19:03.962001 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.34s
2025-07-23 00:19:03.962014 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.20s
2025-07-23 00:19:03.962090 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.10s
2025-07-23 00:19:04.497667 | orchestrator | ERROR
2025-07-23 00:19:04.498200 | orchestrator |
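[Editor's note] The failure above is the stricter conditional handling in newer ansible-core: a `when:` clause must now evaluate to a strict boolean, and `item['stdout_lines'] | length` returns an int (here `3`), which is rejected even though it is truthy. A minimal sketch of the fix for the role's `write-scanned.yml` — only the `when:` lines and the task name come from the log; the module and its parameters are illustrative, not the actual osism.commons implementation:

```yaml
# roles/known_hosts/tasks/write-scanned.yml (fragment, hypothetical body)
- name: Write scanned known_hosts entries
  ansible.builtin.known_hosts:                 # illustrative module choice
    name: "{{ inner_item.split() | first }}"   # hypothetical parameters
    key: "{{ inner_item }}"
  when:
    - item['stdout_lines'] is defined
    - item['stdout_lines'] | length > 0        # was: item['stdout_lines'] | length
```

Ending the filter chain in an explicit comparison (`| length > 0`) yields a boolean and satisfies the check; alternatively, the log's `ALLOW_BROKEN_CONDITIONALS` option can paper over it temporarily.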
{
2025-07-23 00:19:04.498324 | orchestrator | "delta": "0:05:50.313939",
2025-07-23 00:19:04.498400 | orchestrator | "end": "2025-07-23 00:19:04.229622",
2025-07-23 00:19:04.498465 | orchestrator | "msg": "non-zero return code",
2025-07-23 00:19:04.498523 | orchestrator | "rc": 2,
2025-07-23 00:19:04.498575 | orchestrator | "start": "2025-07-23 00:13:13.915683"
2025-07-23 00:19:04.498643 | orchestrator | } failure
2025-07-23 00:19:04.520077 |
2025-07-23 00:19:04.520186 | PLAY RECAP
2025-07-23 00:19:04.520254 | orchestrator | ok: 20 changed: 7 unreachable: 0 failed: 1 skipped: 2 rescued: 0 ignored: 0
2025-07-23 00:19:04.520286 |
2025-07-23 00:19:04.666450 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-07-23 00:19:04.667543 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-23 00:19:05.407799 |
2025-07-23 00:19:05.407997 | PLAY [Post output play]
2025-07-23 00:19:05.424412 |
2025-07-23 00:19:05.424557 | LOOP [stage-output : Register sources]
2025-07-23 00:19:05.493592 |
2025-07-23 00:19:05.493894 | TASK [stage-output : Check sudo]
2025-07-23 00:19:06.632959 | orchestrator | sudo: a password is required
2025-07-23 00:19:07.031583 | orchestrator | ok: Runtime: 0:00:00.292557
2025-07-23 00:19:07.047118 |
2025-07-23 00:19:07.047288 | LOOP [stage-output : Set source and destination for files and folders]
2025-07-23 00:19:07.083169 |
2025-07-23 00:19:07.083432 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-07-23 00:19:07.163394 | orchestrator | ok
2025-07-23 00:19:07.173455 |
2025-07-23 00:19:07.173604 | LOOP [stage-output : Ensure target folders exist]
2025-07-23 00:19:07.628337 | orchestrator | ok: "docs"
2025-07-23 00:19:07.628666 |
2025-07-23 00:19:07.877473 | orchestrator | ok: "artifacts"
2025-07-23 00:19:08.157389 | orchestrator | ok: "logs"
2025-07-23 00:19:08.176407 |
2025-07-23 00:19:08.176599 | LOOP [stage-output : Copy files and folders to staging folder]
2025-07-23 00:19:08.216069 |
2025-07-23 00:19:08.216373 | TASK [stage-output : Make all log files readable]
2025-07-23 00:19:08.509542 | orchestrator | ok
2025-07-23 00:19:08.519187 |
2025-07-23 00:19:08.519320 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-07-23 00:19:08.554063 | orchestrator | skipping: Conditional result was False
2025-07-23 00:19:08.570314 |
2025-07-23 00:19:08.570490 | TASK [stage-output : Discover log files for compression]
2025-07-23 00:19:08.595480 | orchestrator | skipping: Conditional result was False
2025-07-23 00:19:08.609748 |
2025-07-23 00:19:08.609965 | LOOP [stage-output : Archive everything from logs]
2025-07-23 00:19:08.656048 |
2025-07-23 00:19:08.656230 | PLAY [Post cleanup play]
2025-07-23 00:19:08.664668 |
2025-07-23 00:19:08.664780 | TASK [Set cloud fact (Zuul deployment)]
2025-07-23 00:19:08.731547 | orchestrator | ok
2025-07-23 00:19:08.742784 |
2025-07-23 00:19:08.742982 | TASK [Set cloud fact (local deployment)]
2025-07-23 00:19:08.776847 | orchestrator | skipping: Conditional result was False
2025-07-23 00:19:08.788782 |
2025-07-23 00:19:08.788932 | TASK [Clean the cloud environment]
2025-07-23 00:19:09.400180 | orchestrator | 2025-07-23 00:19:09 - clean up servers
2025-07-23 00:19:10.150246 | orchestrator | 2025-07-23 00:19:10 - testbed-manager
2025-07-23 00:19:10.237482 | orchestrator | 2025-07-23 00:19:10 - testbed-node-5
2025-07-23 00:19:10.331363 | orchestrator | 2025-07-23 00:19:10 - testbed-node-4
2025-07-23 00:19:10.418499 | orchestrator | 2025-07-23 00:19:10 - testbed-node-0
2025-07-23 00:19:10.514580 | orchestrator | 2025-07-23 00:19:10 - testbed-node-3
2025-07-23 00:19:10.619952 | orchestrator | 2025-07-23 00:19:10 - testbed-node-1
2025-07-23 00:19:10.709365 | orchestrator | 2025-07-23 00:19:10 - testbed-node-2
2025-07-23 00:19:10.801288 | orchestrator | 2025-07-23 00:19:10 - clean up keypairs
2025-07-23 00:19:10.819203 | orchestrator | 2025-07-23 00:19:10 - testbed
2025-07-23 00:19:10.845255 | orchestrator | 2025-07-23 00:19:10 - wait for servers to be gone
2025-07-23 00:19:21.824408 | orchestrator | 2025-07-23 00:19:21 - clean up ports
2025-07-23 00:19:22.036005 | orchestrator | 2025-07-23 00:19:22 - 0ca415bc-f103-4c23-9fbe-4c80a75904c2
2025-07-23 00:19:22.520754 | orchestrator | 2025-07-23 00:19:22 - 33000d2c-354d-4535-b86b-d759e2817ce2
2025-07-23 00:19:22.805453 | orchestrator | 2025-07-23 00:19:22 - 5f8424f7-c0e2-41d3-bc1b-328dfd7c8efd
2025-07-23 00:19:23.096213 | orchestrator | 2025-07-23 00:19:23 - 817163fb-9b7d-4329-ad69-9c7a329c1ff3
2025-07-23 00:19:23.312636 | orchestrator | 2025-07-23 00:19:23 - 8f11eba0-337b-470b-9927-f4644db8953a
2025-07-23 00:19:23.516189 | orchestrator | 2025-07-23 00:19:23 - 903884a7-9ef8-402f-a808-339c560181db
2025-07-23 00:19:23.728411 | orchestrator | 2025-07-23 00:19:23 - a8bb3380-49d6-41ec-a8f9-b6a2d68929be
2025-07-23 00:19:23.929431 | orchestrator | 2025-07-23 00:19:23 - clean up volumes
2025-07-23 00:19:24.058662 | orchestrator | 2025-07-23 00:19:24 - testbed-volume-5-node-base
2025-07-23 00:19:24.098653 | orchestrator | 2025-07-23 00:19:24 - testbed-volume-manager-base
2025-07-23 00:19:24.139752 | orchestrator | 2025-07-23 00:19:24 - testbed-volume-4-node-base
2025-07-23 00:19:24.185003 | orchestrator | 2025-07-23 00:19:24 - testbed-volume-3-node-base
2025-07-23 00:19:24.228867 | orchestrator | 2025-07-23 00:19:24 - testbed-volume-2-node-base
2025-07-23 00:19:24.277085 | orchestrator | 2025-07-23 00:19:24 - testbed-volume-1-node-base
2025-07-23 00:19:24.318838 | orchestrator | 2025-07-23 00:19:24 - testbed-volume-0-node-base
2025-07-23 00:19:24.365051 | orchestrator | 2025-07-23 00:19:24 - testbed-volume-1-node-4
2025-07-23 00:19:24.410925 | orchestrator | 2025-07-23 00:19:24 - testbed-volume-3-node-3
2025-07-23 00:19:24.458704 | orchestrator | 2025-07-23 00:19:24 - testbed-volume-0-node-3
2025-07-23 00:19:24.504954 | orchestrator | 2025-07-23 00:19:24 - testbed-volume-7-node-4
2025-07-23 00:19:24.549231 | orchestrator | 2025-07-23 00:19:24 - testbed-volume-6-node-3
2025-07-23 00:19:24.589986 | orchestrator | 2025-07-23 00:19:24 - testbed-volume-4-node-4
2025-07-23 00:19:24.631029 | orchestrator | 2025-07-23 00:19:24 - testbed-volume-8-node-5
2025-07-23 00:19:24.673026 | orchestrator | 2025-07-23 00:19:24 - testbed-volume-5-node-5
2025-07-23 00:19:24.718370 | orchestrator | 2025-07-23 00:19:24 - testbed-volume-2-node-5
2025-07-23 00:19:24.763220 | orchestrator | 2025-07-23 00:19:24 - disconnect routers
2025-07-23 00:19:25.403082 | orchestrator | 2025-07-23 00:19:25 - testbed
2025-07-23 00:19:26.305215 | orchestrator | 2025-07-23 00:19:26 - clean up subnets
2025-07-23 00:19:26.387557 | orchestrator | 2025-07-23 00:19:26 - subnet-testbed-management
2025-07-23 00:19:26.549855 | orchestrator | 2025-07-23 00:19:26 - clean up networks
2025-07-23 00:19:26.719895 | orchestrator | 2025-07-23 00:19:26 - net-testbed-management
2025-07-23 00:19:26.996247 | orchestrator | 2025-07-23 00:19:26 - clean up security groups
2025-07-23 00:19:27.037600 | orchestrator | 2025-07-23 00:19:27 - testbed-management
2025-07-23 00:19:27.466183 | orchestrator | 2025-07-23 00:19:27 - testbed-node
2025-07-23 00:19:27.574268 | orchestrator | 2025-07-23 00:19:27 - clean up floating ips
2025-07-23 00:19:27.606224 | orchestrator | 2025-07-23 00:19:27 - 81.163.193.166
2025-07-23 00:19:27.954341 | orchestrator | 2025-07-23 00:19:27 - clean up routers
2025-07-23 00:19:28.066326 | orchestrator | 2025-07-23 00:19:28 - testbed
2025-07-23 00:19:29.348610 | orchestrator | ok: Runtime: 0:00:19.855315
2025-07-23 00:19:29.352759 |
2025-07-23 00:19:29.352924 | PLAY RECAP
2025-07-23 00:19:29.353035 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-07-23 00:19:29.353089 |
2025-07-23 00:19:29.539023 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-23 00:19:29.540219 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-23 00:19:30.292401 |
2025-07-23 00:19:30.292565 | PLAY [Cleanup play]
2025-07-23 00:19:30.308740 |
2025-07-23 00:19:30.308873 | TASK [Set cloud fact (Zuul deployment)]
2025-07-23 00:19:30.368061 | orchestrator | ok
2025-07-23 00:19:30.377198 |
2025-07-23 00:19:30.377355 | TASK [Set cloud fact (local deployment)]
2025-07-23 00:19:30.413872 | orchestrator | skipping: Conditional result was False
2025-07-23 00:19:30.423977 |
2025-07-23 00:19:30.424097 | TASK [Clean the cloud environment]
2025-07-23 00:19:31.555321 | orchestrator | 2025-07-23 00:19:31 - clean up servers
2025-07-23 00:19:32.015073 | orchestrator | 2025-07-23 00:19:32 - clean up keypairs
2025-07-23 00:19:32.029161 | orchestrator | 2025-07-23 00:19:32 - wait for servers to be gone
2025-07-23 00:19:32.074147 | orchestrator | 2025-07-23 00:19:32 - clean up ports
2025-07-23 00:19:32.171015 | orchestrator | 2025-07-23 00:19:32 - clean up volumes
2025-07-23 00:19:32.248135 | orchestrator | 2025-07-23 00:19:32 - disconnect routers
2025-07-23 00:19:32.269677 | orchestrator | 2025-07-23 00:19:32 - clean up subnets
2025-07-23 00:19:32.293031 | orchestrator | 2025-07-23 00:19:32 - clean up networks
2025-07-23 00:19:32.470619 | orchestrator | 2025-07-23 00:19:32 - clean up security groups
2025-07-23 00:19:32.503505 | orchestrator | 2025-07-23 00:19:32 - clean up floating ips
2025-07-23 00:19:32.536008 | orchestrator | 2025-07-23 00:19:32 - clean up routers
2025-07-23 00:19:32.962450 | orchestrator | ok: Runtime: 0:00:01.366251
2025-07-23 00:19:32.966164 |
2025-07-23 00:19:32.966320 | PLAY RECAP
2025-07-23 00:19:32.966439 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-07-23 00:19:32.966499 |
2025-07-23 00:19:33.062010 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-23 00:19:33.064393 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-23 00:19:33.735292 |
2025-07-23 00:19:33.735413 | PLAY [Base post-fetch]
2025-07-23 00:19:33.748932 |
2025-07-23 00:19:33.749032 | TASK [fetch-output : Set log path for multiple nodes]
2025-07-23 00:19:33.813654 | orchestrator | skipping: Conditional result was False
2025-07-23 00:19:33.828053 |
2025-07-23 00:19:33.828222 | TASK [fetch-output : Set log path for single node]
2025-07-23 00:19:33.875623 | orchestrator | ok
2025-07-23 00:19:33.883692 |
2025-07-23 00:19:33.883804 | LOOP [fetch-output : Ensure local output dirs]
2025-07-23 00:19:34.329198 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/a3993629ed3b48dd8b53a7af2a9a5d47/work/logs"
2025-07-23 00:19:34.560765 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a3993629ed3b48dd8b53a7af2a9a5d47/work/artifacts"
2025-07-23 00:19:34.832156 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a3993629ed3b48dd8b53a7af2a9a5d47/work/docs"
2025-07-23 00:19:34.857468 |
2025-07-23 00:19:34.857634 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-07-23 00:19:35.779933 | orchestrator | changed: .d..t...... ./
2025-07-23 00:19:35.780194 | orchestrator | changed: All items complete
2025-07-23 00:19:35.780231 |
2025-07-23 00:19:36.521605 | orchestrator | changed: .d..t...... ./
2025-07-23 00:19:37.272302 | orchestrator | changed: .d..t...... ./
2025-07-23 00:19:37.302989 |
2025-07-23 00:19:37.303153 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-07-23 00:19:37.335298 | orchestrator | skipping: Conditional result was False
2025-07-23 00:19:37.337878 | orchestrator | skipping: Conditional result was False
2025-07-23 00:19:37.351499 |
2025-07-23 00:19:37.351702 | PLAY RECAP
2025-07-23 00:19:37.351853 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-07-23 00:19:37.351966 |
2025-07-23 00:19:37.493092 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-23 00:19:37.496385 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-23 00:19:38.280238 |
2025-07-23 00:19:38.280424 | PLAY [Base post]
2025-07-23 00:19:38.295536 |
2025-07-23 00:19:38.295685 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-07-23 00:19:39.758084 | orchestrator | changed
2025-07-23 00:19:39.770218 |
2025-07-23 00:19:39.770369 | PLAY RECAP
2025-07-23 00:19:39.770447 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-07-23 00:19:39.770528 |
2025-07-23 00:19:39.892691 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-23 00:19:39.893771 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-07-23 00:19:40.694583 |
2025-07-23 00:19:40.694776 | PLAY [Base post-logs]
2025-07-23 00:19:40.706362 |
2025-07-23 00:19:40.706543 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-07-23 00:19:41.174174 | localhost | changed
2025-07-23 00:19:41.190312 |
2025-07-23 00:19:41.190515 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-07-23 00:19:41.230127 | localhost | ok
2025-07-23 00:19:41.237331 |
2025-07-23 00:19:41.237514 | TASK [Set zuul-log-path fact]
2025-07-23 00:19:41.268099 | localhost | ok
2025-07-23 00:19:41.289365 |
2025-07-23 00:19:41.289655 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-23 00:19:41.330446 | localhost | ok
2025-07-23 00:19:41.337686 |
2025-07-23 00:19:41.337883 | TASK [upload-logs : Create log directories]
2025-07-23 00:19:41.849724 | localhost | changed
2025-07-23 00:19:41.855014 |
2025-07-23 00:19:41.855192 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-07-23 00:19:42.369295 | localhost -> localhost | ok: Runtime: 0:00:00.007047
2025-07-23 00:19:42.378907 |
2025-07-23 00:19:42.379150 | TASK [upload-logs : Upload logs to log server]
2025-07-23 00:19:42.956928 | localhost | Output suppressed because no_log was given
2025-07-23 00:19:42.959724 |
2025-07-23 00:19:42.959877 | LOOP [upload-logs : Compress console log and json output]
2025-07-23 00:19:43.017100 | localhost | skipping: Conditional result was False
2025-07-23 00:19:43.022366 | localhost | skipping: Conditional result was False
2025-07-23 00:19:43.034810 |
2025-07-23 00:19:43.035110 | LOOP [upload-logs : Upload compressed console log and json output]
2025-07-23 00:19:43.084008 | localhost | skipping: Conditional result was False
2025-07-23 00:19:43.084603 |
2025-07-23 00:19:43.088128 | localhost | skipping: Conditional result was False
2025-07-23 00:19:43.097037 |
2025-07-23 00:19:43.097292 | LOOP [upload-logs : Upload console log and json output]