2025-05-31 17:26:45.441583 | Job console starting
2025-05-31 17:26:45.452793 | Updating git repos
2025-05-31 17:26:45.509652 | Cloning repos into workspace
2025-05-31 17:26:45.707746 | Restoring repo states
2025-05-31 17:26:45.724329 | Merging changes
2025-05-31 17:26:45.724356 | Checking out repos
2025-05-31 17:26:45.943604 | Preparing playbooks
2025-05-31 17:26:46.620791 | Running Ansible setup
2025-05-31 17:26:51.079814 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-31 17:26:51.871032 |
2025-05-31 17:26:51.871253 | PLAY [Base pre]
2025-05-31 17:26:51.889269 |
2025-05-31 17:26:51.889415 | TASK [Setup log path fact]
2025-05-31 17:26:51.910471 | orchestrator | ok
2025-05-31 17:26:51.928062 |
2025-05-31 17:26:51.928226 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-31 17:26:51.966159 | orchestrator | ok
2025-05-31 17:26:51.982388 |
2025-05-31 17:26:51.982514 | TASK [emit-job-header : Print job information]
2025-05-31 17:26:52.041741 | # Job Information
2025-05-31 17:26:52.042064 | Ansible Version: 2.16.14
2025-05-31 17:26:52.042124 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-05-31 17:26:52.042239 | Pipeline: post
2025-05-31 17:26:52.042284 | Executor: 521e9411259a
2025-05-31 17:26:52.042322 | Triggered by: https://github.com/osism/testbed/commit/cad8ddcf26903f13b5e9133b36fbaa9869237ddf
2025-05-31 17:26:52.042360 | Event ID: d41772f2-3e33-11f0-80bb-2f57e53de261
2025-05-31 17:26:52.054409 |
2025-05-31 17:26:52.054586 | LOOP [emit-job-header : Print node information]
2025-05-31 17:26:52.186533 | orchestrator | ok:
2025-05-31 17:26:52.186772 | orchestrator | # Node Information
2025-05-31 17:26:52.186808 | orchestrator | Inventory Hostname: orchestrator
2025-05-31 17:26:52.186875 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-31 17:26:52.186903 | orchestrator | Username: zuul-testbed03
2025-05-31 17:26:52.186926 | orchestrator | Distro: Debian 12.11
2025-05-31 17:26:52.186954 | orchestrator | Provider: static-testbed
2025-05-31 17:26:52.186979 | orchestrator | Region:
2025-05-31 17:26:52.187002 | orchestrator | Label: testbed-orchestrator
2025-05-31 17:26:52.187024 | orchestrator | Product Name: OpenStack Nova
2025-05-31 17:26:52.187045 | orchestrator | Interface IP: 81.163.193.140
2025-05-31 17:26:52.205916 |
2025-05-31 17:26:52.206082 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-31 17:26:52.708542 | orchestrator -> localhost | changed
2025-05-31 17:26:52.724542 |
2025-05-31 17:26:52.724727 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-31 17:26:53.868478 | orchestrator -> localhost | changed
2025-05-31 17:26:53.893605 |
2025-05-31 17:26:53.893877 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-31 17:26:54.202320 | orchestrator -> localhost | ok
2025-05-31 17:26:54.212062 |
2025-05-31 17:26:54.212247 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-31 17:26:54.244644 | orchestrator | ok
2025-05-31 17:26:54.279902 | orchestrator | included: /var/lib/zuul/builds/3efcbb5c3ed64942a56323660533a892/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-31 17:26:54.298069 |
2025-05-31 17:26:54.298368 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-31 17:26:55.248242 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-31 17:26:55.248513 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/3efcbb5c3ed64942a56323660533a892/work/3efcbb5c3ed64942a56323660533a892_id_rsa 2025-05-31 17:26:55.248553 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/3efcbb5c3ed64942a56323660533a892/work/3efcbb5c3ed64942a56323660533a892_id_rsa.pub 2025-05-31 17:26:55.248581 | orchestrator -> localhost | The key fingerprint is: 2025-05-31 17:26:55.248606 | orchestrator -> localhost | SHA256:Yjq0HoGfROqq5wchJTTxkXBYJMRM9kgwHlLKFhUlqfw zuul-build-sshkey 2025-05-31 17:26:55.248629 | orchestrator -> localhost | The key's randomart image is: 2025-05-31 17:26:55.248669 | orchestrator -> localhost | +---[RSA 3072]----+ 2025-05-31 17:26:55.248692 | orchestrator -> localhost | |@#OO=. | 2025-05-31 17:26:55.248715 | orchestrator -> localhost | |B*@oo | 2025-05-31 17:26:55.248735 | orchestrator -> localhost | |oB.o. | 2025-05-31 17:26:55.248756 | orchestrator -> localhost | |oo.+ | 2025-05-31 17:26:55.248776 | orchestrator -> localhost | | .+.+ o S | 2025-05-31 17:26:55.248803 | orchestrator -> localhost | | ..E * . | 2025-05-31 17:26:55.248824 | orchestrator -> localhost | | ..B | 2025-05-31 17:26:55.248845 | orchestrator -> localhost | | ....o | 2025-05-31 17:26:55.248866 | orchestrator -> localhost | |+o... | 2025-05-31 17:26:55.248895 | orchestrator -> localhost | +----[SHA256]-----+ 2025-05-31 17:26:55.248980 | orchestrator -> localhost | ok: Runtime: 0:00:00.415048 2025-05-31 17:26:55.257456 | 2025-05-31 17:26:55.257577 | TASK [add-build-sshkey : Remote setup ssh keys (linux)] 2025-05-31 17:26:55.289069 | orchestrator | ok 2025-05-31 17:26:55.301195 | orchestrator | included: /var/lib/zuul/builds/3efcbb5c3ed64942a56323660533a892/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml 2025-05-31 17:26:55.311275 | 2025-05-31 17:26:55.311415 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey] 2025-05-31 17:26:55.336727 | orchestrator | skipping: Conditional result was False 2025-05-31 17:26:55.354333 | 2025-05-31 17:26:55.354522 | TASK [add-build-sshkey : Enable access via build key on all nodes] 2025-05-31 17:26:55.978740 | orchestrator | changed 2025-05-31 17:26:55.988548 | 2025-05-31 17:26:55.988681 | TASK [add-build-sshkey : Make sure user has a .ssh] 2025-05-31 17:26:56.280422 | orchestrator | ok 2025-05-31 17:26:56.289409 | 2025-05-31 17:26:56.289550 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes] 2025-05-31 17:26:56.733029 | orchestrator | ok 2025-05-31 17:26:56.741436 | 2025-05-31 17:26:56.741583 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes] 2025-05-31 17:26:57.167979 | orchestrator | ok 2025-05-31 17:26:57.179333 | 2025-05-31 17:26:57.179599 | TASK [add-build-sshkey : Remote setup ssh keys (windows)] 2025-05-31 17:26:57.216701 | orchestrator | skipping: Conditional result was False 2025-05-31 17:26:57.233173 | 2025-05-31 17:26:57.233340 | TASK [remove-zuul-sshkey : Remove master key from local agent] 2025-05-31 17:26:57.706576 | orchestrator -> localhost | changed 2025-05-31 17:26:57.731977 | 2025-05-31 17:26:57.732135 | TASK [add-build-sshkey : Add back temp key] 2025-05-31 17:26:58.088334 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/3efcbb5c3ed64942a56323660533a892/work/3efcbb5c3ed64942a56323660533a892_id_rsa (zuul-build-sshkey) 2025-05-31 17:26:58.088918 | orchestrator -> localhost | ok: Runtime: 
0:00:00.019641 2025-05-31 17:26:58.104375 | 2025-05-31 17:26:58.104524 | TASK [add-build-sshkey : Verify we can still SSH to all nodes] 2025-05-31 17:26:58.577368 | orchestrator | ok 2025-05-31 17:26:58.586464 | 2025-05-31 17:26:58.586595 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)] 2025-05-31 17:26:58.621572 | orchestrator | skipping: Conditional result was False 2025-05-31 17:26:58.687214 | 2025-05-31 17:26:58.687366 | TASK [start-zuul-console : Start zuul_console daemon.] 2025-05-31 17:26:59.130860 | orchestrator | ok 2025-05-31 17:26:59.147195 | 2025-05-31 17:26:59.147373 | TASK [validate-host : Define zuul_info_dir fact] 2025-05-31 17:26:59.192689 | orchestrator | ok 2025-05-31 17:26:59.202923 | 2025-05-31 17:26:59.203075 | TASK [validate-host : Ensure Zuul Ansible directory exists] 2025-05-31 17:26:59.493756 | orchestrator -> localhost | ok 2025-05-31 17:26:59.509112 | 2025-05-31 17:26:59.509350 | TASK [validate-host : Collect information about the host] 2025-05-31 17:27:00.736893 | orchestrator | ok 2025-05-31 17:27:00.751878 | 2025-05-31 17:27:00.752007 | TASK [validate-host : Sanitize hostname] 2025-05-31 17:27:00.826441 | orchestrator | ok 2025-05-31 17:27:00.834577 | 2025-05-31 17:27:00.834721 | TASK [validate-host : Write out all ansible variables/facts known for each host] 2025-05-31 17:27:01.428521 | orchestrator -> localhost | changed 2025-05-31 17:27:01.435529 | 2025-05-31 17:27:01.435647 | TASK [validate-host : Collect information about zuul worker] 2025-05-31 17:27:01.997904 | orchestrator | ok 2025-05-31 17:27:02.007017 | 2025-05-31 17:27:02.007252 | TASK [validate-host : Write out all zuul information for each host] 2025-05-31 17:27:02.621381 | orchestrator -> localhost | changed 2025-05-31 17:27:02.644610 | 2025-05-31 17:27:02.644805 | TASK [prepare-workspace-log : Start zuul_console daemon.] 2025-05-31 17:27:02.948224 | orchestrator | ok 2025-05-31 17:27:02.959087 | 2025-05-31 17:27:02.959275 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.] 2025-05-31 17:27:22.488586 | orchestrator | changed: 2025-05-31 17:27:22.488851 | orchestrator | .d..t...... src/ 2025-05-31 17:27:22.488887 | orchestrator | .d..t...... src/github.com/ 2025-05-31 17:27:22.488913 | orchestrator | .d..t...... src/github.com/osism/ 2025-05-31 17:27:22.488935 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/ 2025-05-31 17:27:22.488957 | orchestrator | RedHat.yml 2025-05-31 17:27:22.499925 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml 2025-05-31 17:27:22.499944 | orchestrator | RedHat.yml 2025-05-31 17:27:22.499997 | orchestrator | = 2.2.0"... 2025-05-31 17:27:35.826801 | orchestrator | 17:27:35.826 STDOUT terraform: - Finding latest version of hashicorp/null... 2025-05-31 17:27:35.898463 | orchestrator | 17:27:35.898 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"... 2025-05-31 17:27:37.043072 | orchestrator | 17:27:37.042 STDOUT terraform: - Installing hashicorp/local v2.5.3... 2025-05-31 17:27:37.805237 | orchestrator | 17:27:37.805 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80) 2025-05-31 17:27:38.745084 | orchestrator | 17:27:38.744 STDOUT terraform: - Installing hashicorp/null v3.2.4... 
2025-05-31 17:27:39.597823 | orchestrator | 17:27:39.597 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80) 2025-05-31 17:27:40.528136 | orchestrator | 17:27:40.527 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0... 2025-05-31 17:27:41.598073 | orchestrator | 17:27:41.597 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2) 2025-05-31 17:27:41.598234 | orchestrator | 17:27:41.597 STDOUT terraform: Providers are signed by their developers. 2025-05-31 17:27:41.598255 | orchestrator | 17:27:41.597 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here: 2025-05-31 17:27:41.598269 | orchestrator | 17:27:41.598 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/ 2025-05-31 17:27:41.598281 | orchestrator | 17:27:41.598 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider 2025-05-31 17:27:41.598314 | orchestrator | 17:27:41.598 STDOUT terraform: selections it made above. Include this file in your version control repository 2025-05-31 17:27:41.598335 | orchestrator | 17:27:41.598 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when 2025-05-31 17:27:41.598351 | orchestrator | 17:27:41.598 STDOUT terraform: you run "tofu init" in the future. 2025-05-31 17:27:41.599078 | orchestrator | 17:27:41.598 STDOUT terraform: OpenTofu has been successfully initialized! 2025-05-31 17:27:41.599108 | orchestrator | 17:27:41.599 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see 2025-05-31 17:27:41.599181 | orchestrator | 17:27:41.599 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands 2025-05-31 17:27:41.599197 | orchestrator | 17:27:41.599 STDOUT terraform: should now work. 2025-05-31 17:27:41.599215 | orchestrator | 17:27:41.599 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu, 2025-05-31 17:27:41.599325 | orchestrator | 17:27:41.599 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other 2025-05-31 17:27:41.599346 | orchestrator | 17:27:41.599 STDOUT terraform: commands will detect it and remind you to do so if necessary. 2025-05-31 17:27:41.827333 | orchestrator | 17:27:41.827 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead. 2025-05-31 17:27:42.046408 | orchestrator | 17:27:42.046 STDOUT terraform: Created and switched to workspace "ci"! 2025-05-31 17:27:42.048747 | orchestrator | 17:27:42.046 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state, 2025-05-31 17:27:42.048812 | orchestrator | 17:27:42.046 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state 2025-05-31 17:27:42.048818 | orchestrator | 17:27:42.046 STDOUT terraform: for this configuration. 2025-05-31 17:27:42.266325 | orchestrator | 17:27:42.266 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead. 
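Editor's note: the "tofu init" output above shows Terragrunt resolving three providers (hashicorp/local v2.5.3, hashicorp/null v3.2.4, and terraform-provider-openstack/openstack v3.1.0 under the constraint ">= 1.53.0"), writing .terraform.lock.hcl, and then creating and switching to a fresh workspace named "ci". A minimal required_providers sketch consistent with those selections follows; the real constraints for local and null are not visible in this log and are left open, so this is an illustration rather than the actual osism/testbed configuration.

# Sketch only: provider requirements consistent with the init output above.
# The real constraints live in the osism/testbed Terraform configuration.
terraform {
  required_providers {
    local = {
      source = "hashicorp/local"                       # resolved to v2.5.3 in this run
    }
    null = {
      source = "hashicorp/null"                        # "latest" resolved to v3.2.4
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"                            # constraint shown in the log; v3.1.0 selected
    }
  }
}

Because the "ci" workspace is new and empty, the plan that follows starts from no existing state, which is why every resource below is reported as a create.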
2025-05-31 17:27:42.396870 | orchestrator | 17:27:42.396 STDOUT terraform: ci.auto.tfvars 2025-05-31 17:27:42.404503 | orchestrator | 17:27:42.404 STDOUT terraform: default_custom.tf 2025-05-31 17:27:42.588727 | orchestrator | 17:27:42.588 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead. 2025-05-31 17:27:43.563604 | orchestrator | 17:27:43.563 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-05-31 17:27:44.114710 | orchestrator | 17:27:44.114 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-05-31 17:27:44.345704 | orchestrator | 17:27:44.345 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-05-31 17:27:44.345817 | orchestrator | 17:27:44.345 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-05-31 17:27:44.345865 | orchestrator | 17:27:44.345 STDOUT terraform:  + create 2025-05-31 17:27:44.345917 | orchestrator | 17:27:44.345 STDOUT terraform:  <= read (data resources) 2025-05-31 17:27:44.345991 | orchestrator | 17:27:44.345 STDOUT terraform: OpenTofu will perform the following actions: 2025-05-31 17:27:44.346615 | orchestrator | 17:27:44.346 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-05-31 17:27:44.346695 | orchestrator | 17:27:44.346 STDOUT terraform:  # (config refers to values not yet known) 2025-05-31 17:27:44.346813 | orchestrator | 17:27:44.346 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-05-31 17:27:44.346892 | orchestrator | 17:27:44.346 STDOUT terraform:  + checksum = (known after apply) 2025-05-31 17:27:44.346969 | orchestrator | 17:27:44.346 STDOUT terraform:  + created_at = (known after apply) 2025-05-31 17:27:44.347042 | orchestrator | 17:27:44.346 STDOUT terraform:  + file = (known after apply) 2025-05-31 17:27:44.347106 | orchestrator | 17:27:44.347 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.347180 | orchestrator | 17:27:44.347 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 17:27:44.347255 | orchestrator | 17:27:44.347 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-05-31 17:27:44.347363 | orchestrator | 17:27:44.347 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-05-31 17:27:44.347406 | orchestrator | 17:27:44.347 STDOUT terraform:  + most_recent = true 2025-05-31 17:27:44.347477 | orchestrator | 17:27:44.347 STDOUT terraform:  + name = (known after apply) 2025-05-31 17:27:44.347546 | orchestrator | 17:27:44.347 STDOUT terraform:  + protected = (known after apply) 2025-05-31 17:27:44.347615 | orchestrator | 17:27:44.347 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.347692 | orchestrator | 17:27:44.347 STDOUT terraform:  + schema = (known after apply) 2025-05-31 17:27:44.347775 | orchestrator | 17:27:44.347 STDOUT terraform:  + size_bytes = (known after apply) 2025-05-31 17:27:44.347845 | orchestrator | 17:27:44.347 STDOUT terraform:  + tags = (known after apply) 2025-05-31 17:27:44.347917 | orchestrator | 17:27:44.347 STDOUT terraform:  + updated_at = (known after apply) 2025-05-31 17:27:44.347950 | orchestrator | 17:27:44.347 STDOUT terraform:  } 2025-05-31 17:27:44.348103 | orchestrator | 17:27:44.347 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 
2025-05-31 17:27:44.348176 | orchestrator | 17:27:44.348 STDOUT terraform:  # (config refers to values not yet known) 2025-05-31 17:27:44.348265 | orchestrator | 17:27:44.348 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-05-31 17:27:44.348324 | orchestrator | 17:27:44.348 STDOUT terraform:  + checksum = (known after apply) 2025-05-31 17:27:44.348392 | orchestrator | 17:27:44.348 STDOUT terraform:  + created_at = (known after apply) 2025-05-31 17:27:44.348462 | orchestrator | 17:27:44.348 STDOUT terraform:  + file = (known after apply) 2025-05-31 17:27:44.348532 | orchestrator | 17:27:44.348 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.348605 | orchestrator | 17:27:44.348 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 17:27:44.348676 | orchestrator | 17:27:44.348 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-05-31 17:27:44.348777 | orchestrator | 17:27:44.348 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-05-31 17:27:44.348827 | orchestrator | 17:27:44.348 STDOUT terraform:  + most_recent = true 2025-05-31 17:27:44.348901 | orchestrator | 17:27:44.348 STDOUT terraform:  + name = (known after apply) 2025-05-31 17:27:44.348994 | orchestrator | 17:27:44.348 STDOUT terraform:  + protected = (known after apply) 2025-05-31 17:27:44.349066 | orchestrator | 17:27:44.348 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.349135 | orchestrator | 17:27:44.349 STDOUT terraform:  + schema = (known after apply) 2025-05-31 17:27:44.349219 | orchestrator | 17:27:44.349 STDOUT terraform:  + size_bytes = (known after apply) 2025-05-31 17:27:44.349285 | orchestrator | 17:27:44.349 STDOUT terraform:  + tags = (known after apply) 2025-05-31 17:27:44.349348 | orchestrator | 17:27:44.349 STDOUT terraform:  + updated_at = (known after apply) 2025-05-31 17:27:44.349382 | orchestrator | 17:27:44.349 STDOUT terraform:  } 2025-05-31 17:27:44.349470 | orchestrator | 17:27:44.349 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-05-31 17:27:44.349553 | orchestrator | 17:27:44.349 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-05-31 17:27:44.349659 | orchestrator | 17:27:44.349 STDOUT terraform:  + content = (known after apply) 2025-05-31 17:27:44.349789 | orchestrator | 17:27:44.349 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-31 17:27:44.349876 | orchestrator | 17:27:44.349 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-31 17:27:44.349964 | orchestrator | 17:27:44.349 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-31 17:27:44.350093 | orchestrator | 17:27:44.349 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-31 17:27:44.350175 | orchestrator | 17:27:44.350 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-31 17:27:44.350263 | orchestrator | 17:27:44.350 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-31 17:27:44.350357 | orchestrator | 17:27:44.350 STDOUT terraform:  + directory_permission = "0777" 2025-05-31 17:27:44.350420 | orchestrator | 17:27:44.350 STDOUT terraform:  + file_permission = "0644" 2025-05-31 17:27:44.350510 | orchestrator | 17:27:44.350 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-05-31 17:27:44.350600 | orchestrator | 17:27:44.350 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.350632 | orchestrator | 17:27:44.350 STDOUT terraform:  } 2025-05-31 17:27:44.350707 | orchestrator | 17:27:44.350 STDOUT 
terraform:  # local_file.id_rsa_pub will be created 2025-05-31 17:27:44.350825 | orchestrator | 17:27:44.350 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-05-31 17:27:44.350910 | orchestrator | 17:27:44.350 STDOUT terraform:  + content = (known after apply) 2025-05-31 17:27:44.350982 | orchestrator | 17:27:44.350 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-31 17:27:44.351053 | orchestrator | 17:27:44.350 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-31 17:27:44.351123 | orchestrator | 17:27:44.351 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-31 17:27:44.351196 | orchestrator | 17:27:44.351 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-31 17:27:44.351269 | orchestrator | 17:27:44.351 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-31 17:27:44.351340 | orchestrator | 17:27:44.351 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-31 17:27:44.351389 | orchestrator | 17:27:44.351 STDOUT terraform:  + directory_permission = "0777" 2025-05-31 17:27:44.351438 | orchestrator | 17:27:44.351 STDOUT terraform:  + file_permission = "0644" 2025-05-31 17:27:44.351502 | orchestrator | 17:27:44.351 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-05-31 17:27:44.351573 | orchestrator | 17:27:44.351 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.351600 | orchestrator | 17:27:44.351 STDOUT terraform:  } 2025-05-31 17:27:44.351647 | orchestrator | 17:27:44.351 STDOUT terraform:  # local_file.inventory will be created 2025-05-31 17:27:44.351699 | orchestrator | 17:27:44.351 STDOUT terraform:  + resource "local_file" "inventory" { 2025-05-31 17:27:44.351837 | orchestrator | 17:27:44.351 STDOUT terraform:  + content = (known after apply) 2025-05-31 17:27:44.351856 | orchestrator | 17:27:44.351 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-31 17:27:44.351925 | orchestrator | 17:27:44.351 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-31 17:27:44.351997 | orchestrator | 17:27:44.351 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-31 17:27:44.352076 | orchestrator | 17:27:44.351 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-31 17:27:44.352140 | orchestrator | 17:27:44.352 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-31 17:27:44.352210 | orchestrator | 17:27:44.352 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-31 17:27:44.352257 | orchestrator | 17:27:44.352 STDOUT terraform:  + directory_permission = "0777" 2025-05-31 17:27:44.352306 | orchestrator | 17:27:44.352 STDOUT terraform:  + file_permission = "0644" 2025-05-31 17:27:44.352366 | orchestrator | 17:27:44.352 STDOUT terraform:  + filename = "inventory.ci" 2025-05-31 17:27:44.352438 | orchestrator | 17:27:44.352 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.352464 | orchestrator | 17:27:44.352 STDOUT terraform:  } 2025-05-31 17:27:44.352524 | orchestrator | 17:27:44.352 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-05-31 17:27:44.352583 | orchestrator | 17:27:44.352 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-05-31 17:27:44.352646 | orchestrator | 17:27:44.352 STDOUT terraform:  + content = (sensitive value) 2025-05-31 17:27:44.352717 | orchestrator | 17:27:44.352 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-31 17:27:44.352830 | orchestrator | 17:27:44.352 STDOUT terraform:  + 
content_base64sha512 = (known after apply) 2025-05-31 17:27:44.352901 | orchestrator | 17:27:44.352 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-31 17:27:44.352979 | orchestrator | 17:27:44.352 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-31 17:27:44.353044 | orchestrator | 17:27:44.352 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-31 17:27:44.353116 | orchestrator | 17:27:44.353 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-31 17:27:44.354233 | orchestrator | 17:27:44.353 STDOUT terraform:  + directory_permission = "0700" 2025-05-31 17:27:44.354316 | orchestrator | 17:27:44.354 STDOUT terraform:  + file_permission = "0600" 2025-05-31 17:27:44.354387 | orchestrator | 17:27:44.354 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-05-31 17:27:44.354452 | orchestrator | 17:27:44.354 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.354475 | orchestrator | 17:27:44.354 STDOUT terraform:  } 2025-05-31 17:27:44.354528 | orchestrator | 17:27:44.354 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-05-31 17:27:44.354636 | orchestrator | 17:27:44.354 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-05-31 17:27:44.354674 | orchestrator | 17:27:44.354 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.354697 | orchestrator | 17:27:44.354 STDOUT terraform:  } 2025-05-31 17:27:44.354811 | orchestrator | 17:27:44.354 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-05-31 17:27:44.354895 | orchestrator | 17:27:44.354 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-05-31 17:27:44.354957 | orchestrator | 17:27:44.354 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 17:27:44.355000 | orchestrator | 17:27:44.354 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 17:27:44.355066 | orchestrator | 17:27:44.354 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.355128 | orchestrator | 17:27:44.355 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 17:27:44.355189 | orchestrator | 17:27:44.355 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 17:27:44.355268 | orchestrator | 17:27:44.355 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-05-31 17:27:44.355333 | orchestrator | 17:27:44.355 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.355365 | orchestrator | 17:27:44.355 STDOUT terraform:  + size = 80 2025-05-31 17:27:44.355409 | orchestrator | 17:27:44.355 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 17:27:44.355451 | orchestrator | 17:27:44.355 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 17:27:44.355474 | orchestrator | 17:27:44.355 STDOUT terraform:  } 2025-05-31 17:27:44.355557 | orchestrator | 17:27:44.355 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-05-31 17:27:44.355638 | orchestrator | 17:27:44.355 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-31 17:27:44.355700 | orchestrator | 17:27:44.355 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 17:27:44.355741 | orchestrator | 17:27:44.355 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 17:27:44.355822 | orchestrator | 17:27:44.355 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.355886 | orchestrator | 17:27:44.355 STDOUT terraform:  + image_id = (known 
after apply) 2025-05-31 17:27:44.355946 | orchestrator | 17:27:44.355 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 17:27:44.356024 | orchestrator | 17:27:44.355 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-05-31 17:27:44.356084 | orchestrator | 17:27:44.356 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.356144 | orchestrator | 17:27:44.356 STDOUT terraform:  + size = 80 2025-05-31 17:27:44.356185 | orchestrator | 17:27:44.356 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 17:27:44.356227 | orchestrator | 17:27:44.356 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 17:27:44.356249 | orchestrator | 17:27:44.356 STDOUT terraform:  } 2025-05-31 17:27:44.356397 | orchestrator | 17:27:44.356 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-05-31 17:27:44.356478 | orchestrator | 17:27:44.356 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-31 17:27:44.356543 | orchestrator | 17:27:44.356 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 17:27:44.356579 | orchestrator | 17:27:44.356 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 17:27:44.356644 | orchestrator | 17:27:44.356 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.356704 | orchestrator | 17:27:44.356 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 17:27:44.356787 | orchestrator | 17:27:44.356 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 17:27:44.356864 | orchestrator | 17:27:44.356 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-05-31 17:27:44.356925 | orchestrator | 17:27:44.356 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.356965 | orchestrator | 17:27:44.356 STDOUT terraform:  + size = 80 2025-05-31 17:27:44.357003 | orchestrator | 17:27:44.356 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 17:27:44.357044 | orchestrator | 17:27:44.357 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 17:27:44.357065 | orchestrator | 17:27:44.357 STDOUT terraform:  } 2025-05-31 17:27:44.357146 | orchestrator | 17:27:44.357 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-05-31 17:27:44.357224 | orchestrator | 17:27:44.357 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-31 17:27:44.357285 | orchestrator | 17:27:44.357 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 17:27:44.357325 | orchestrator | 17:27:44.357 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 17:27:44.357390 | orchestrator | 17:27:44.357 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.357451 | orchestrator | 17:27:44.357 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 17:27:44.357531 | orchestrator | 17:27:44.357 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 17:27:44.357609 | orchestrator | 17:27:44.357 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-05-31 17:27:44.357669 | orchestrator | 17:27:44.357 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.357704 | orchestrator | 17:27:44.357 STDOUT terraform:  + size = 80 2025-05-31 17:27:44.357759 | orchestrator | 17:27:44.357 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 17:27:44.357827 | orchestrator | 17:27:44.357 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 17:27:44.357850 | orchestrator | 17:27:44.357 STDOUT terraform: 
 } 2025-05-31 17:27:44.357930 | orchestrator | 17:27:44.357 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-05-31 17:27:44.358008 | orchestrator | 17:27:44.357 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-31 17:27:44.358651 | orchestrator | 17:27:44.358 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 17:27:44.358695 | orchestrator | 17:27:44.358 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 17:27:44.358825 | orchestrator | 17:27:44.358 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.358888 | orchestrator | 17:27:44.358 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 17:27:44.358949 | orchestrator | 17:27:44.358 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 17:27:44.359025 | orchestrator | 17:27:44.358 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-05-31 17:27:44.359082 | orchestrator | 17:27:44.359 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.359115 | orchestrator | 17:27:44.359 STDOUT terraform:  + size = 80 2025-05-31 17:27:44.359153 | orchestrator | 17:27:44.359 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 17:27:44.359191 | orchestrator | 17:27:44.359 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 17:27:44.359211 | orchestrator | 17:27:44.359 STDOUT terraform:  } 2025-05-31 17:27:44.359286 | orchestrator | 17:27:44.359 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-05-31 17:27:44.359359 | orchestrator | 17:27:44.359 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-31 17:27:44.359416 | orchestrator | 17:27:44.359 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 17:27:44.359454 | orchestrator | 17:27:44.359 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 17:27:44.359512 | orchestrator | 17:27:44.359 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.359570 | orchestrator | 17:27:44.359 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 17:27:44.359627 | orchestrator | 17:27:44.359 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 17:27:44.359700 | orchestrator | 17:27:44.359 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-05-31 17:27:44.359774 | orchestrator | 17:27:44.359 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.359806 | orchestrator | 17:27:44.359 STDOUT terraform:  + size = 80 2025-05-31 17:27:44.359848 | orchestrator | 17:27:44.359 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 17:27:44.359883 | orchestrator | 17:27:44.359 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 17:27:44.359905 | orchestrator | 17:27:44.359 STDOUT terraform:  } 2025-05-31 17:27:44.359981 | orchestrator | 17:27:44.359 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-05-31 17:27:44.360054 | orchestrator | 17:27:44.359 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-31 17:27:44.360111 | orchestrator | 17:27:44.360 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 17:27:44.360149 | orchestrator | 17:27:44.360 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 17:27:44.360207 | orchestrator | 17:27:44.360 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.360267 | orchestrator | 17:27:44.360 STDOUT terraform:  + image_id = (known 
after apply) 2025-05-31 17:27:44.360324 | orchestrator | 17:27:44.360 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 17:27:44.360393 | orchestrator | 17:27:44.360 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-05-31 17:27:44.360450 | orchestrator | 17:27:44.360 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.360483 | orchestrator | 17:27:44.360 STDOUT terraform:  + size = 80 2025-05-31 17:27:44.360521 | orchestrator | 17:27:44.360 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 17:27:44.360559 | orchestrator | 17:27:44.360 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 17:27:44.360579 | orchestrator | 17:27:44.360 STDOUT terraform:  } 2025-05-31 17:27:44.360650 | orchestrator | 17:27:44.360 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-05-31 17:27:44.360719 | orchestrator | 17:27:44.360 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-31 17:27:44.360793 | orchestrator | 17:27:44.360 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 17:27:44.360830 | orchestrator | 17:27:44.360 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 17:27:44.360888 | orchestrator | 17:27:44.360 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.360944 | orchestrator | 17:27:44.360 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 17:27:44.361007 | orchestrator | 17:27:44.360 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-05-31 17:27:44.361065 | orchestrator | 17:27:44.361 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.361108 | orchestrator | 17:27:44.361 STDOUT terraform:  + size = 20 2025-05-31 17:27:44.361147 | orchestrator | 17:27:44.361 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 17:27:44.361196 | orchestrator | 17:27:44.361 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 17:27:44.361423 | orchestrator | 17:27:44.361 STDOUT terraform:  } 2025-05-31 17:27:44.362515 | orchestrator | 17:27:44.361 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-05-31 17:27:44.362584 | orchestrator | 17:27:44.362 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-31 17:27:44.362640 | orchestrator | 17:27:44.362 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 17:27:44.362680 | orchestrator | 17:27:44.362 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 17:27:44.362739 | orchestrator | 17:27:44.362 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.362837 | orchestrator | 17:27:44.362 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 17:27:44.362901 | orchestrator | 17:27:44.362 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-05-31 17:27:44.362960 | orchestrator | 17:27:44.362 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.362991 | orchestrator | 17:27:44.362 STDOUT terraform:  + size = 20 2025-05-31 17:27:44.363029 | orchestrator | 17:27:44.362 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 17:27:44.363064 | orchestrator | 17:27:44.363 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 17:27:44.363093 | orchestrator | 17:27:44.363 STDOUT terraform:  } 2025-05-31 17:27:44.363150 | orchestrator | 17:27:44.363 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-05-31 17:27:44.363212 | orchestrator | 17:27:44.363 STDOUT terraform:  + resource 
"openstack_blockstorage_volume_v3" "node_volume" { 2025-05-31 17:27:44.363266 | orchestrator | 17:27:44.363 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 17:27:44.363296 | orchestrator | 17:27:44.363 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 17:27:44.363350 | orchestrator | 17:27:44.363 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.363395 | orchestrator | 17:27:44.363 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 17:27:44.363453 | orchestrator | 17:27:44.363 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-05-31 17:27:44.363505 | orchestrator | 17:27:44.363 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.363534 | orchestrator | 17:27:44.363 STDOUT terraform:  + size = 20 2025-05-31 17:27:44.363576 | orchestrator | 17:27:44.363 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 17:27:44.363602 | orchestrator | 17:27:44.363 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 17:27:44.363622 | orchestrator | 17:27:44.363 STDOUT terraform:  } 2025-05-31 17:27:44.363684 | orchestrator | 17:27:44.363 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-05-31 17:27:44.363769 | orchestrator | 17:27:44.363 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-31 17:27:44.363812 | orchestrator | 17:27:44.363 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 17:27:44.363895 | orchestrator | 17:27:44.363 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 17:27:44.363956 | orchestrator | 17:27:44.363 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.364013 | orchestrator | 17:27:44.363 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 17:27:44.364061 | orchestrator | 17:27:44.364 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-05-31 17:27:44.364112 | orchestrator | 17:27:44.364 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.364142 | orchestrator | 17:27:44.364 STDOUT terraform:  + size = 20 2025-05-31 17:27:44.364182 | orchestrator | 17:27:44.364 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 17:27:44.364208 | orchestrator | 17:27:44.364 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 17:27:44.364227 | orchestrator | 17:27:44.364 STDOUT terraform:  } 2025-05-31 17:27:44.364290 | orchestrator | 17:27:44.364 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-05-31 17:27:44.364350 | orchestrator | 17:27:44.364 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-31 17:27:44.364409 | orchestrator | 17:27:44.364 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 17:27:44.364437 | orchestrator | 17:27:44.364 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 17:27:44.364493 | orchestrator | 17:27:44.364 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.364543 | orchestrator | 17:27:44.364 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 17:27:44.364597 | orchestrator | 17:27:44.364 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-05-31 17:27:44.364647 | orchestrator | 17:27:44.364 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.364677 | orchestrator | 17:27:44.364 STDOUT terraform:  + size = 20 2025-05-31 17:27:44.364721 | orchestrator | 17:27:44.364 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 17:27:44.364744 | orchestrator | 17:27:44.364 
STDOUT terraform:  + volume_type = "ssd" 2025-05-31 17:27:44.364811 | orchestrator | 17:27:44.364 STDOUT terraform:  } 2025-05-31 17:27:44.364839 | orchestrator | 17:27:44.364 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-05-31 17:27:44.364908 | orchestrator | 17:27:44.364 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-31 17:27:44.364955 | orchestrator | 17:27:44.364 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 17:27:44.364990 | orchestrator | 17:27:44.364 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 17:27:44.365041 | orchestrator | 17:27:44.364 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.365115 | orchestrator | 17:27:44.365 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 17:27:44.365170 | orchestrator | 17:27:44.365 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-05-31 17:27:44.365223 | orchestrator | 17:27:44.365 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.365249 | orchestrator | 17:27:44.365 STDOUT terraform:  + size = 20 2025-05-31 17:27:44.365290 | orchestrator | 17:27:44.365 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 17:27:44.365317 | orchestrator | 17:27:44.365 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 17:27:44.365335 | orchestrator | 17:27:44.365 STDOUT terraform:  } 2025-05-31 17:27:44.365401 | orchestrator | 17:27:44.365 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-05-31 17:27:44.365462 | orchestrator | 17:27:44.365 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-31 17:27:44.365513 | orchestrator | 17:27:44.365 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 17:27:44.365547 | orchestrator | 17:27:44.365 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 17:27:44.365598 | orchestrator | 17:27:44.365 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.365649 | orchestrator | 17:27:44.365 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 17:27:44.365703 | orchestrator | 17:27:44.365 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-05-31 17:27:44.365823 | orchestrator | 17:27:44.365 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.365839 | orchestrator | 17:27:44.365 STDOUT terraform:  + size = 20 2025-05-31 17:27:44.365853 | orchestrator | 17:27:44.365 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 17:27:44.365873 | orchestrator | 17:27:44.365 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 17:27:44.365894 | orchestrator | 17:27:44.365 STDOUT terraform:  } 2025-05-31 17:27:44.365958 | orchestrator | 17:27:44.365 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-05-31 17:27:44.366033 | orchestrator | 17:27:44.365 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-31 17:27:44.366085 | orchestrator | 17:27:44.366 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 17:27:44.366120 | orchestrator | 17:27:44.366 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 17:27:44.366171 | orchestrator | 17:27:44.366 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.366222 | orchestrator | 17:27:44.366 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 17:27:44.366279 | orchestrator | 17:27:44.366 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-05-31 
17:27:44.366329 | orchestrator | 17:27:44.366 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.366359 | orchestrator | 17:27:44.366 STDOUT terraform:  + size = 20 2025-05-31 17:27:44.366392 | orchestrator | 17:27:44.366 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 17:27:44.366424 | orchestrator | 17:27:44.366 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 17:27:44.366443 | orchestrator | 17:27:44.366 STDOUT terraform:  } 2025-05-31 17:27:44.366506 | orchestrator | 17:27:44.366 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-05-31 17:27:44.366564 | orchestrator | 17:27:44.366 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-31 17:27:44.366614 | orchestrator | 17:27:44.366 STDOUT terraform:  + attachment = (known after apply) 2025-05-31 17:27:44.366648 | orchestrator | 17:27:44.366 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 17:27:44.366698 | orchestrator | 17:27:44.366 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.366832 | orchestrator | 17:27:44.366 STDOUT terraform:  + metadata = (known after apply) 2025-05-31 17:27:44.366850 | orchestrator | 17:27:44.366 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-05-31 17:27:44.366885 | orchestrator | 17:27:44.366 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.366917 | orchestrator | 17:27:44.366 STDOUT terraform:  + size = 20 2025-05-31 17:27:44.366961 | orchestrator | 17:27:44.366 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-31 17:27:44.366988 | orchestrator | 17:27:44.366 STDOUT terraform:  + volume_type = "ssd" 2025-05-31 17:27:44.367007 | orchestrator | 17:27:44.366 STDOUT terraform:  } 2025-05-31 17:27:44.367068 | orchestrator | 17:27:44.367 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-05-31 17:27:44.367128 | orchestrator | 17:27:44.367 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-05-31 17:27:44.367177 | orchestrator | 17:27:44.367 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-31 17:27:44.367227 | orchestrator | 17:27:44.367 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-31 17:27:44.367275 | orchestrator | 17:27:44.367 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-31 17:27:44.367323 | orchestrator | 17:27:44.367 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 17:27:44.367354 | orchestrator | 17:27:44.367 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 17:27:44.367382 | orchestrator | 17:27:44.367 STDOUT terraform:  + config_drive = true 2025-05-31 17:27:44.367427 | orchestrator | 17:27:44.367 STDOUT terraform:  + created = (known after apply) 2025-05-31 17:27:44.367471 | orchestrator | 17:27:44.367 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-31 17:27:44.367508 | orchestrator | 17:27:44.367 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-05-31 17:27:44.367538 | orchestrator | 17:27:44.367 STDOUT terraform:  + force_delete = false 2025-05-31 17:27:44.367581 | orchestrator | 17:27:44.367 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-31 17:27:44.367626 | orchestrator | 17:27:44.367 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.367677 | orchestrator | 17:27:44.367 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 17:27:44.367716 | orchestrator | 17:27:44.367 STDOUT terraform:  + image_name = (known after 
apply) 2025-05-31 17:27:44.367766 | orchestrator | 17:27:44.367 STDOUT terraform:  + key_pair = "testbed" 2025-05-31 17:27:44.367801 | orchestrator | 17:27:44.367 STDOUT terraform:  + name = "testbed-manager" 2025-05-31 17:27:44.367832 | orchestrator | 17:27:44.367 STDOUT terraform:  + power_state = "active" 2025-05-31 17:27:44.367876 | orchestrator | 17:27:44.367 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.367920 | orchestrator | 17:27:44.367 STDOUT terraform:  + security_groups = (known after apply) 2025-05-31 17:27:44.367949 | orchestrator | 17:27:44.367 STDOUT terraform:  + stop_before_destroy = false 2025-05-31 17:27:44.367994 | orchestrator | 17:27:44.367 STDOUT terraform:  + updated = (known after apply) 2025-05-31 17:27:44.368038 | orchestrator | 17:27:44.367 STDOUT terraform:  + user_data = (known after apply) 2025-05-31 17:27:44.368060 | orchestrator | 17:27:44.368 STDOUT terraform:  + block_device { 2025-05-31 17:27:44.368090 | orchestrator | 17:27:44.368 STDOUT terraform:  + boot_index = 0 2025-05-31 17:27:44.368126 | orchestrator | 17:27:44.368 STDOUT terraform:  + delete_on_termination = false 2025-05-31 17:27:44.368165 | orchestrator | 17:27:44.368 STDOUT terraform:  + destination_type = "volume" 2025-05-31 17:27:44.368200 | orchestrator | 17:27:44.368 STDOUT terraform:  + multiattach = false 2025-05-31 17:27:44.368238 | orchestrator | 17:27:44.368 STDOUT terraform:  + source_type = "volume" 2025-05-31 17:27:44.368298 | orchestrator | 17:27:44.368 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 17:27:44.368304 | orchestrator | 17:27:44.368 STDOUT terraform:  } 2025-05-31 17:27:44.368319 | orchestrator | 17:27:44.368 STDOUT terraform:  + network { 2025-05-31 17:27:44.368344 | orchestrator | 17:27:44.368 STDOUT terraform:  + access_network = false 2025-05-31 17:27:44.368383 | orchestrator | 17:27:44.368 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-31 17:27:44.368422 | orchestrator | 17:27:44.368 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-31 17:27:44.368462 | orchestrator | 17:27:44.368 STDOUT terraform:  + mac = (known after apply) 2025-05-31 17:27:44.368501 | orchestrator | 17:27:44.368 STDOUT terraform:  + name = (known after apply) 2025-05-31 17:27:44.368542 | orchestrator | 17:27:44.368 STDOUT terraform:  + port = (known after apply) 2025-05-31 17:27:44.368581 | orchestrator | 17:27:44.368 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 17:27:44.368595 | orchestrator | 17:27:44.368 STDOUT terraform:  } 2025-05-31 17:27:44.368611 | orchestrator | 17:27:44.368 STDOUT terraform:  } 2025-05-31 17:27:44.368665 | orchestrator | 17:27:44.368 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-05-31 17:27:44.368722 | orchestrator | 17:27:44.368 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-31 17:27:44.368781 | orchestrator | 17:27:44.368 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-31 17:27:44.368823 | orchestrator | 17:27:44.368 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-31 17:27:44.368867 | orchestrator | 17:27:44.368 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-31 17:27:44.368911 | orchestrator | 17:27:44.368 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 17:27:44.368942 | orchestrator | 17:27:44.368 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 17:27:44.368967 | orchestrator | 17:27:44.368 STDOUT terraform:  + config_drive = true 
2025-05-31 17:27:44.369011 | orchestrator | 17:27:44.368 STDOUT terraform:  + created = (known after apply) 2025-05-31 17:27:44.369054 | orchestrator | 17:27:44.369 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-31 17:27:44.369091 | orchestrator | 17:27:44.369 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-31 17:27:44.369121 | orchestrator | 17:27:44.369 STDOUT terraform:  + force_delete = false 2025-05-31 17:27:44.369164 | orchestrator | 17:27:44.369 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-31 17:27:44.369209 | orchestrator | 17:27:44.369 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.369252 | orchestrator | 17:27:44.369 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 17:27:44.369297 | orchestrator | 17:27:44.369 STDOUT terraform:  + image_name = (known after apply) 2025-05-31 17:27:44.369327 | orchestrator | 17:27:44.369 STDOUT terraform:  + key_pair = "testbed" 2025-05-31 17:27:44.369368 | orchestrator | 17:27:44.369 STDOUT terraform:  + name = "testbed-node-0" 2025-05-31 17:27:44.369398 | orchestrator | 17:27:44.369 STDOUT terraform:  + power_state = "active" 2025-05-31 17:27:44.369448 | orchestrator | 17:27:44.369 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.369484 | orchestrator | 17:27:44.369 STDOUT terraform:  + security_groups = (known after apply) 2025-05-31 17:27:44.369515 | orchestrator | 17:27:44.369 STDOUT terraform:  + stop_before_destroy = false 2025-05-31 17:27:44.369558 | orchestrator | 17:27:44.369 STDOUT terraform:  + updated = (known after apply) 2025-05-31 17:27:44.369620 | orchestrator | 17:27:44.369 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-31 17:27:44.369640 | orchestrator | 17:27:44.369 STDOUT terraform:  + block_device { 2025-05-31 17:27:44.369670 | orchestrator | 17:27:44.369 STDOUT terraform:  + boot_index = 0 2025-05-31 17:27:44.369705 | orchestrator | 17:27:44.369 STDOUT terraform:  + delete_on_termination = false 2025-05-31 17:27:44.369742 | orchestrator | 17:27:44.369 STDOUT terraform:  + destination_type = "volume" 2025-05-31 17:27:44.369798 | orchestrator | 17:27:44.369 STDOUT terraform:  + multiattach = false 2025-05-31 17:27:44.369836 | orchestrator | 17:27:44.369 STDOUT terraform:  + source_type = "volume" 2025-05-31 17:27:44.369886 | orchestrator | 17:27:44.369 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 17:27:44.369903 | orchestrator | 17:27:44.369 STDOUT terraform:  } 2025-05-31 17:27:44.369922 | orchestrator | 17:27:44.369 STDOUT terraform:  + network { 2025-05-31 17:27:44.369948 | orchestrator | 17:27:44.369 STDOUT terraform:  + access_network = false 2025-05-31 17:27:44.369987 | orchestrator | 17:27:44.369 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-31 17:27:44.370080 | orchestrator | 17:27:44.369 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-31 17:27:44.370093 | orchestrator | 17:27:44.370 STDOUT terraform:  + mac = (known after apply) 2025-05-31 17:27:44.370131 | orchestrator | 17:27:44.370 STDOUT terraform:  + name = (known after apply) 2025-05-31 17:27:44.370172 | orchestrator | 17:27:44.370 STDOUT terraform:  + port = (known after apply) 2025-05-31 17:27:44.370213 | orchestrator | 17:27:44.370 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 17:27:44.370228 | orchestrator | 17:27:44.370 STDOUT terraform:  } 2025-05-31 17:27:44.370246 | orchestrator | 17:27:44.370 STDOUT terraform:  } 2025-05-31 17:27:44.370300 | orchestrator | 
  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-1"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-2"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-3"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-4"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }
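
The openstack_compute_keypair_v2.key and openstack_compute_volume_attach_v2.node_volume_attachment entries further down in this plan are consistent with definitions along the lines of the sketch below. How the nine attachments map onto instances and volumes is not visible in this excerpt, so the index expression and the extra_volume resource name are assumptions.

resource "openstack_compute_keypair_v2" "key" {
  name = "testbed"
  # No public_key is supplied, so the provider generates a key pair; this is why the
  # plan shows private_key as (sensitive value) and public_key as (known after apply).
}

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9                                                                # the plan creates indices 0..8
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id    # assumed mapping, not shown in the log
  volume_id   = openstack_blockstorage_volume_v3.extra_volume[count.index].id    # assumed volume resource name
}
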
  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-5"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id = (known after apply)
      + name = "testbed"
      + private_key = (sensitive value)
      + public_key = (known after apply)
      + region = (known after apply)
      + user_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip = (known after apply)
      + floating_ip = (known after apply)
      + id = (known after apply)
      + port_id = (known after apply)
      + region = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address = (known after apply)
      + all_tags = (known after apply)
      + dns_domain = (known after apply)
      + dns_name = (known after apply)
      + fixed_ip = (known after apply)
      + id = (known after apply)
      + pool = "public"
      + port_id = (known after apply)
      + region = (known after apply)
      + subnet_id = (known after apply)
      + tenant_id = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up = (known after apply)
      + all_tags = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain = (known after apply)
      + external = (known after apply)
      + id = (known after apply)
      + mtu = (known after apply)
      + name = "net-testbed-management"
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + shared = (known after apply)
      + tenant_id = (known after apply)
      + transparent_vlan = (known after apply)
      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id = (known after apply)
      + port_id = (known after apply)
      + region = (known after apply)
      + router_id = (known after apply)
      + subnet_id = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up = (known after apply)
      + all_tags = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed = (known after apply)
      + enable_snat = (known after apply)
      + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + id = (known after apply)
      + name = "testbed"
      + region = (known after apply)
      + tenant_id = (known after apply)
      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description = "ssh"
      + direction = "ingress"
      + ethertype = "IPv4"
      + id = (known after apply)
      + port_range_max = 22
      + port_range_min = 22
      + protocol = "tcp"
      + region = (known after apply)
      + remote_group_id = (known after apply)
      + remote_ip_prefix = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description = "wireguard"
      + direction = "ingress"
+ ethertype = "IPv4" 2025-05-31 17:27:44.399987 | orchestrator | 17:27:44.399 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.400044 | orchestrator | 17:27:44.400 STDOUT terraform:  + port_range_max = 51820 2025-05-31 17:27:44.400102 | orchestrator | 17:27:44.400 STDOUT terraform:  + port_range_min = 51820 2025-05-31 17:27:44.400159 | orchestrator | 17:27:44.400 STDOUT terraform:  + protocol = "udp" 2025-05-31 17:27:44.400239 | orchestrator | 17:27:44.400 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.400316 | orchestrator | 17:27:44.400 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 17:27:44.400383 | orchestrator | 17:27:44.400 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 17:27:44.400461 | orchestrator | 17:27:44.400 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 17:27:44.400538 | orchestrator | 17:27:44.400 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 17:27:44.400591 | orchestrator | 17:27:44.400 STDOUT terraform:  } 2025-05-31 17:27:44.400717 | orchestrator | 17:27:44.400 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-31 17:27:44.400902 | orchestrator | 17:27:44.400 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-31 17:27:44.400976 | orchestrator | 17:27:44.400 STDOUT terraform:  + direction = "ingress" 2025-05-31 17:27:44.401027 | orchestrator | 17:27:44.400 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 17:27:44.401099 | orchestrator | 17:27:44.401 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.401151 | orchestrator | 17:27:44.401 STDOUT terraform:  + protocol = "tcp" 2025-05-31 17:27:44.401219 | orchestrator | 17:27:44.401 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.401286 | orchestrator | 17:27:44.401 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 17:27:44.401352 | orchestrator | 17:27:44.401 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-31 17:27:44.401419 | orchestrator | 17:27:44.401 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 17:27:44.401488 | orchestrator | 17:27:44.401 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 17:27:44.401534 | orchestrator | 17:27:44.401 STDOUT terraform:  } 2025-05-31 17:27:44.401650 | orchestrator | 17:27:44.401 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-31 17:27:44.401782 | orchestrator | 17:27:44.401 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-31 17:27:44.401845 | orchestrator | 17:27:44.401 STDOUT terraform:  + direction = "ingress" 2025-05-31 17:27:44.401900 | orchestrator | 17:27:44.401 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 17:27:44.402000 | orchestrator | 17:27:44.401 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.402075 | orchestrator | 17:27:44.402 STDOUT terraform:  + protocol = "udp" 2025-05-31 17:27:44.402147 | orchestrator | 17:27:44.402 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.402215 | orchestrator | 17:27:44.402 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 17:27:44.402280 | orchestrator | 17:27:44.402 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-31 17:27:44.402346 | orchestrator | 17:27:44.402 STDOUT 
terraform:  + security_group_id = (known after apply) 2025-05-31 17:27:44.402413 | orchestrator | 17:27:44.402 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 17:27:44.402449 | orchestrator | 17:27:44.402 STDOUT terraform:  } 2025-05-31 17:27:44.402554 | orchestrator | 17:27:44.402 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-31 17:27:44.402661 | orchestrator | 17:27:44.402 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-31 17:27:44.402717 | orchestrator | 17:27:44.402 STDOUT terraform:  + direction = "ingress" 2025-05-31 17:27:44.402788 | orchestrator | 17:27:44.402 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 17:27:44.402859 | orchestrator | 17:27:44.402 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.402910 | orchestrator | 17:27:44.402 STDOUT terraform:  + protocol = "icmp" 2025-05-31 17:27:44.402975 | orchestrator | 17:27:44.402 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.403039 | orchestrator | 17:27:44.402 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 17:27:44.403095 | orchestrator | 17:27:44.403 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 17:27:44.403159 | orchestrator | 17:27:44.403 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 17:27:44.403224 | orchestrator | 17:27:44.403 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 17:27:44.403260 | orchestrator | 17:27:44.403 STDOUT terraform:  } 2025-05-31 17:27:44.403362 | orchestrator | 17:27:44.403 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-31 17:27:44.403465 | orchestrator | 17:27:44.403 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-31 17:27:44.403520 | orchestrator | 17:27:44.403 STDOUT terraform:  + direction = "ingress" 2025-05-31 17:27:44.403570 | orchestrator | 17:27:44.403 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 17:27:44.403669 | orchestrator | 17:27:44.403 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.403725 | orchestrator | 17:27:44.403 STDOUT terraform:  + protocol = "tcp" 2025-05-31 17:27:44.403840 | orchestrator | 17:27:44.403 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.403912 | orchestrator | 17:27:44.403 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 17:27:44.403965 | orchestrator | 17:27:44.403 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 17:27:44.404027 | orchestrator | 17:27:44.403 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 17:27:44.404091 | orchestrator | 17:27:44.404 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 17:27:44.404126 | orchestrator | 17:27:44.404 STDOUT terraform:  } 2025-05-31 17:27:44.404222 | orchestrator | 17:27:44.404 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-31 17:27:44.404322 | orchestrator | 17:27:44.404 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-31 17:27:44.404376 | orchestrator | 17:27:44.404 STDOUT terraform:  + direction = "ingress" 2025-05-31 17:27:44.404423 | orchestrator | 17:27:44.404 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 17:27:44.404487 | orchestrator | 17:27:44.404 STDOUT terraform:  + id = (known 
after apply) 2025-05-31 17:27:44.404533 | orchestrator | 17:27:44.404 STDOUT terraform:  + protocol = "udp" 2025-05-31 17:27:44.404598 | orchestrator | 17:27:44.404 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.404658 | orchestrator | 17:27:44.404 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 17:27:44.404713 | orchestrator | 17:27:44.404 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 17:27:44.404794 | orchestrator | 17:27:44.404 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 17:27:44.404857 | orchestrator | 17:27:44.404 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 17:27:44.404891 | orchestrator | 17:27:44.404 STDOUT terraform:  } 2025-05-31 17:27:44.404986 | orchestrator | 17:27:44.404 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-31 17:27:44.405082 | orchestrator | 17:27:44.404 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-31 17:27:44.405134 | orchestrator | 17:27:44.405 STDOUT terraform:  + direction = "ingress" 2025-05-31 17:27:44.405181 | orchestrator | 17:27:44.405 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 17:27:44.405244 | orchestrator | 17:27:44.405 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.405291 | orchestrator | 17:27:44.405 STDOUT terraform:  + protocol = "icmp" 2025-05-31 17:27:44.405352 | orchestrator | 17:27:44.405 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.405413 | orchestrator | 17:27:44.405 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 17:27:44.405467 | orchestrator | 17:27:44.405 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 17:27:44.405536 | orchestrator | 17:27:44.405 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 17:27:44.405599 | orchestrator | 17:27:44.405 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 17:27:44.405637 | orchestrator | 17:27:44.405 STDOUT terraform:  } 2025-05-31 17:27:44.405732 | orchestrator | 17:27:44.405 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-05-31 17:27:44.405871 | orchestrator | 17:27:44.405 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-05-31 17:27:44.405923 | orchestrator | 17:27:44.405 STDOUT terraform:  + description = "vrrp" 2025-05-31 17:27:44.405975 | orchestrator | 17:27:44.405 STDOUT terraform:  + direction = "ingress" 2025-05-31 17:27:44.406056 | orchestrator | 17:27:44.405 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 17:27:44.406124 | orchestrator | 17:27:44.406 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.406176 | orchestrator | 17:27:44.406 STDOUT terraform:  + protocol = "112" 2025-05-31 17:27:44.406240 | orchestrator | 17:27:44.406 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.406303 | orchestrator | 17:27:44.406 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 17:27:44.406355 | orchestrator | 17:27:44.406 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 17:27:44.406416 | orchestrator | 17:27:44.406 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 17:27:44.406477 | orchestrator | 17:27:44.406 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 17:27:44.406512 | orchestrator | 17:27:44.406 STDOUT terraform:  } 2025-05-31 
17:27:44.406604 | orchestrator | 17:27:44.406 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-05-31 17:27:44.406695 | orchestrator | 17:27:44.406 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-05-31 17:27:44.406788 | orchestrator | 17:27:44.406 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 17:27:44.406861 | orchestrator | 17:27:44.406 STDOUT terraform:  + description = "management security group" 2025-05-31 17:27:44.406921 | orchestrator | 17:27:44.406 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.406980 | orchestrator | 17:27:44.406 STDOUT terraform:  + name = "testbed-management" 2025-05-31 17:27:44.407040 | orchestrator | 17:27:44.406 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.410886 | orchestrator | 17:27:44.407 STDOUT terraform:  + stateful = (known after apply) 2025-05-31 17:27:44.411022 | orchestrator | 17:27:44.410 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 17:27:44.411058 | orchestrator | 17:27:44.411 STDOUT terraform:  } 2025-05-31 17:27:44.411157 | orchestrator | 17:27:44.411 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-05-31 17:27:44.411240 | orchestrator | 17:27:44.411 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-05-31 17:27:44.411374 | orchestrator | 17:27:44.411 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 17:27:44.411558 | orchestrator | 17:27:44.411 STDOUT terraform:  + description = "node security group" 2025-05-31 17:27:44.417002 | orchestrator | 17:27:44.416 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.417091 | orchestrator | 17:27:44.417 STDOUT terraform:  + name = "testbed-node" 2025-05-31 17:27:44.417161 | orchestrator | 17:27:44.417 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.417211 | orchestrator | 17:27:44.417 STDOUT terraform:  + stateful = (known after apply) 2025-05-31 17:27:44.417257 | orchestrator | 17:27:44.417 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 17:27:44.417287 | orchestrator | 17:27:44.417 STDOUT terraform:  } 2025-05-31 17:27:44.417355 | orchestrator | 17:27:44.417 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-05-31 17:27:44.417423 | orchestrator | 17:27:44.417 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-05-31 17:27:44.417473 | orchestrator | 17:27:44.417 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 17:27:44.417521 | orchestrator | 17:27:44.417 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-05-31 17:27:44.417557 | orchestrator | 17:27:44.417 STDOUT terraform:  + dns_nameservers = [ 2025-05-31 17:27:44.417588 | orchestrator | 17:27:44.417 STDOUT terraform:  + "8.8.8.8", 2025-05-31 17:27:44.417618 | orchestrator | 17:27:44.417 STDOUT terraform:  + "9.9.9.9", 2025-05-31 17:27:44.417644 | orchestrator | 17:27:44.417 STDOUT terraform:  ] 2025-05-31 17:27:44.417681 | orchestrator | 17:27:44.417 STDOUT terraform:  + enable_dhcp = true 2025-05-31 17:27:44.417725 | orchestrator | 17:27:44.417 STDOUT terraform:  + gateway_ip = (known after apply) 2025-05-31 17:27:44.417823 | orchestrator | 17:27:44.417 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.417860 | orchestrator | 17:27:44.417 STDOUT terraform:  + ip_version = 4 2025-05-31 17:27:44.417909 | 
orchestrator | 17:27:44.417 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-05-31 17:27:44.417956 | orchestrator | 17:27:44.417 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-05-31 17:27:44.418036 | orchestrator | 17:27:44.417 STDOUT terraform:  + name = "subnet-testbed-management" 2025-05-31 17:27:44.418094 | orchestrator | 17:27:44.418 STDOUT terraform:  + network_id = (known after apply) 2025-05-31 17:27:44.418130 | orchestrator | 17:27:44.418 STDOUT terraform:  + no_gateway = false 2025-05-31 17:27:44.418177 | orchestrator | 17:27:44.418 STDOUT terraform:  + region = (known after apply) 2025-05-31 17:27:44.418222 | orchestrator | 17:27:44.418 STDOUT terraform:  + service_types = (known after apply) 2025-05-31 17:27:44.418268 | orchestrator | 17:27:44.418 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 17:27:44.418298 | orchestrator | 17:27:44.418 STDOUT terraform:  + allocation_pool { 2025-05-31 17:27:44.418334 | orchestrator | 17:27:44.418 STDOUT terraform:  + end = "192.168.31.250" 2025-05-31 17:27:44.418372 | orchestrator | 17:27:44.418 STDOUT terraform:  + start = "192.168.31.200" 2025-05-31 17:27:44.418395 | orchestrator | 17:27:44.418 STDOUT terraform:  } 2025-05-31 17:27:44.418420 | orchestrator | 17:27:44.418 STDOUT terraform:  } 2025-05-31 17:27:44.418467 | orchestrator | 17:27:44.418 STDOUT terraform:  # terraform_data.image will be created 2025-05-31 17:27:44.418537 | orchestrator | 17:27:44.418 STDOUT terraform:  + resource "terraform_data" "image" { 2025-05-31 17:27:44.418575 | orchestrator | 17:27:44.418 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.418606 | orchestrator | 17:27:44.418 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-31 17:27:44.418641 | orchestrator | 17:27:44.418 STDOUT terraform:  + output = (known after apply) 2025-05-31 17:27:44.418664 | orchestrator | 17:27:44.418 STDOUT terraform:  } 2025-05-31 17:27:44.418708 | orchestrator | 17:27:44.418 STDOUT terraform:  # terraform_data.image_node will be created 2025-05-31 17:27:44.418774 | orchestrator | 17:27:44.418 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-05-31 17:27:44.418813 | orchestrator | 17:27:44.418 STDOUT terraform:  + id = (known after apply) 2025-05-31 17:27:44.418846 | orchestrator | 17:27:44.418 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-31 17:27:44.418883 | orchestrator | 17:27:44.418 STDOUT terraform:  + output = (known after apply) 2025-05-31 17:27:44.418907 | orchestrator | 17:27:44.418 STDOUT terraform:  } 2025-05-31 17:27:44.418950 | orchestrator | 17:27:44.418 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-05-31 17:27:44.418975 | orchestrator | 17:27:44.418 STDOUT terraform: Changes to Outputs: 2025-05-31 17:27:44.419012 | orchestrator | 17:27:44.418 STDOUT terraform:  + manager_address = (sensitive value) 2025-05-31 17:27:44.419048 | orchestrator | 17:27:44.419 STDOUT terraform:  + private_key = (sensitive value) 2025-05-31 17:27:44.543507 | orchestrator | 17:27:44.543 STDOUT terraform: terraform_data.image: Creating... 2025-05-31 17:27:44.544028 | orchestrator | 17:27:44.543 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=202670da-7613-e5ed-bc6d-35b314a760e0] 2025-05-31 17:27:44.621838 | orchestrator | 17:27:44.621 STDOUT terraform: terraform_data.image_node: Creating... 
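Editor's note: the plan above ends with two terraform_data resources that carry the image name "Ubuntu 24.04" into the image lookups, and with two sensitive outputs (manager_address, private_key). A minimal HCL sketch of how that wiring typically looks; the output expressions and the data-source arguments are assumptions, only the literal values are taken from the log:

# terraform_data holds the plain image name so it can be referenced and replaced cleanly.
resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}

# The image lookup read right after the plan presumably resolves that name to an image ID.
data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output
  most_recent = true
}

# Both outputs are printed as "(sensitive value)" in the plan, i.e. marked sensitive.
output "manager_address" {
  sensitive = true
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
}

output "private_key" {
  sensitive = true
  value     = openstack_compute_keypair_v2.key.private_key
}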
2025-05-31 17:27:44.622353 | orchestrator | 17:27:44.622 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=df2f994e-f305-95a6-c158-b071806dba2e] 2025-05-31 17:27:44.653173 | orchestrator | 17:27:44.652 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-05-31 17:27:44.654162 | orchestrator | 17:27:44.653 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-05-31 17:27:44.657310 | orchestrator | 17:27:44.657 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-05-31 17:27:44.662969 | orchestrator | 17:27:44.662 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-05-31 17:27:44.663038 | orchestrator | 17:27:44.662 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-05-31 17:27:44.665498 | orchestrator | 17:27:44.663 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-05-31 17:27:44.665526 | orchestrator | 17:27:44.663 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-05-31 17:27:44.665531 | orchestrator | 17:27:44.664 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-05-31 17:27:44.665535 | orchestrator | 17:27:44.664 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-05-31 17:27:44.667092 | orchestrator | 17:27:44.666 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-05-31 17:27:45.088397 | orchestrator | 17:27:45.088 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-31 17:27:45.091278 | orchestrator | 17:27:45.090 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-31 17:27:45.096921 | orchestrator | 17:27:45.096 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-05-31 17:27:45.098262 | orchestrator | 17:27:45.098 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-05-31 17:27:45.192489 | orchestrator | 17:27:45.192 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-05-31 17:27:45.204365 | orchestrator | 17:27:45.204 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-05-31 17:27:51.141949 | orchestrator | 17:27:51.141 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=29564f73-823b-4d34-9287-f4854e02ceec] 2025-05-31 17:27:51.155541 | orchestrator | 17:27:51.155 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-05-31 17:27:54.661981 | orchestrator | 17:27:54.661 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-05-31 17:27:54.664032 | orchestrator | 17:27:54.663 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-05-31 17:27:54.665152 | orchestrator | 17:27:54.664 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-05-31 17:27:54.666463 | orchestrator | 17:27:54.665 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-05-31 17:27:54.666558 | orchestrator | 17:27:54.666 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... 
[10s elapsed] 2025-05-31 17:27:54.667441 | orchestrator | 17:27:54.667 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-05-31 17:27:55.097565 | orchestrator | 17:27:55.097 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-05-31 17:27:55.099811 | orchestrator | 17:27:55.099 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-05-31 17:27:55.205574 | orchestrator | 17:27:55.205 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-05-31 17:27:55.236408 | orchestrator | 17:27:55.236 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=ea561065-8c5e-4872-9ad2-33dbafedf722] 2025-05-31 17:27:55.236505 | orchestrator | 17:27:55.236 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=f84cbf5b-ba0c-4d94-b93b-9923118940b1] 2025-05-31 17:27:55.249438 | orchestrator | 17:27:55.246 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-05-31 17:27:55.249502 | orchestrator | 17:27:55.247 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=f2aa5dc5-a894-45ad-9ad9-532b3655830f] 2025-05-31 17:27:55.249512 | orchestrator | 17:27:55.248 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-05-31 17:27:55.251153 | orchestrator | 17:27:55.250 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=45bd85ec0479224fcf96d0bed468fcf710f4aad0] 2025-05-31 17:27:55.253388 | orchestrator | 17:27:55.253 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=2e422299-cbcf-4708-b851-70f90cfc06ca] 2025-05-31 17:27:55.256876 | orchestrator | 17:27:55.256 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-05-31 17:27:55.258692 | orchestrator | 17:27:55.258 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-05-31 17:27:55.260810 | orchestrator | 17:27:55.260 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-05-31 17:27:55.262699 | orchestrator | 17:27:55.262 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=d890c2c1fdf5a0d00343e909df9a3e5a6073959e] 2025-05-31 17:27:55.268645 | orchestrator | 17:27:55.268 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=6d2138a6-e50d-4faf-8e5d-bfea12855062] 2025-05-31 17:27:55.273504 | orchestrator | 17:27:55.273 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-05-31 17:27:55.279499 | orchestrator | 17:27:55.279 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=5d12a1f6-9201-4628-be3a-bd91c24a8cbc] 2025-05-31 17:27:55.281568 | orchestrator | 17:27:55.281 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-05-31 17:27:55.285177 | orchestrator | 17:27:55.285 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-05-31 17:27:55.327398 | orchestrator | 17:27:55.326 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=295a1d91-ab6c-4ff1-816e-ba1788a61c02] 2025-05-31 17:27:55.336566 | orchestrator | 17:27:55.336 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 
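Editor's note: the management subnet whose creation starts here was planned above with CIDR 192.168.16.0/20, the 8.8.8.8/9.9.9.9 nameservers and an allocation pool at the top of the range. A sketch of the corresponding resource, using only values shown in the plan (the network_id reference is assumed to point at net_management):

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP only hands out addresses from this pool; the lower part of the range
  # stays free for the statically assigned node and manager ports.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}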
2025-05-31 17:27:55.336688 | orchestrator | 17:27:55.336 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=3e5784f3-90d1-4176-aeed-b0a2fc46b5a2] 2025-05-31 17:27:55.396041 | orchestrator | 17:27:55.395 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=a03f9f47-10f6-44ba-9f7a-a1088b199422] 2025-05-31 17:28:01.156928 | orchestrator | 17:28:01.156 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-05-31 17:28:01.192848 | orchestrator | 17:28:01.192 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=559552af-8fab-42db-b8b2-a21f2483c3fe] 2025-05-31 17:28:01.203135 | orchestrator | 17:28:01.202 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-05-31 17:28:01.462299 | orchestrator | 17:28:01.461 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=d3cdc349-cad5-4e81-9fe0-2883f1909f71] 2025-05-31 17:28:05.250951 | orchestrator | 17:28:05.250 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-05-31 17:28:05.256960 | orchestrator | 17:28:05.256 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-05-31 17:28:05.262419 | orchestrator | 17:28:05.262 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-05-31 17:28:05.274842 | orchestrator | 17:28:05.274 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-05-31 17:28:05.282260 | orchestrator | 17:28:05.281 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-05-31 17:28:05.286361 | orchestrator | 17:28:05.286 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... 
[10s elapsed] 2025-05-31 17:28:05.632305 | orchestrator | 17:28:05.631 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=0157688a-bccf-419f-b6fc-2522880e7598] 2025-05-31 17:28:05.637990 | orchestrator | 17:28:05.637 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=7f9620b2-a7fc-4a95-a3fa-f5bb2349a7cd] 2025-05-31 17:28:05.662471 | orchestrator | 17:28:05.662 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=2671ba96-017e-4551-bbb7-2b659dcc704c] 2025-05-31 17:28:05.679526 | orchestrator | 17:28:05.679 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=2d9e0215-0f9e-4bea-a007-681dc1a29b08] 2025-05-31 17:28:05.684266 | orchestrator | 17:28:05.683 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=4ecad02d-deb4-4406-975e-e25f83acaad2] 2025-05-31 17:28:05.688516 | orchestrator | 17:28:05.688 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=1760f04e-6ed9-4a7f-ab9e-7dcdb811d24b] 2025-05-31 17:28:08.995487 | orchestrator | 17:28:08.995 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=c2becfb6-54d5-48e9-9a4b-ad589325242e] 2025-05-31 17:28:09.002117 | orchestrator | 17:28:09.001 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-05-31 17:28:09.002728 | orchestrator | 17:28:09.002 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-05-31 17:28:09.003573 | orchestrator | 17:28:09.003 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-05-31 17:28:09.208820 | orchestrator | 17:28:09.208 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=ab61af02-5c61-451d-b767-cd30c362e6c7] 2025-05-31 17:28:09.221622 | orchestrator | 17:28:09.221 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-05-31 17:28:09.231477 | orchestrator | 17:28:09.231 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-05-31 17:28:09.231684 | orchestrator | 17:28:09.231 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-05-31 17:28:09.233239 | orchestrator | 17:28:09.233 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-05-31 17:28:09.233395 | orchestrator | 17:28:09.233 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-05-31 17:28:09.234776 | orchestrator | 17:28:09.234 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-05-31 17:28:09.235912 | orchestrator | 17:28:09.235 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-05-31 17:28:09.238534 | orchestrator | 17:28:09.238 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-05-31 17:28:09.255111 | orchestrator | 17:28:09.254 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=81f6b330-5cdf-4040-bb35-e49380073069] 2025-05-31 17:28:09.262804 | orchestrator | 17:28:09.262 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 
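Editor's note: of the security group rules being created here, the least self-explanatory is the VRRP rule, which opens IP protocol 112 rather than a TCP/UDP port range. A sketch of that rule as it would appear in HCL; which security group it attaches to is "(known after apply)" in the plan, so the node group below is an assumption:

resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"        # VRRP is IP protocol number 112; Neutron takes it as a string
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id   # assumed target group
}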
2025-05-31 17:28:09.446322 | orchestrator | 17:28:09.445 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=1fe6cac4-e1da-4567-a013-60eaab80f6ac] 2025-05-31 17:28:09.464930 | orchestrator | 17:28:09.464 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-05-31 17:28:09.656442 | orchestrator | 17:28:09.656 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=df7a96d8-edc2-4be5-96b3-c49449975b0f] 2025-05-31 17:28:09.663547 | orchestrator | 17:28:09.663 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-05-31 17:28:09.840729 | orchestrator | 17:28:09.840 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=d6549f53-bb74-4968-972f-1440484c136a] 2025-05-31 17:28:09.849905 | orchestrator | 17:28:09.849 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-05-31 17:28:09.888023 | orchestrator | 17:28:09.887 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=3fdf2e10-7320-4439-b19f-c0caf478bdf6] 2025-05-31 17:28:09.895205 | orchestrator | 17:28:09.894 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-05-31 17:28:09.982887 | orchestrator | 17:28:09.982 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=6279a10a-faef-46ea-8153-baf8f51530e5] 2025-05-31 17:28:09.992847 | orchestrator | 17:28:09.992 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-05-31 17:28:10.034959 | orchestrator | 17:28:10.034 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=00a1fa16-7bb9-4b63-8d0f-d9f50c62adc5] 2025-05-31 17:28:10.041948 | orchestrator | 17:28:10.041 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-05-31 17:28:10.173663 | orchestrator | 17:28:10.173 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=8ef5a273-00fb-4b28-a3a6-ffd34d7cf543] 2025-05-31 17:28:10.187878 | orchestrator | 17:28:10.187 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 
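Editor's note: the node management ports being created here were planned with one fixed IP per node plus several allowed_address_pairs (192.168.112.0/20, 192.168.16.254/20, 192.168.16.8/20, 192.168.16.9/20). allowed_address_pairs is what lets shared/VRRP addresses move between nodes without Neutron port security dropping the traffic. A sketch under those assumptions; the IP offset is inferred from the .14/.15 fixed IPs in the plan:

resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${10 + count.index}"   # assumed offset; index 5 yields .15 as in the plan
  }

  # Additional source addresses the port may use, matching the plan output above.
  allowed_address_pairs { ip_address = "192.168.112.0/20" }
  allowed_address_pairs { ip_address = "192.168.16.254/20" }
  allowed_address_pairs { ip_address = "192.168.16.8/20" }
  allowed_address_pairs { ip_address = "192.168.16.9/20" }
}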
2025-05-31 17:28:10.318864 | orchestrator | 17:28:10.318 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=60a2e6ce-9b74-4156-9acc-0222f9e32afd] 2025-05-31 17:28:10.607900 | orchestrator | 17:28:10.607 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=edc15ec9-dc61-4a5f-8979-4aae7a5672a2] 2025-05-31 17:28:14.829093 | orchestrator | 17:28:14.828 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=1364e540-5aee-4b78-8b36-c318c357802a] 2025-05-31 17:28:14.874642 | orchestrator | 17:28:14.874 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=6b0096b1-8290-42f7-967a-0bc4614057a2] 2025-05-31 17:28:14.890442 | orchestrator | 17:28:14.890 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=83eb919c-91e8-4907-b8be-39d0628d6491] 2025-05-31 17:28:15.054783 | orchestrator | 17:28:15.054 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=738d7ae6-1882-48e0-8ea1-adfa1b8ee13a] 2025-05-31 17:28:15.241746 | orchestrator | 17:28:15.241 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=7dc907e5-d640-4730-a678-df3e639e08bd] 2025-05-31 17:28:15.378502 | orchestrator | 17:28:15.378 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=6ad49353-2c1b-48f9-ae14-3c65b111d3a8] 2025-05-31 17:28:15.639063 | orchestrator | 17:28:15.638 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=570bea25-8fb0-41ea-a727-c1bded8036d7] 2025-05-31 17:28:16.829816 | orchestrator | 17:28:16.829 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=245f9c09-a40a-498b-99ff-fb9b8ad0f722] 2025-05-31 17:28:16.844242 | orchestrator | 17:28:16.844 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-05-31 17:28:16.855164 | orchestrator | 17:28:16.855 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-05-31 17:28:16.866163 | orchestrator | 17:28:16.866 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-05-31 17:28:16.867873 | orchestrator | 17:28:16.867 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-05-31 17:28:16.869557 | orchestrator | 17:28:16.869 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-05-31 17:28:16.872072 | orchestrator | 17:28:16.871 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-05-31 17:28:16.876461 | orchestrator | 17:28:16.876 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-05-31 17:28:23.209892 | orchestrator | 17:28:23.209 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=ed0a6c16-f341-4f4f-9565-0d20e4d06146] 2025-05-31 17:28:23.220121 | orchestrator | 17:28:23.219 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-05-31 17:28:23.228383 | orchestrator | 17:28:23.228 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-05-31 17:28:23.229781 | orchestrator | 17:28:23.229 STDOUT terraform: local_file.inventory: Creating... 
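Editor's note: the manager becomes reachable from outside through the floating IP created here, which is then associated with the manager's management port. A sketch of that pair of resources; the pool name is an assumption, since the plan only references the external network by ID:

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "external"   # assumed pool name
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}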
2025-05-31 17:28:23.235121 | orchestrator | 17:28:23.234 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=04f3b2225aac618f70d9843f2ce064cdcbed1433] 2025-05-31 17:28:23.235549 | orchestrator | 17:28:23.235 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=de616f10ed7534f9aabf0516685f1a0fe4178f67] 2025-05-31 17:28:24.393069 | orchestrator | 17:28:24.392 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=ed0a6c16-f341-4f4f-9565-0d20e4d06146] 2025-05-31 17:28:26.856955 | orchestrator | 17:28:26.856 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-05-31 17:28:26.868398 | orchestrator | 17:28:26.868 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-05-31 17:28:26.872438 | orchestrator | 17:28:26.872 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-05-31 17:28:26.872575 | orchestrator | 17:28:26.872 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-05-31 17:28:26.873601 | orchestrator | 17:28:26.873 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-05-31 17:28:26.877817 | orchestrator | 17:28:26.877 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-05-31 17:28:36.858337 | orchestrator | 17:28:36.857 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-05-31 17:28:36.868392 | orchestrator | 17:28:36.868 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-05-31 17:28:36.873446 | orchestrator | 17:28:36.873 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-05-31 17:28:36.873582 | orchestrator | 17:28:36.873 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-05-31 17:28:36.874589 | orchestrator | 17:28:36.874 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-05-31 17:28:36.878979 | orchestrator | 17:28:36.878 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-05-31 17:28:37.188733 | orchestrator | 17:28:37.188 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=ba21ed3a-aa08-4694-bc61-222b58c85b64] 2025-05-31 17:28:37.298492 | orchestrator | 17:28:37.298 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=f85d7ce0-ca39-440e-8326-2ed65947057b] 2025-05-31 17:28:37.492881 | orchestrator | 17:28:37.492 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=40b64994-c63e-4a91-a8db-cab0aabe8778] 2025-05-31 17:28:37.664583 | orchestrator | 17:28:37.664 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=d5e0fc4b-c4b1-469e-96a5-8ec2c20cf699] 2025-05-31 17:28:46.858668 | orchestrator | 17:28:46.858 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-05-31 17:28:46.868857 | orchestrator | 17:28:46.868 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... 
[30s elapsed] 2025-05-31 17:28:47.625692 | orchestrator | 17:28:47.625 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=04d87d5e-eb6f-4a12-8d9a-53aa71200ae0] 2025-05-31 17:28:47.767950 | orchestrator | 17:28:47.767 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=7a0a461b-ebde-483a-93f2-d22930c3195d] 2025-05-31 17:28:47.789200 | orchestrator | 17:28:47.788 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-05-31 17:28:47.789924 | orchestrator | 17:28:47.789 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-05-31 17:28:47.793205 | orchestrator | 17:28:47.793 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=3148475213907676934] 2025-05-31 17:28:47.794949 | orchestrator | 17:28:47.794 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-05-31 17:28:47.796687 | orchestrator | 17:28:47.795 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-05-31 17:28:47.796721 | orchestrator | 17:28:47.795 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-05-31 17:28:47.798451 | orchestrator | 17:28:47.798 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-05-31 17:28:47.813424 | orchestrator | 17:28:47.813 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-05-31 17:28:47.817459 | orchestrator | 17:28:47.817 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-05-31 17:28:47.823857 | orchestrator | 17:28:47.823 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-05-31 17:28:47.827338 | orchestrator | 17:28:47.827 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-05-31 17:28:47.828329 | orchestrator | 17:28:47.828 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 
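Editor's note: the instance/volume ID pairs in the attachment completions that follow show the nine extra node volumes being spread round-robin over node_server[3], [4] and [5]. One possible shape for that mapping, assumed rather than taken from the repository:

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id   # round-robin over the last three nodes
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}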
2025-05-31 17:28:53.112082 | orchestrator | 17:28:53.111 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=04d87d5e-eb6f-4a12-8d9a-53aa71200ae0/f84cbf5b-ba0c-4d94-b93b-9923118940b1] 2025-05-31 17:28:53.143083 | orchestrator | 17:28:53.142 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=d5e0fc4b-c4b1-469e-96a5-8ec2c20cf699/6d2138a6-e50d-4faf-8e5d-bfea12855062] 2025-05-31 17:28:53.175099 | orchestrator | 17:28:53.174 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=04d87d5e-eb6f-4a12-8d9a-53aa71200ae0/5d12a1f6-9201-4628-be3a-bd91c24a8cbc] 2025-05-31 17:28:53.198910 | orchestrator | 17:28:53.198 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=7a0a461b-ebde-483a-93f2-d22930c3195d/f2aa5dc5-a894-45ad-9ad9-532b3655830f] 2025-05-31 17:28:53.206390 | orchestrator | 17:28:53.205 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=d5e0fc4b-c4b1-469e-96a5-8ec2c20cf699/2e422299-cbcf-4708-b851-70f90cfc06ca] 2025-05-31 17:28:53.220814 | orchestrator | 17:28:53.220 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=04d87d5e-eb6f-4a12-8d9a-53aa71200ae0/a03f9f47-10f6-44ba-9f7a-a1088b199422] 2025-05-31 17:28:53.251244 | orchestrator | 17:28:53.250 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=7a0a461b-ebde-483a-93f2-d22930c3195d/295a1d91-ab6c-4ff1-816e-ba1788a61c02] 2025-05-31 17:28:53.369120 | orchestrator | 17:28:53.368 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=7a0a461b-ebde-483a-93f2-d22930c3195d/ea561065-8c5e-4872-9ad2-33dbafedf722] 2025-05-31 17:28:54.363297 | orchestrator | 17:28:54.362 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=d5e0fc4b-c4b1-469e-96a5-8ec2c20cf699/3e5784f3-90d1-4176-aeed-b0a2fc46b5a2] 2025-05-31 17:28:57.831903 | orchestrator | 17:28:57.831 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-05-31 17:29:07.832150 | orchestrator | 17:29:07.831 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-05-31 17:29:08.182068 | orchestrator | 17:29:08.181 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=db3845e0-5cb9-48ec-8b2a-7a4cc761cd3f] 2025-05-31 17:29:08.217281 | orchestrator | 17:29:08.216 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
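Editor's note: the manager instance completing here is presumably booted from the manager_base_volume created earlier and plugged into its pre-created management port. A rough sketch under those assumptions (flavor, volume size and user data are not visible in the log):

resource "openstack_compute_instance_v2" "manager_server" {
  name        = "testbed-manager"                               # assumed; matches the Ansible inventory host below
  flavor_name = var.flavor_manager                              # assumed variable; the flavor is not shown in the log
  key_pair    = openstack_compute_keypair_v2.key.name

  network {
    port = openstack_networking_port_v2.manager_port_management.id
  }

  # Boot from the prepared base volume instead of an ephemeral disk.
  block_device {
    uuid                  = openstack_blockstorage_volume_v3.manager_base_volume[0].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = true
  }
}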
2025-05-31 17:29:08.217393 | orchestrator | 17:29:08.217 STDOUT terraform: Outputs: 2025-05-31 17:29:08.217412 | orchestrator | 17:29:08.217 STDOUT terraform: manager_address = 2025-05-31 17:29:08.217475 | orchestrator | 17:29:08.217 STDOUT terraform: private_key = 2025-05-31 17:29:08.340261 | orchestrator | ok: Runtime: 0:01:33.009196 2025-05-31 17:29:08.366565 | 2025-05-31 17:29:08.366705 | TASK [Fetch manager address] 2025-05-31 17:29:08.842345 | orchestrator | ok 2025-05-31 17:29:08.853283 | 2025-05-31 17:29:08.853430 | TASK [Set manager_host address] 2025-05-31 17:29:08.933380 | orchestrator | ok 2025-05-31 17:29:08.942566 | 2025-05-31 17:29:08.942703 | LOOP [Update ansible collections] 2025-05-31 17:29:09.819449 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-31 17:29:09.819959 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-31 17:29:09.820020 | orchestrator | Starting galaxy collection install process 2025-05-31 17:29:09.820047 | orchestrator | Process install dependency map 2025-05-31 17:29:09.820069 | orchestrator | Starting collection install process 2025-05-31 17:29:09.820090 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2025-05-31 17:29:09.820143 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2025-05-31 17:29:09.820170 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-05-31 17:29:09.820228 | orchestrator | ok: Item: commons Runtime: 0:00:00.572549 2025-05-31 17:29:10.685744 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-31 17:29:10.686008 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-31 17:29:10.686071 | orchestrator | Starting galaxy collection install process 2025-05-31 17:29:10.686131 | orchestrator | Process install dependency map 2025-05-31 17:29:10.686173 | orchestrator | Starting collection install process 2025-05-31 17:29:10.686209 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2025-05-31 17:29:10.686245 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2025-05-31 17:29:10.686279 | orchestrator | osism.services:999.0.0 was installed successfully 2025-05-31 17:29:10.686332 | orchestrator | ok: Item: services Runtime: 0:00:00.588427 2025-05-31 17:29:10.710029 | 2025-05-31 17:29:10.710251 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-31 17:29:21.286784 | orchestrator | ok 2025-05-31 17:29:21.297439 | 2025-05-31 17:29:21.297569 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-31 17:30:21.345660 | orchestrator | ok 2025-05-31 17:30:21.356239 | 2025-05-31 17:30:21.356375 | TASK [Fetch manager ssh hostkey] 2025-05-31 17:30:22.941905 | orchestrator | Output suppressed because no_log was given 2025-05-31 17:30:22.955134 | 2025-05-31 17:30:22.955299 | TASK [Get ssh keypair from terraform environment] 2025-05-31 17:30:23.494812 | orchestrator | ok: Runtime: 0:00:00.008379 2025-05-31 17:30:23.509346 | 2025-05-31 17:30:23.509517 | TASK [Point out that the following task takes some time and does not give any output] 
2025-05-31 17:30:23.559334 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-31 17:30:23.569476 | 2025-05-31 17:30:23.569612 | TASK [Run manager part 0] 2025-05-31 17:30:24.526390 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-31 17:30:24.570076 | orchestrator | 2025-05-31 17:30:24.570151 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-31 17:30:24.570170 | orchestrator | 2025-05-31 17:30:24.570244 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-31 17:30:26.409921 | orchestrator | ok: [testbed-manager] 2025-05-31 17:30:26.409962 | orchestrator | 2025-05-31 17:30:26.409982 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-31 17:30:26.409992 | orchestrator | 2025-05-31 17:30:26.410002 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 17:30:28.353338 | orchestrator | ok: [testbed-manager] 2025-05-31 17:30:28.353404 | orchestrator | 2025-05-31 17:30:28.353418 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-31 17:30:29.012871 | orchestrator | ok: [testbed-manager] 2025-05-31 17:30:29.012912 | orchestrator | 2025-05-31 17:30:29.012922 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-31 17:30:29.082191 | orchestrator | skipping: [testbed-manager] 2025-05-31 17:30:29.082230 | orchestrator | 2025-05-31 17:30:29.082241 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-31 17:30:29.118771 | orchestrator | skipping: [testbed-manager] 2025-05-31 17:30:29.118811 | orchestrator | 2025-05-31 17:30:29.118821 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-31 17:30:29.147330 | orchestrator | skipping: [testbed-manager] 2025-05-31 17:30:29.147368 | orchestrator | 2025-05-31 17:30:29.147377 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-31 17:30:29.178843 | orchestrator | skipping: [testbed-manager] 2025-05-31 17:30:29.178900 | orchestrator | 2025-05-31 17:30:29.178916 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-31 17:30:29.221309 | orchestrator | skipping: [testbed-manager] 2025-05-31 17:30:29.221349 | orchestrator | 2025-05-31 17:30:29.221360 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-31 17:30:29.261744 | orchestrator | skipping: [testbed-manager] 2025-05-31 17:30:29.261791 | orchestrator | 2025-05-31 17:30:29.261803 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-31 17:30:29.294337 | orchestrator | skipping: [testbed-manager] 2025-05-31 17:30:29.294374 | orchestrator | 2025-05-31 17:30:29.294384 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-31 17:30:30.099368 | orchestrator | changed: [testbed-manager] 2025-05-31 17:30:30.099438 | orchestrator | 2025-05-31 17:30:30.099455 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 
2025-05-31 17:33:30.238632 | orchestrator | changed: [testbed-manager] 2025-05-31 17:33:30.238780 | orchestrator | 2025-05-31 17:33:30.238800 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-31 17:34:46.334457 | orchestrator | changed: [testbed-manager] 2025-05-31 17:34:46.334638 | orchestrator | 2025-05-31 17:34:46.334658 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-31 17:35:06.794918 | orchestrator | changed: [testbed-manager] 2025-05-31 17:35:06.795039 | orchestrator | 2025-05-31 17:35:06.795060 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-31 17:35:15.679314 | orchestrator | changed: [testbed-manager] 2025-05-31 17:35:15.679356 | orchestrator | 2025-05-31 17:35:15.679369 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-31 17:35:15.724715 | orchestrator | ok: [testbed-manager] 2025-05-31 17:35:15.724771 | orchestrator | 2025-05-31 17:35:15.724781 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-31 17:35:16.547375 | orchestrator | ok: [testbed-manager] 2025-05-31 17:35:16.547419 | orchestrator | 2025-05-31 17:35:16.547431 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-31 17:35:17.261115 | orchestrator | changed: [testbed-manager] 2025-05-31 17:35:17.261154 | orchestrator | 2025-05-31 17:35:17.261164 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-31 17:35:23.742834 | orchestrator | changed: [testbed-manager] 2025-05-31 17:35:23.742928 | orchestrator | 2025-05-31 17:35:23.742992 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-31 17:35:29.798851 | orchestrator | changed: [testbed-manager] 2025-05-31 17:35:29.798906 | orchestrator | 2025-05-31 17:35:29.798914 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-31 17:35:32.432598 | orchestrator | changed: [testbed-manager] 2025-05-31 17:35:32.432679 | orchestrator | 2025-05-31 17:35:32.432697 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-31 17:35:34.155952 | orchestrator | changed: [testbed-manager] 2025-05-31 17:35:34.155989 | orchestrator | 2025-05-31 17:35:34.156002 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-31 17:35:35.255068 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-31 17:35:35.255149 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-31 17:35:35.255164 | orchestrator | 2025-05-31 17:35:35.255176 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-31 17:35:35.302125 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-31 17:35:35.302203 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-31 17:35:35.302218 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-31 17:35:35.302231 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-05-31 17:35:38.443748 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-31 17:35:38.443817 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-31 17:35:38.443827 | orchestrator | 2025-05-31 17:35:38.443836 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-31 17:35:39.024604 | orchestrator | changed: [testbed-manager] 2025-05-31 17:35:39.024697 | orchestrator | 2025-05-31 17:35:39.024714 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-31 17:35:59.190077 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-31 17:35:59.190168 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-31 17:35:59.190185 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-31 17:35:59.190198 | orchestrator | 2025-05-31 17:35:59.190211 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-31 17:36:01.592854 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-05-31 17:36:01.592941 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-31 17:36:01.592957 | orchestrator | 2025-05-31 17:36:01.592970 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-31 17:36:01.592982 | orchestrator | 2025-05-31 17:36:01.592993 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 17:36:03.023298 | orchestrator | ok: [testbed-manager] 2025-05-31 17:36:03.023418 | orchestrator | 2025-05-31 17:36:03.023438 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-31 17:36:03.093076 | orchestrator | ok: [testbed-manager] 2025-05-31 17:36:03.093181 | orchestrator | 2025-05-31 17:36:03.093206 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-31 17:36:03.167805 | orchestrator | ok: [testbed-manager] 2025-05-31 17:36:03.167881 | orchestrator | 2025-05-31 17:36:03.167896 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-31 17:36:03.924534 | orchestrator | changed: [testbed-manager] 2025-05-31 17:36:03.924574 | orchestrator | 2025-05-31 17:36:03.924582 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-31 17:36:04.698558 | orchestrator | changed: [testbed-manager] 2025-05-31 17:36:04.698598 | orchestrator | 2025-05-31 17:36:04.698606 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-31 17:36:06.068642 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-31 17:36:06.068729 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-31 17:36:06.068744 | orchestrator | 2025-05-31 17:36:06.068772 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-31 17:36:07.411231 | orchestrator | changed: [testbed-manager] 2025-05-31 17:36:07.411398 | orchestrator | 2025-05-31 17:36:07.411419 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-31 17:36:09.183584 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-31 
17:36:09.183629 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-31 17:36:09.183638 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-31 17:36:09.183645 | orchestrator | 2025-05-31 17:36:09.183653 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-31 17:36:09.821291 | orchestrator | changed: [testbed-manager] 2025-05-31 17:36:09.821392 | orchestrator | 2025-05-31 17:36:09.821419 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-31 17:36:09.899066 | orchestrator | skipping: [testbed-manager] 2025-05-31 17:36:09.899104 | orchestrator | 2025-05-31 17:36:09.899111 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-31 17:36:10.775981 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 17:36:10.776019 | orchestrator | changed: [testbed-manager] 2025-05-31 17:36:10.776028 | orchestrator | 2025-05-31 17:36:10.776036 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-31 17:36:10.817552 | orchestrator | skipping: [testbed-manager] 2025-05-31 17:36:10.817593 | orchestrator | 2025-05-31 17:36:10.817604 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-31 17:36:10.852353 | orchestrator | skipping: [testbed-manager] 2025-05-31 17:36:10.852390 | orchestrator | 2025-05-31 17:36:10.852398 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-31 17:36:10.881738 | orchestrator | skipping: [testbed-manager] 2025-05-31 17:36:10.881772 | orchestrator | 2025-05-31 17:36:10.881780 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-31 17:36:10.930222 | orchestrator | skipping: [testbed-manager] 2025-05-31 17:36:10.930543 | orchestrator | 2025-05-31 17:36:10.930573 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-31 17:36:11.674884 | orchestrator | ok: [testbed-manager] 2025-05-31 17:36:11.674971 | orchestrator | 2025-05-31 17:36:11.674986 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-31 17:36:11.674999 | orchestrator | 2025-05-31 17:36:11.675012 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 17:36:13.124361 | orchestrator | ok: [testbed-manager] 2025-05-31 17:36:13.124460 | orchestrator | 2025-05-31 17:36:13.124477 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-31 17:36:14.243386 | orchestrator | changed: [testbed-manager] 2025-05-31 17:36:14.243462 | orchestrator | 2025-05-31 17:36:14.243472 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 17:36:14.243481 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-31 17:36:14.243488 | orchestrator | 2025-05-31 17:36:14.829032 | orchestrator | ok: Runtime: 0:05:50.480086 2025-05-31 17:36:14.848950 | 2025-05-31 17:36:14.849250 | TASK [Point out that the log in on the manager is now possible] 2025-05-31 17:36:14.889274 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
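'make login' itself is not expanded in this log; it presumably just opens an SSH session to the manager as the operator user created above (the /home/dragon paths further down show that this user is dragon). A manual equivalent, with the key path and address left as placeholders rather than values from this log, would be roughly:

# TESTBED_KEY and MANAGER_IP are placeholders, not taken from this log.
ssh -i "$TESTBED_KEY" "dragon@$MANAGER_IP"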
2025-05-31 17:36:14.898936 | 2025-05-31 17:36:14.899070 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-31 17:36:14.933256 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-31 17:36:14.941820 | 2025-05-31 17:36:14.941991 | TASK [Run manager part 1 + 2] 2025-05-31 17:36:15.777708 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-31 17:36:15.831101 | orchestrator | 2025-05-31 17:36:15.831151 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-31 17:36:15.831158 | orchestrator | 2025-05-31 17:36:15.831171 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 17:36:18.538304 | orchestrator | ok: [testbed-manager] 2025-05-31 17:36:18.538371 | orchestrator | 2025-05-31 17:36:18.538388 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-31 17:36:18.579797 | orchestrator | skipping: [testbed-manager] 2025-05-31 17:36:18.579853 | orchestrator | 2025-05-31 17:36:18.579866 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-31 17:36:18.624861 | orchestrator | ok: [testbed-manager] 2025-05-31 17:36:18.624913 | orchestrator | 2025-05-31 17:36:18.624923 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-31 17:36:18.671975 | orchestrator | ok: [testbed-manager] 2025-05-31 17:36:18.672028 | orchestrator | 2025-05-31 17:36:18.672038 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-31 17:36:18.750819 | orchestrator | ok: [testbed-manager] 2025-05-31 17:36:18.750877 | orchestrator | 2025-05-31 17:36:18.750889 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-31 17:36:18.811687 | orchestrator | ok: [testbed-manager] 2025-05-31 17:36:18.811742 | orchestrator | 2025-05-31 17:36:18.811752 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-31 17:36:18.860290 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-31 17:36:18.860364 | orchestrator | 2025-05-31 17:36:18.860374 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-31 17:36:19.656975 | orchestrator | ok: [testbed-manager] 2025-05-31 17:36:19.657061 | orchestrator | 2025-05-31 17:36:19.657077 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-31 17:36:19.707571 | orchestrator | skipping: [testbed-manager] 2025-05-31 17:36:19.707622 | orchestrator | 2025-05-31 17:36:19.707629 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-31 17:36:21.161184 | orchestrator | changed: [testbed-manager] 2025-05-31 17:36:21.161383 | orchestrator | 2025-05-31 17:36:21.161402 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-31 17:36:21.745808 | orchestrator | ok: [testbed-manager] 2025-05-31 17:36:21.745897 | orchestrator | 2025-05-31 17:36:21.745910 | orchestrator | TASK
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-31 17:36:22.912497 | orchestrator | changed: [testbed-manager] 2025-05-31 17:36:22.912583 | orchestrator | 2025-05-31 17:36:22.912602 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-31 17:36:36.030141 | orchestrator | changed: [testbed-manager] 2025-05-31 17:36:36.030257 | orchestrator | 2025-05-31 17:36:36.030274 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-31 17:36:36.689890 | orchestrator | ok: [testbed-manager] 2025-05-31 17:36:36.690005 | orchestrator | 2025-05-31 17:36:36.690049 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-31 17:36:36.740489 | orchestrator | skipping: [testbed-manager] 2025-05-31 17:36:36.740590 | orchestrator | 2025-05-31 17:36:36.740604 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-31 17:36:37.673389 | orchestrator | changed: [testbed-manager] 2025-05-31 17:36:37.673498 | orchestrator | 2025-05-31 17:36:37.673515 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-31 17:36:38.603521 | orchestrator | changed: [testbed-manager] 2025-05-31 17:36:38.603631 | orchestrator | 2025-05-31 17:36:38.603648 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-31 17:36:39.153059 | orchestrator | changed: [testbed-manager] 2025-05-31 17:36:39.153170 | orchestrator | 2025-05-31 17:36:39.153187 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-31 17:36:39.196258 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-31 17:36:39.196371 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-31 17:36:39.196388 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-31 17:36:39.196401 | orchestrator | deprecation_warnings=False in ansible.cfg. 
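The repository role above removes the classic sources.list and installs a deb822-style ubuntu.sources file instead. A sketch of what such a file typically looks like on Ubuntu 24.04 follows; the mirror URL, suites and components are assumptions, not values taken from this log:

sudo tee /etc/apt/sources.list.d/ubuntu.sources > /dev/null <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble noble-updates noble-security
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
sudo apt-get update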
2025-05-31 17:36:41.941318 | orchestrator | changed: [testbed-manager] 2025-05-31 17:36:41.941467 | orchestrator | 2025-05-31 17:36:41.941487 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-31 17:36:51.150969 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-31 17:36:51.151110 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-31 17:36:51.151129 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-31 17:36:51.151142 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-31 17:36:51.151167 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-31 17:36:51.151179 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-31 17:36:51.151191 | orchestrator | 2025-05-31 17:36:51.151203 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-31 17:36:52.222995 | orchestrator | changed: [testbed-manager] 2025-05-31 17:36:52.223116 | orchestrator | 2025-05-31 17:36:52.223133 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-31 17:36:52.269933 | orchestrator | skipping: [testbed-manager] 2025-05-31 17:36:52.270039 | orchestrator | 2025-05-31 17:36:52.270050 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-31 17:36:55.445033 | orchestrator | changed: [testbed-manager] 2025-05-31 17:36:55.445155 | orchestrator | 2025-05-31 17:36:55.445173 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-31 17:36:55.487877 | orchestrator | skipping: [testbed-manager] 2025-05-31 17:36:55.487979 | orchestrator | 2025-05-31 17:36:55.487993 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-31 17:38:41.227115 | orchestrator | changed: [testbed-manager] 2025-05-31 17:38:41.227204 | orchestrator | 2025-05-31 17:38:41.227217 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-31 17:38:42.405970 | orchestrator | ok: [testbed-manager] 2025-05-31 17:38:42.406213 | orchestrator | 2025-05-31 17:38:42.406234 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 17:38:42.406250 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-31 17:38:42.406262 | orchestrator | 2025-05-31 17:38:42.579166 | orchestrator | ok: Runtime: 0:02:27.260534 2025-05-31 17:38:42.588707 | 2025-05-31 17:38:42.588816 | TASK [Reboot manager] 2025-05-31 17:38:44.123859 | orchestrator | ok: Runtime: 0:00:00.912918 2025-05-31 17:38:44.139202 | 2025-05-31 17:38:44.139370 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-31 17:39:00.598106 | orchestrator | ok 2025-05-31 17:39:00.608512 | 2025-05-31 17:39:00.608644 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-31 17:40:00.654781 | orchestrator | ok 2025-05-31 17:40:00.662338 | 2025-05-31 17:40:00.662462 | TASK [Deploy manager + bootstrap nodes] 2025-05-31 17:40:03.546197 | orchestrator | 2025-05-31 17:40:03.546431 | orchestrator | # DEPLOY MANAGER 2025-05-31 17:40:03.546457 | orchestrator | 2025-05-31 17:40:03.546472 | orchestrator | + set -e 2025-05-31 17:40:03.546485 | orchestrator | + echo 2025-05-31 17:40:03.546498 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-05-31 17:40:03.546516 | orchestrator | + echo 2025-05-31 17:40:03.546566 | orchestrator | + cat /opt/manager-vars.sh 2025-05-31 17:40:03.550078 | orchestrator | export NUMBER_OF_NODES=6 2025-05-31 17:40:03.550160 | orchestrator | 2025-05-31 17:40:03.550176 | orchestrator | export CEPH_VERSION=reef 2025-05-31 17:40:03.550189 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-31 17:40:03.550201 | orchestrator | export MANAGER_VERSION=9.1.0 2025-05-31 17:40:03.550225 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-31 17:40:03.550235 | orchestrator | 2025-05-31 17:40:03.550254 | orchestrator | export ARA=false 2025-05-31 17:40:03.550266 | orchestrator | export TEMPEST=false 2025-05-31 17:40:03.550283 | orchestrator | export IS_ZUUL=true 2025-05-31 17:40:03.550294 | orchestrator | 2025-05-31 17:40:03.550312 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109 2025-05-31 17:40:03.550324 | orchestrator | export EXTERNAL_API=false 2025-05-31 17:40:03.550334 | orchestrator | 2025-05-31 17:40:03.550356 | orchestrator | export IMAGE_USER=ubuntu 2025-05-31 17:40:03.550367 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-31 17:40:03.550377 | orchestrator | 2025-05-31 17:40:03.550392 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-31 17:40:03.550413 | orchestrator | 2025-05-31 17:40:03.550424 | orchestrator | + echo 2025-05-31 17:40:03.550435 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-31 17:40:03.551477 | orchestrator | ++ export INTERACTIVE=false 2025-05-31 17:40:03.551555 | orchestrator | ++ INTERACTIVE=false 2025-05-31 17:40:03.551570 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-31 17:40:03.551581 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-31 17:40:03.551595 | orchestrator | + source /opt/manager-vars.sh 2025-05-31 17:40:03.551606 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-31 17:40:03.551616 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-31 17:40:03.551627 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-31 17:40:03.551638 | orchestrator | ++ CEPH_VERSION=reef 2025-05-31 17:40:03.551657 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-31 17:40:03.551669 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-31 17:40:03.551679 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-05-31 17:40:03.551691 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-05-31 17:40:03.551701 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-31 17:40:03.551712 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-31 17:40:03.551723 | orchestrator | ++ export ARA=false 2025-05-31 17:40:03.551733 | orchestrator | ++ ARA=false 2025-05-31 17:40:03.551755 | orchestrator | ++ export TEMPEST=false 2025-05-31 17:40:03.551766 | orchestrator | ++ TEMPEST=false 2025-05-31 17:40:03.551776 | orchestrator | ++ export IS_ZUUL=true 2025-05-31 17:40:03.551787 | orchestrator | ++ IS_ZUUL=true 2025-05-31 17:40:03.551798 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109 2025-05-31 17:40:03.551809 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109 2025-05-31 17:40:03.551820 | orchestrator | ++ export EXTERNAL_API=false 2025-05-31 17:40:03.551830 | orchestrator | ++ EXTERNAL_API=false 2025-05-31 17:40:03.551841 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-31 17:40:03.551852 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-31 17:40:03.551862 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-31 17:40:03.551873 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-31 
17:40:03.551884 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-31 17:40:03.551894 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-31 17:40:03.551906 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-31 17:40:03.614420 | orchestrator | + docker version 2025-05-31 17:40:03.938857 | orchestrator | Client: Docker Engine - Community 2025-05-31 17:40:03.939036 | orchestrator | Version: 27.5.1 2025-05-31 17:40:03.939060 | orchestrator | API version: 1.47 2025-05-31 17:40:03.939071 | orchestrator | Go version: go1.22.11 2025-05-31 17:40:03.939081 | orchestrator | Git commit: 9f9e405 2025-05-31 17:40:03.939091 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-05-31 17:40:03.939102 | orchestrator | OS/Arch: linux/amd64 2025-05-31 17:40:03.939113 | orchestrator | Context: default 2025-05-31 17:40:03.939123 | orchestrator | 2025-05-31 17:40:03.939133 | orchestrator | Server: Docker Engine - Community 2025-05-31 17:40:03.939143 | orchestrator | Engine: 2025-05-31 17:40:03.939153 | orchestrator | Version: 27.5.1 2025-05-31 17:40:03.939163 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-05-31 17:40:03.939172 | orchestrator | Go version: go1.22.11 2025-05-31 17:40:03.939182 | orchestrator | Git commit: 4c9b3b0 2025-05-31 17:40:03.939216 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-05-31 17:40:03.939227 | orchestrator | OS/Arch: linux/amd64 2025-05-31 17:40:03.939237 | orchestrator | Experimental: false 2025-05-31 17:40:03.939247 | orchestrator | containerd: 2025-05-31 17:40:03.939256 | orchestrator | Version: 1.7.27 2025-05-31 17:40:03.939266 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-31 17:40:03.939276 | orchestrator | runc: 2025-05-31 17:40:03.939286 | orchestrator | Version: 1.2.5 2025-05-31 17:40:03.939310 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-31 17:40:03.939320 | orchestrator | docker-init: 2025-05-31 17:40:03.939329 | orchestrator | Version: 0.19.0 2025-05-31 17:40:03.939339 | orchestrator | GitCommit: de40ad0 2025-05-31 17:40:03.943598 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-31 17:40:03.953489 | orchestrator | + set -e 2025-05-31 17:40:03.953577 | orchestrator | + source /opt/manager-vars.sh 2025-05-31 17:40:03.953589 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-31 17:40:03.953599 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-31 17:40:03.953608 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-31 17:40:03.953618 | orchestrator | ++ CEPH_VERSION=reef 2025-05-31 17:40:03.953627 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-31 17:40:03.953640 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-31 17:40:03.953650 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-05-31 17:40:03.953660 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-05-31 17:40:03.953709 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-31 17:40:03.953725 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-31 17:40:03.953741 | orchestrator | ++ export ARA=false 2025-05-31 17:40:03.953757 | orchestrator | ++ ARA=false 2025-05-31 17:40:03.953773 | orchestrator | ++ export TEMPEST=false 2025-05-31 17:40:03.953790 | orchestrator | ++ TEMPEST=false 2025-05-31 17:40:03.953808 | orchestrator | ++ export IS_ZUUL=true 2025-05-31 17:40:03.953823 | orchestrator | ++ IS_ZUUL=true 2025-05-31 17:40:03.953841 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109 2025-05-31 17:40:03.953858 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109 2025-05-31 17:40:03.953876 | orchestrator | ++ export EXTERNAL_API=false 2025-05-31 17:40:03.953888 | orchestrator | ++ EXTERNAL_API=false 2025-05-31 17:40:03.953909 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-31 17:40:03.953919 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-31 17:40:03.953928 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-31 17:40:03.953938 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-31 17:40:03.953970 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-31 17:40:03.953980 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-31 17:40:03.953990 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-31 17:40:03.953999 | orchestrator | ++ export INTERACTIVE=false 2025-05-31 17:40:03.954009 | orchestrator | ++ INTERACTIVE=false 2025-05-31 17:40:03.954066 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-31 17:40:03.954078 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-31 17:40:03.954087 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-05-31 17:40:03.954097 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.1.0 2025-05-31 17:40:03.961060 | orchestrator | + set -e 2025-05-31 17:40:03.961127 | orchestrator | + VERSION=9.1.0 2025-05-31 17:40:03.961147 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-05-31 17:40:03.967780 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-05-31 17:40:03.967838 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-31 17:40:03.972551 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-31 17:40:03.975567 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-05-31 17:40:03.984545 | orchestrator | /opt/configuration ~ 2025-05-31 17:40:03.984635 | orchestrator | + set -e 2025-05-31 17:40:03.984648 | orchestrator | + pushd /opt/configuration 2025-05-31 17:40:03.984660 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-31 17:40:03.987653 | orchestrator | + source /opt/venv/bin/activate 2025-05-31 17:40:03.988928 | orchestrator | ++ deactivate nondestructive 2025-05-31 17:40:03.989052 | orchestrator | ++ '[' -n '' ']' 2025-05-31 17:40:03.989067 | orchestrator | ++ '[' -n '' ']' 2025-05-31 17:40:03.989078 | orchestrator | ++ hash -r 2025-05-31 17:40:03.989089 | orchestrator | ++ '[' -n '' ']' 2025-05-31 17:40:03.989098 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-31 17:40:03.989108 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-31 17:40:03.989119 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-05-31 17:40:03.989156 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-31 17:40:03.989172 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-31 17:40:03.989193 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-31 17:40:03.989210 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-31 17:40:03.989229 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-31 17:40:03.989373 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-31 17:40:03.989388 | orchestrator | ++ export PATH 2025-05-31 17:40:03.989398 | orchestrator | ++ '[' -n '' ']' 2025-05-31 17:40:03.989408 | orchestrator | ++ '[' -z '' ']' 2025-05-31 17:40:03.989417 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-31 17:40:03.989427 | orchestrator | ++ PS1='(venv) ' 2025-05-31 17:40:03.989437 | orchestrator | ++ export PS1 2025-05-31 17:40:03.989447 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-31 17:40:03.989456 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-31 17:40:03.989465 | orchestrator | ++ hash -r 2025-05-31 17:40:03.989498 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-05-31 17:40:05.421059 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-05-31 17:40:05.422481 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-05-31 17:40:05.424070 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-05-31 17:40:05.425656 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-05-31 17:40:05.427038 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0) 2025-05-31 17:40:05.438239 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1) 2025-05-31 17:40:05.439461 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-05-31 17:40:05.440503 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-05-31 17:40:05.441958 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-05-31 17:40:05.494482 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2) 2025-05-31 17:40:05.496428 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-05-31 17:40:05.498124 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0) 2025-05-31 17:40:05.499623 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26) 2025-05-31 17:40:05.504332 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-05-31 17:40:05.789382 | orchestrator | ++ which gilt 2025-05-31 17:40:05.794817 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-05-31 17:40:05.794917 | orchestrator | + /opt/venv/bin/gilt overlay 2025-05-31 17:40:06.091414 | orchestrator | osism.cfg-generics: 2025-05-31 17:40:06.091504 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics 2025-05-31 17:40:07.778270 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-05-31 17:40:07.778419 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-05-31 17:40:07.779093 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-05-31 17:40:07.779169 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-05-31 17:40:08.847415 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-05-31 17:40:08.863377 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-05-31 17:40:09.415574 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-05-31 17:40:09.477821 | orchestrator | ~ 2025-05-31 17:40:09.477926 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-31 17:40:09.478088 | orchestrator | + deactivate 2025-05-31 17:40:09.478108 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-31 17:40:09.478121 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-31 17:40:09.478132 | orchestrator | + export PATH 2025-05-31 17:40:09.478143 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-31 17:40:09.478154 | orchestrator | + '[' -n '' ']' 2025-05-31 17:40:09.478164 | orchestrator | + hash -r 2025-05-31 17:40:09.478175 | orchestrator | + '[' -n '' ']' 2025-05-31 17:40:09.478185 | orchestrator | + unset VIRTUAL_ENV 2025-05-31 17:40:09.478196 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-31 17:40:09.478206 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-31 17:40:09.478217 | orchestrator | + unset -f deactivate 2025-05-31 17:40:09.478228 | orchestrator | + popd 2025-05-31 17:40:09.479640 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]] 2025-05-31 17:40:09.479667 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-05-31 17:40:09.480135 | orchestrator | ++ semver 9.1.0 7.0.0 2025-05-31 17:40:09.526859 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-31 17:40:09.527018 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-05-31 17:40:09.527048 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-05-31 17:40:09.567067 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-31 17:40:09.567176 | orchestrator | + source /opt/venv/bin/activate 2025-05-31 17:40:09.567227 | orchestrator | ++ deactivate nondestructive 2025-05-31 17:40:09.567242 | orchestrator | ++ '[' -n '' ']' 2025-05-31 17:40:09.567252 | orchestrator | ++ '[' -n '' ']' 2025-05-31 17:40:09.567263 | orchestrator | ++ hash -r 2025-05-31 17:40:09.567274 | orchestrator | ++ '[' -n '' ']' 2025-05-31 17:40:09.567285 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-31 17:40:09.567296 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-31 17:40:09.567307 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-05-31 17:40:09.567345 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-31 17:40:09.567358 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-31 17:40:09.567369 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-31 17:40:09.567379 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-31 17:40:09.567391 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-31 17:40:09.567403 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-31 17:40:09.567428 | orchestrator | ++ export PATH 2025-05-31 17:40:09.567444 | orchestrator | ++ '[' -n '' ']' 2025-05-31 17:40:09.567623 | orchestrator | ++ '[' -z '' ']' 2025-05-31 17:40:09.567645 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-31 17:40:09.567657 | orchestrator | ++ PS1='(venv) ' 2025-05-31 17:40:09.567668 | orchestrator | ++ export PS1 2025-05-31 17:40:09.567679 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-31 17:40:09.567690 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-31 17:40:09.567705 | orchestrator | ++ hash -r 2025-05-31 17:40:09.567871 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-05-31 17:40:11.110192 | orchestrator | 2025-05-31 17:40:11.110311 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-05-31 17:40:11.110326 | orchestrator | 2025-05-31 17:40:11.110336 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-31 17:40:11.818233 | orchestrator | ok: [testbed-manager] 2025-05-31 17:40:11.818307 | orchestrator | 2025-05-31 17:40:11.818314 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-31 17:40:12.963733 | orchestrator | changed: [testbed-manager] 2025-05-31 17:40:12.963873 | orchestrator | 2025-05-31 17:40:12.963902 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-05-31 17:40:12.963922 | orchestrator | 2025-05-31 
17:40:12.963989 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 17:40:15.706606 | orchestrator | ok: [testbed-manager] 2025-05-31 17:40:15.706698 | orchestrator | 2025-05-31 17:40:15.706707 | orchestrator | TASK [Pull images] ************************************************************* 2025-05-31 17:40:21.570248 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-05-31 17:40:21.570354 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.7.2) 2025-05-31 17:40:21.570365 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:9.1.0) 2025-05-31 17:40:21.570372 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:0.20250530.0) 2025-05-31 17:40:21.570379 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:9.1.0) 2025-05-31 17:40:21.570389 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.4-alpine) 2025-05-31 17:40:21.570397 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.2.2) 2025-05-31 17:40:21.570405 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:0.20250530.0) 2025-05-31 17:40:21.570411 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20250530.0) 2025-05-31 17:40:21.570417 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/postgres:16.9-alpine) 2025-05-31 17:40:21.570424 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.4.1) 2025-05-31 17:40:21.570430 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.19.5) 2025-05-31 17:40:21.570437 | orchestrator | 2025-05-31 17:40:21.570443 | orchestrator | TASK [Check status] ************************************************************ 2025-05-31 17:41:28.336163 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-31 17:41:28.336363 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-05-31 17:41:28.336383 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-05-31 17:41:28.336395 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 
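The FAILED - RETRYING lines are only the 'Check status' task polling the asynchronous pulls started by 'Pull images'; the actual failure is reported just below, where the registry answers 'artifact osism/ceph-ansible:9.1.0 not found' (and the same for kolla-ansible:9.1.0), i.e. the 9.1.0 tags are missing from registry.osism.tech. A quick way to confirm that from the manager without pulling anything (a sketch, not part of the job) is:

# Query the registry for the manifests of the two failing tags.
docker manifest inspect registry.osism.tech/osism/ceph-ansible:9.1.0
docker manifest inspect registry.osism.tech/osism/kolla-ansible:9.1.0
# A tag that does exist, e.g. osism/osism-ansible:0.20250530.0 from the same image
# list, returns a manifest; the two commands above should fail with a not-found error.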
2025-05-31 17:41:28.336421 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j631704648660.1648', 'results_file': '/home/dragon/.ansible_async/j631704648660.1648', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-05-31 17:41:28.336440 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j111782892046.1673', 'results_file': '/home/dragon/.ansible_async/j111782892046.1673', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.7.2', 'ansible_loop_var': 'item'}) 2025-05-31 17:41:28.336466 | orchestrator | failed: [testbed-manager] (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j237404973388.1698', 'results_file': '/home/dragon/.ansible_async/j237404973388.1698', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:9.1.0', 'ansible_loop_var': 'item'}) => {"ansible_job_id": "j237404973388.1698", "ansible_loop_var": "async_result_item", "async_result_item": {"ansible_job_id": "j237404973388.1698", "ansible_loop_var": "item", "changed": true, "failed": 0, "finished": 0, "item": "registry.osism.tech/osism/ceph-ansible:9.1.0", "results_file": "/home/dragon/.ansible_async/j237404973388.1698", "started": 1}, "attempts": 1, "changed": false, "finished": 1, "msg": "Error pulling image registry.osism.tech/osism/ceph-ansible:9.1.0 - 500 Server Error for http+docker://localhost/v1.47/images/create?tag=9.1.0&fromImage=registry.osism.tech%2Fosism%2Fceph-ansible: Internal Server Error (\"unknown: artifact osism/ceph-ansible:9.1.0 not found\")", "results_file": "/home/dragon/.ansible_async/j237404973388.1698", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} 2025-05-31 17:41:28.336482 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j840578241790.1730', 'results_file': '/home/dragon/.ansible_async/j840578241790.1730', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:0.20250530.0', 'ansible_loop_var': 'item'}) 2025-05-31 17:41:28.336518 | orchestrator | failed: [testbed-manager] (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j390360224354.1769', 'results_file': '/home/dragon/.ansible_async/j390360224354.1769', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:9.1.0', 'ansible_loop_var': 'item'}) => {"ansible_job_id": "j390360224354.1769", "ansible_loop_var": "async_result_item", "async_result_item": {"ansible_job_id": "j390360224354.1769", "ansible_loop_var": "item", "changed": true, "failed": 0, "finished": 0, "item": "registry.osism.tech/osism/kolla-ansible:9.1.0", "results_file": "/home/dragon/.ansible_async/j390360224354.1769", "started": 1}, "attempts": 1, "changed": false, "finished": 1, "msg": "Error pulling image registry.osism.tech/osism/kolla-ansible:9.1.0 - 500 Server Error for http+docker://localhost/v1.47/images/create?tag=9.1.0&fromImage=registry.osism.tech%2Fosism%2Fkolla-ansible: Internal Server Error (\"unknown: artifact osism/kolla-ansible:9.1.0 not found\")", "results_file": "/home/dragon/.ansible_async/j390360224354.1769", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} 2025-05-31 17:41:28.336531 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j956824094550.1801', 'results_file': 
'/home/dragon/.ansible_async/j956824094550.1801', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.4-alpine', 'ansible_loop_var': 'item'}) 2025-05-31 17:41:28.336543 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-31 17:41:28.336553 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-05-31 17:41:28.336585 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j748600270805.1828', 'results_file': '/home/dragon/.ansible_async/j748600270805.1828', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.2.2', 'ansible_loop_var': 'item'}) 2025-05-31 17:41:28.336598 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j146242047703.1861', 'results_file': '/home/dragon/.ansible_async/j146242047703.1861', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:0.20250530.0', 'ansible_loop_var': 'item'}) 2025-05-31 17:41:28.336610 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j78871629170.1893', 'results_file': '/home/dragon/.ansible_async/j78871629170.1893', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20250530.0', 'ansible_loop_var': 'item'}) 2025-05-31 17:41:28.336622 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j848837139796.1924', 'results_file': '/home/dragon/.ansible_async/j848837139796.1924', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.9-alpine', 'ansible_loop_var': 'item'}) 2025-05-31 17:41:28.336635 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j385494459709.1959', 'results_file': '/home/dragon/.ansible_async/j385494459709.1959', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.4.1', 'ansible_loop_var': 'item'}) 2025-05-31 17:41:28.336648 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j789702251561.1991', 'results_file': '/home/dragon/.ansible_async/j789702251561.1991', 'changed': True, 'item': 'registry.osism.tech/dockerhub/hashicorp/vault:1.19.5', 'ansible_loop_var': 'item'}) 2025-05-31 17:41:28.336690 | orchestrator | 2025-05-31 17:41:28.336705 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 17:41:28.336717 | orchestrator | testbed-manager : ok=4 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 2025-05-31 17:41:28.336729 | orchestrator | 2025-05-31 17:41:28.768955 | orchestrator | ERROR 2025-05-31 17:41:28.769499 | orchestrator | { 2025-05-31 17:41:28.769627 | orchestrator | "delta": "0:01:27.134360", 2025-05-31 17:41:28.769702 | orchestrator | "end": "2025-05-31 17:41:28.501976", 2025-05-31 17:41:28.769778 | orchestrator | "msg": "non-zero return code", 2025-05-31 17:41:28.769835 | orchestrator | "rc": 2, 2025-05-31 17:41:28.769889 | orchestrator | "start": "2025-05-31 17:40:01.367616" 2025-05-31 17:41:28.769957 | orchestrator | } failure 2025-05-31 17:41:28.779993 | 2025-05-31 17:41:28.780178 | PLAY RECAP 2025-05-31 17:41:28.780299 | orchestrator | ok: 20 changed: 7 unreachable: 0 failed: 1 skipped: 2 rescued: 0 ignored: 0 2025-05-31 17:41:28.780351 | 2025-05-31 17:41:28.953177 | RUN END RESULT_NORMAL: [untrusted : 
github.com/osism/testbed/playbooks/deploy.yml@main] 2025-05-31 17:41:28.956748 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-05-31 17:41:29.775588 | 2025-05-31 17:41:29.775803 | PLAY [Post output play] 2025-05-31 17:41:29.793164 | 2025-05-31 17:41:29.793321 | LOOP [stage-output : Register sources] 2025-05-31 17:41:29.865793 | 2025-05-31 17:41:29.866174 | TASK [stage-output : Check sudo] 2025-05-31 17:41:30.778558 | orchestrator | sudo: a password is required 2025-05-31 17:41:30.902957 | orchestrator | ok: Runtime: 0:00:00.015997 2025-05-31 17:41:30.918550 | 2025-05-31 17:41:30.918721 | LOOP [stage-output : Set source and destination for files and folders] 2025-05-31 17:41:30.953736 | 2025-05-31 17:41:30.954010 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-05-31 17:41:31.025262 | orchestrator | ok 2025-05-31 17:41:31.035885 | 2025-05-31 17:41:31.036065 | LOOP [stage-output : Ensure target folders exist] 2025-05-31 17:41:31.534298 | orchestrator | ok: "docs" 2025-05-31 17:41:31.534982 | 2025-05-31 17:41:31.791707 | orchestrator | ok: "artifacts" 2025-05-31 17:41:32.057672 | orchestrator | ok: "logs" 2025-05-31 17:41:32.077058 | 2025-05-31 17:41:32.077332 | LOOP [stage-output : Copy files and folders to staging folder] 2025-05-31 17:41:32.112626 | 2025-05-31 17:41:32.112925 | TASK [stage-output : Make all log files readable] 2025-05-31 17:41:32.415558 | orchestrator | ok 2025-05-31 17:41:32.425066 | 2025-05-31 17:41:32.425247 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-05-31 17:41:32.460920 | orchestrator | skipping: Conditional result was False 2025-05-31 17:41:32.479557 | 2025-05-31 17:41:32.479750 | TASK [stage-output : Discover log files for compression] 2025-05-31 17:41:32.506881 | orchestrator | skipping: Conditional result was False 2025-05-31 17:41:32.519042 | 2025-05-31 17:41:32.519250 | LOOP [stage-output : Archive everything from logs] 2025-05-31 17:41:32.565142 | 2025-05-31 17:41:32.565389 | PLAY [Post cleanup play] 2025-05-31 17:41:32.573930 | 2025-05-31 17:41:32.574041 | TASK [Set cloud fact (Zuul deployment)] 2025-05-31 17:41:32.641895 | orchestrator | ok 2025-05-31 17:41:32.653664 | 2025-05-31 17:41:32.653785 | TASK [Set cloud fact (local deployment)] 2025-05-31 17:41:32.689148 | orchestrator | skipping: Conditional result was False 2025-05-31 17:41:32.704318 | 2025-05-31 17:41:32.704463 | TASK [Clean the cloud environment] 2025-05-31 17:41:33.316108 | orchestrator | 2025-05-31 17:41:33 - clean up servers 2025-05-31 17:41:34.057420 | orchestrator | 2025-05-31 17:41:34 - testbed-manager 2025-05-31 17:41:34.150438 | orchestrator | 2025-05-31 17:41:34 - testbed-node-4 2025-05-31 17:41:34.243865 | orchestrator | 2025-05-31 17:41:34 - testbed-node-1 2025-05-31 17:41:34.405698 | orchestrator | 2025-05-31 17:41:34 - testbed-node-5 2025-05-31 17:41:34.504327 | orchestrator | 2025-05-31 17:41:34 - testbed-node-0 2025-05-31 17:41:34.607445 | orchestrator | 2025-05-31 17:41:34 - testbed-node-3 2025-05-31 17:41:34.729152 | orchestrator | 2025-05-31 17:41:34 - testbed-node-2 2025-05-31 17:41:34.826274 | orchestrator | 2025-05-31 17:41:34 - clean up keypairs 2025-05-31 17:41:34.844809 | orchestrator | 2025-05-31 17:41:34 - testbed 2025-05-31 17:41:34.870686 | orchestrator | 2025-05-31 17:41:34 - wait for servers to be gone 2025-05-31 17:41:43.680990 | orchestrator | 2025-05-31 17:41:43 - clean up ports 2025-05-31 17:41:43.882582 | orchestrator | 2025-05-31 17:41:43 - 
1364e540-5aee-4b78-8b36-c318c357802a 2025-05-31 17:41:44.156170 | orchestrator | 2025-05-31 17:41:44 - 570bea25-8fb0-41ea-a727-c1bded8036d7 2025-05-31 17:41:44.592108 | orchestrator | 2025-05-31 17:41:44 - 6ad49353-2c1b-48f9-ae14-3c65b111d3a8 2025-05-31 17:41:44.851574 | orchestrator | 2025-05-31 17:41:44 - 6b0096b1-8290-42f7-967a-0bc4614057a2 2025-05-31 17:41:45.099191 | orchestrator | 2025-05-31 17:41:45 - 738d7ae6-1882-48e0-8ea1-adfa1b8ee13a 2025-05-31 17:41:45.298187 | orchestrator | 2025-05-31 17:41:45 - 7dc907e5-d640-4730-a678-df3e639e08bd 2025-05-31 17:41:45.503588 | orchestrator | 2025-05-31 17:41:45 - 83eb919c-91e8-4907-b8be-39d0628d6491 2025-05-31 17:41:45.714276 | orchestrator | 2025-05-31 17:41:45 - clean up volumes 2025-05-31 17:41:45.833733 | orchestrator | 2025-05-31 17:41:45 - testbed-volume-3-node-base 2025-05-31 17:41:45.873787 | orchestrator | 2025-05-31 17:41:45 - testbed-volume-0-node-base 2025-05-31 17:41:45.917564 | orchestrator | 2025-05-31 17:41:45 - testbed-volume-4-node-base 2025-05-31 17:41:45.956261 | orchestrator | 2025-05-31 17:41:45 - testbed-volume-1-node-base 2025-05-31 17:41:45.997519 | orchestrator | 2025-05-31 17:41:45 - testbed-volume-2-node-base 2025-05-31 17:41:46.043160 | orchestrator | 2025-05-31 17:41:46 - testbed-volume-5-node-base 2025-05-31 17:41:46.086267 | orchestrator | 2025-05-31 17:41:46 - testbed-volume-manager-base 2025-05-31 17:41:46.134433 | orchestrator | 2025-05-31 17:41:46 - testbed-volume-4-node-4 2025-05-31 17:41:46.176451 | orchestrator | 2025-05-31 17:41:46 - testbed-volume-5-node-5 2025-05-31 17:41:46.215135 | orchestrator | 2025-05-31 17:41:46 - testbed-volume-8-node-5 2025-05-31 17:41:46.255866 | orchestrator | 2025-05-31 17:41:46 - testbed-volume-1-node-4 2025-05-31 17:41:46.306978 | orchestrator | 2025-05-31 17:41:46 - testbed-volume-0-node-3 2025-05-31 17:41:46.350588 | orchestrator | 2025-05-31 17:41:46 - testbed-volume-7-node-4 2025-05-31 17:41:46.393175 | orchestrator | 2025-05-31 17:41:46 - testbed-volume-3-node-3 2025-05-31 17:41:46.440167 | orchestrator | 2025-05-31 17:41:46 - testbed-volume-6-node-3 2025-05-31 17:41:46.483100 | orchestrator | 2025-05-31 17:41:46 - testbed-volume-2-node-5 2025-05-31 17:41:46.528091 | orchestrator | 2025-05-31 17:41:46 - disconnect routers 2025-05-31 17:41:46.603978 | orchestrator | 2025-05-31 17:41:46 - testbed 2025-05-31 17:41:47.538138 | orchestrator | 2025-05-31 17:41:47 - clean up subnets 2025-05-31 17:41:47.593805 | orchestrator | 2025-05-31 17:41:47 - subnet-testbed-management 2025-05-31 17:41:47.807543 | orchestrator | 2025-05-31 17:41:47 - clean up networks 2025-05-31 17:41:47.965702 | orchestrator | 2025-05-31 17:41:47 - net-testbed-management 2025-05-31 17:41:48.253601 | orchestrator | 2025-05-31 17:41:48 - clean up security groups 2025-05-31 17:41:48.299281 | orchestrator | 2025-05-31 17:41:48 - testbed-management 2025-05-31 17:41:48.404307 | orchestrator | 2025-05-31 17:41:48 - testbed-node 2025-05-31 17:41:48.528137 | orchestrator | 2025-05-31 17:41:48 - clean up floating ips 2025-05-31 17:41:48.570342 | orchestrator | 2025-05-31 17:41:48 - 81.163.193.109 2025-05-31 17:41:48.897681 | orchestrator | 2025-05-31 17:41:48 - clean up routers 2025-05-31 17:41:49.024643 | orchestrator | 2025-05-31 17:41:49 - testbed 2025-05-31 17:41:50.266323 | orchestrator | ok: Runtime: 0:00:16.829838 2025-05-31 17:41:50.268854 | 2025-05-31 17:41:50.268952 | PLAY RECAP 2025-05-31 17:41:50.269105 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 
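The cleanup above removes the testbed resources in dependency order: servers, keypairs, ports, volumes, router interfaces, subnets, networks, security groups, floating IPs and finally the router. A manual equivalent with the OpenStack CLI, using the resource names from this log and a placeholder cloud name, would look roughly like:

# <cloud> is a placeholder for the clouds.yaml entry the job uses.
openstack --os-cloud <cloud> server delete --wait testbed-manager testbed-node-{0..5}
openstack --os-cloud <cloud> keypair delete testbed
# The ports and testbed-volume-* volumes listed above are removed the same way
# with `openstack port delete` and `openstack volume delete`.
openstack --os-cloud <cloud> router remove subnet testbed subnet-testbed-management
openstack --os-cloud <cloud> subnet delete subnet-testbed-management
openstack --os-cloud <cloud> network delete net-testbed-management
openstack --os-cloud <cloud> security group delete testbed-management testbed-node
openstack --os-cloud <cloud> floating ip delete 81.163.193.109
openstack --os-cloud <cloud> router delete testbed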
2025-05-31 17:41:50.269287 | 2025-05-31 17:41:50.469854 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-05-31 17:41:50.471225 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-05-31 17:41:51.208177 | 2025-05-31 17:41:51.208408 | PLAY [Cleanup play] 2025-05-31 17:41:51.227312 | 2025-05-31 17:41:51.227470 | TASK [Set cloud fact (Zuul deployment)] 2025-05-31 17:41:51.278373 | orchestrator | ok 2025-05-31 17:41:51.285708 | 2025-05-31 17:41:51.285842 | TASK [Set cloud fact (local deployment)] 2025-05-31 17:41:51.330122 | orchestrator | skipping: Conditional result was False 2025-05-31 17:41:51.338458 | 2025-05-31 17:41:51.338576 | TASK [Clean the cloud environment] 2025-05-31 17:41:52.539027 | orchestrator | 2025-05-31 17:41:52 - clean up servers 2025-05-31 17:41:53.017145 | orchestrator | 2025-05-31 17:41:53 - clean up keypairs 2025-05-31 17:41:53.037438 | orchestrator | 2025-05-31 17:41:53 - wait for servers to be gone 2025-05-31 17:41:53.085225 | orchestrator | 2025-05-31 17:41:53 - clean up ports 2025-05-31 17:41:53.164244 | orchestrator | 2025-05-31 17:41:53 - clean up volumes 2025-05-31 17:41:53.233654 | orchestrator | 2025-05-31 17:41:53 - disconnect routers 2025-05-31 17:41:53.265680 | orchestrator | 2025-05-31 17:41:53 - clean up subnets 2025-05-31 17:41:53.284845 | orchestrator | 2025-05-31 17:41:53 - clean up networks 2025-05-31 17:41:53.437028 | orchestrator | 2025-05-31 17:41:53 - clean up security groups 2025-05-31 17:41:53.476172 | orchestrator | 2025-05-31 17:41:53 - clean up floating ips 2025-05-31 17:41:53.500121 | orchestrator | 2025-05-31 17:41:53 - clean up routers 2025-05-31 17:41:53.877000 | orchestrator | ok: Runtime: 0:00:01.375457 2025-05-31 17:41:53.879528 | 2025-05-31 17:41:53.879642 | PLAY RECAP 2025-05-31 17:41:53.879712 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-05-31 17:41:53.879759 | 2025-05-31 17:41:54.016800 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-05-31 17:41:54.017879 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-05-31 17:41:54.781051 | 2025-05-31 17:41:54.781281 | PLAY [Base post-fetch] 2025-05-31 17:41:54.798606 | 2025-05-31 17:41:54.798799 | TASK [fetch-output : Set log path for multiple nodes] 2025-05-31 17:41:54.855431 | orchestrator | skipping: Conditional result was False 2025-05-31 17:41:54.871435 | 2025-05-31 17:41:54.871653 | TASK [fetch-output : Set log path for single node] 2025-05-31 17:41:54.934924 | orchestrator | ok 2025-05-31 17:41:54.944394 | 2025-05-31 17:41:54.944557 | LOOP [fetch-output : Ensure local output dirs] 2025-05-31 17:41:55.446723 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/3efcbb5c3ed64942a56323660533a892/work/logs" 2025-05-31 17:41:55.753344 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/3efcbb5c3ed64942a56323660533a892/work/artifacts" 2025-05-31 17:41:56.044191 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/3efcbb5c3ed64942a56323660533a892/work/docs" 2025-05-31 17:41:56.068239 | 2025-05-31 17:41:56.068694 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-05-31 17:41:57.101621 | orchestrator | changed: .d..t...... ./ 2025-05-31 17:41:57.102106 | orchestrator | changed: All items complete 2025-05-31 17:41:57.102216 | 2025-05-31 17:41:57.814334 | orchestrator | changed: .d..t...... 
./ 2025-05-31 17:41:58.574553 | orchestrator | changed: .d..t...... ./ 2025-05-31 17:41:58.593446 | 2025-05-31 17:41:58.593596 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-05-31 17:41:58.629625 | orchestrator | skipping: Conditional result was False 2025-05-31 17:41:58.633444 | orchestrator | skipping: Conditional result was False 2025-05-31 17:41:58.648772 | 2025-05-31 17:41:58.648851 | PLAY RECAP 2025-05-31 17:41:58.648904 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-05-31 17:41:58.648930 | 2025-05-31 17:41:58.801654 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-05-31 17:41:58.802733 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-05-31 17:41:59.573080 | 2025-05-31 17:41:59.573281 | PLAY [Base post] 2025-05-31 17:41:59.588307 | 2025-05-31 17:41:59.588455 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-05-31 17:42:00.596775 | orchestrator | changed 2025-05-31 17:42:00.608166 | 2025-05-31 17:42:00.608340 | PLAY RECAP 2025-05-31 17:42:00.608444 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-05-31 17:42:00.608541 | 2025-05-31 17:42:00.764529 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-05-31 17:42:00.765678 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-05-31 17:42:01.602798 | 2025-05-31 17:42:01.603042 | PLAY [Base post-logs] 2025-05-31 17:42:01.614478 | 2025-05-31 17:42:01.614620 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-05-31 17:42:02.117853 | localhost | changed 2025-05-31 17:42:02.137044 | 2025-05-31 17:42:02.137389 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-05-31 17:42:02.171778 | localhost | ok 2025-05-31 17:42:02.178576 | 2025-05-31 17:42:02.178775 | TASK [Set zuul-log-path fact] 2025-05-31 17:42:02.198182 | localhost | ok 2025-05-31 17:42:02.210036 | 2025-05-31 17:42:02.210213 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-05-31 17:42:02.240969 | localhost | ok 2025-05-31 17:42:02.247798 | 2025-05-31 17:42:02.247977 | TASK [upload-logs : Create log directories] 2025-05-31 17:42:02.804033 | localhost | changed 2025-05-31 17:42:02.809033 | 2025-05-31 17:42:02.809221 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-05-31 17:42:03.365821 | localhost -> localhost | ok: Runtime: 0:00:00.009854 2025-05-31 17:42:03.375002 | 2025-05-31 17:42:03.375210 | TASK [upload-logs : Upload logs to log server] 2025-05-31 17:42:03.968277 | localhost | Output suppressed because no_log was given 2025-05-31 17:42:03.972028 | 2025-05-31 17:42:03.972304 | LOOP [upload-logs : Compress console log and json output] 2025-05-31 17:42:04.033457 | localhost | skipping: Conditional result was False 2025-05-31 17:42:04.038234 | localhost | skipping: Conditional result was False 2025-05-31 17:42:04.046589 | 2025-05-31 17:42:04.046829 | LOOP [upload-logs : Upload compressed console log and json output] 2025-05-31 17:42:04.097369 | localhost | skipping: Conditional result was False 2025-05-31 17:42:04.097969 | 2025-05-31 17:42:04.101233 | localhost | skipping: Conditional result was False 2025-05-31 17:42:04.115608 | 2025-05-31 17:42:04.115809 | LOOP [upload-logs : Upload console log and json output]
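The 'Collect logs, artifacts and docs' loop a little further up is an rsync from the node into the executor's work directory; the '.d..t......' lines are rsync's itemized-changes output. A hand-rolled equivalent, assuming the zuul-jobs default ~/zuul-output/{logs,artifacts,docs} layout on the node, would be:

# NODE is a placeholder for the SSH destination of the orchestrator node.
for d in logs artifacts docs; do
    rsync -ai "$NODE:zuul-output/$d/" \
        "/var/lib/zuul/builds/3efcbb5c3ed64942a56323660533a892/work/$d/"
done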