2025-07-29 00:01:40.487488 | Job console starting
2025-07-29 00:01:40.497853 | Updating git repos
2025-07-29 00:01:40.565240 | Cloning repos into workspace
2025-07-29 00:01:40.837731 | Restoring repo states
2025-07-29 00:01:40.897217 | Merging changes
2025-07-29 00:01:40.897239 | Checking out repos
2025-07-29 00:01:41.286686 | Preparing playbooks
2025-07-29 00:01:41.878814 | Running Ansible setup
2025-07-29 00:01:46.748498 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-07-29 00:01:47.654992 |
2025-07-29 00:01:47.655174 | PLAY [Base pre]
2025-07-29 00:01:47.673402 |
2025-07-29 00:01:47.673567 | TASK [Setup log path fact]
2025-07-29 00:01:47.716035 | orchestrator | ok
2025-07-29 00:01:47.734577 |
2025-07-29 00:01:47.734745 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-29 00:01:47.779973 | orchestrator | ok
2025-07-29 00:01:47.793058 |
2025-07-29 00:01:47.793222 | TASK [emit-job-header : Print job information]
2025-07-29 00:01:47.854884 | # Job Information
2025-07-29 00:01:47.855081 | Ansible Version: 2.16.14
2025-07-29 00:01:47.855119 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-07-29 00:01:47.855154 | Pipeline: periodic-midnight
2025-07-29 00:01:47.855177 | Executor: 521e9411259a
2025-07-29 00:01:47.855197 | Triggered by: https://github.com/osism/testbed
2025-07-29 00:01:47.855218 | Event ID: 0e44b324cfd74d2f9c6498b2c726b602
2025-07-29 00:01:47.862561 |
2025-07-29 00:01:47.862708 | LOOP [emit-job-header : Print node information]
2025-07-29 00:01:47.997347 | orchestrator | ok:
2025-07-29 00:01:47.997545 | orchestrator | # Node Information
2025-07-29 00:01:47.997579 | orchestrator | Inventory Hostname: orchestrator
2025-07-29 00:01:47.997605 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-07-29 00:01:47.997627 | orchestrator | Username: zuul-testbed02
2025-07-29 00:01:47.997648 | orchestrator | Distro: Debian 12.11
2025-07-29 00:01:47.997671 | orchestrator | Provider: static-testbed
2025-07-29 00:01:47.997692 | orchestrator | Region:
2025-07-29 00:01:47.997713 | orchestrator | Label: testbed-orchestrator
2025-07-29 00:01:47.997733 | orchestrator | Product Name: OpenStack Nova
2025-07-29 00:01:47.997752 | orchestrator | Interface IP: 81.163.193.140
2025-07-29 00:01:48.011856 |
2025-07-29 00:01:48.011989 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-07-29 00:01:48.657052 | orchestrator -> localhost | changed
2025-07-29 00:01:48.665749 |
2025-07-29 00:01:48.665897 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-07-29 00:01:50.036742 | orchestrator -> localhost | changed
2025-07-29 00:01:50.055604 |
2025-07-29 00:01:50.055740 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-07-29 00:01:50.343911 | orchestrator -> localhost | ok
2025-07-29 00:01:50.351556 |
2025-07-29 00:01:50.351691 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-07-29 00:01:50.381873 | orchestrator | ok
2025-07-29 00:01:50.399457 | orchestrator | included: /var/lib/zuul/builds/38c167d54a044001a5a1ff61df7ed5b1/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-07-29 00:01:50.407965 |
2025-07-29 00:01:50.408083 | TASK [add-build-sshkey : Create Temp SSH key]
2025-07-29 00:01:51.827067 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-07-29 00:01:51.827363 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/38c167d54a044001a5a1ff61df7ed5b1/work/38c167d54a044001a5a1ff61df7ed5b1_id_rsa
2025-07-29 00:01:51.827406 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/38c167d54a044001a5a1ff61df7ed5b1/work/38c167d54a044001a5a1ff61df7ed5b1_id_rsa.pub
2025-07-29 00:01:51.827432 | orchestrator -> localhost | The key fingerprint is:
2025-07-29 00:01:51.827456 | orchestrator -> localhost | SHA256:InsPG8wkHUnPJgK6whmK6HSSrREs/A+vdD/Mz0aIuNE zuul-build-sshkey
2025-07-29 00:01:51.827478 | orchestrator -> localhost | The key's randomart image is:
2025-07-29 00:01:51.827512 | orchestrator -> localhost | +---[RSA 3072]----+
2025-07-29 00:01:51.827535 | orchestrator -> localhost | | . . |
2025-07-29 00:01:51.827557 | orchestrator -> localhost | |o. . . + |
2025-07-29 00:01:51.827577 | orchestrator -> localhost | |o= . + + |
2025-07-29 00:01:51.827596 | orchestrator -> localhost | |*.B o + |
2025-07-29 00:01:51.827616 | orchestrator -> localhost | |*B == = S |
2025-07-29 00:01:51.827640 | orchestrator -> localhost | |+ =o+E o . |
2025-07-29 00:01:51.827662 | orchestrator -> localhost | | o .++O . |
2025-07-29 00:01:51.827682 | orchestrator -> localhost | | ..o..O.. |
2025-07-29 00:01:51.827703 | orchestrator -> localhost | | . ..=o |
2025-07-29 00:01:51.827724 | orchestrator -> localhost | +----[SHA256]-----+
2025-07-29 00:01:51.827781 | orchestrator -> localhost | ok: Runtime: 0:00:00.840680
2025-07-29 00:01:51.836426 |
2025-07-29 00:01:51.836569 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-07-29 00:01:51.866378 | orchestrator | ok
2025-07-29 00:01:51.877245 | orchestrator | included: /var/lib/zuul/builds/38c167d54a044001a5a1ff61df7ed5b1/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-07-29 00:01:51.887091 |
2025-07-29 00:01:51.887225 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-07-29 00:01:51.911570 | orchestrator | skipping: Conditional result was False
2025-07-29 00:01:51.922584 |
2025-07-29 00:01:51.922719 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-07-29 00:01:52.517619 | orchestrator | changed
2025-07-29 00:01:52.526164 |
2025-07-29 00:01:52.526317 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-07-29 00:01:52.794518 | orchestrator | ok
2025-07-29 00:01:52.801258 |
2025-07-29 00:01:52.801397 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-07-29 00:01:53.238427 | orchestrator | ok
2025-07-29 00:01:53.247660 |
2025-07-29 00:01:53.247787 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-07-29 00:01:53.702270 | orchestrator | ok
2025-07-29 00:01:53.711357 |
2025-07-29 00:01:53.711489 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-07-29 00:01:53.737068 | orchestrator | skipping: Conditional result was False
2025-07-29 00:01:53.744192 |
2025-07-29 00:01:53.744349 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-07-29 00:01:54.206156 | orchestrator -> localhost | changed
2025-07-29 00:01:54.220783 |
2025-07-29 00:01:54.220929 | TASK [add-build-sshkey : Add back temp key]
2025-07-29 00:01:54.614257 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/38c167d54a044001a5a1ff61df7ed5b1/work/38c167d54a044001a5a1ff61df7ed5b1_id_rsa (zuul-build-sshkey)
2025-07-29 00:01:54.614801 | orchestrator -> localhost | ok: Runtime: 0:00:00.018953
2025-07-29 00:01:54.626352 |
2025-07-29 00:01:54.626885 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-07-29 00:01:55.067252 | orchestrator | ok
2025-07-29 00:01:55.075771 |
2025-07-29 00:01:55.075910 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-07-29 00:01:55.100758 | orchestrator | skipping: Conditional result was False
2025-07-29 00:01:55.164090 |
2025-07-29 00:01:55.164230 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-07-29 00:01:55.735284 | orchestrator | ok
2025-07-29 00:01:55.748172 |
2025-07-29 00:01:55.748360 | TASK [validate-host : Define zuul_info_dir fact]
2025-07-29 00:01:55.779890 | orchestrator | ok
2025-07-29 00:01:55.787616 |
2025-07-29 00:01:55.787728 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-07-29 00:01:56.150946 | orchestrator -> localhost | ok
2025-07-29 00:01:56.167927 |
2025-07-29 00:01:56.168114 | TASK [validate-host : Collect information about the host]
2025-07-29 00:01:57.447712 | orchestrator | ok
2025-07-29 00:01:57.466128 |
2025-07-29 00:01:57.466283 | TASK [validate-host : Sanitize hostname]
2025-07-29 00:01:57.549117 | orchestrator | ok
2025-07-29 00:01:57.555728 |
2025-07-29 00:01:57.555837 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-07-29 00:01:58.216811 | orchestrator -> localhost | changed
2025-07-29 00:01:58.223778 |
2025-07-29 00:01:58.223911 | TASK [validate-host : Collect information about zuul worker]
2025-07-29 00:01:58.664110 | orchestrator | ok
2025-07-29 00:01:58.672152 |
2025-07-29 00:01:58.672477 | TASK [validate-host : Write out all zuul information for each host]
2025-07-29 00:01:59.275469 | orchestrator -> localhost | changed
2025-07-29 00:01:59.286694 |
2025-07-29 00:01:59.286823 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-07-29 00:01:59.581801 | orchestrator | ok
2025-07-29 00:01:59.588489 |
2025-07-29 00:01:59.588601 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-07-29 00:02:45.009149 | orchestrator | changed:
2025-07-29 00:02:45.009549 | orchestrator | .d..t...... src/
2025-07-29 00:02:45.009603 | orchestrator | .d..t...... src/github.com/
2025-07-29 00:02:45.009640 | orchestrator | .d..t...... src/github.com/osism/
2025-07-29 00:02:45.009672 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-07-29 00:02:45.009703 | orchestrator | RedHat.yml
2025-07-29 00:02:45.025166 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-07-29 00:02:45.025184 | orchestrator | RedHat.yml
2025-07-29 00:02:45.025236 | orchestrator | = 1.53.0"...
2025-07-29 00:02:58.955576 | orchestrator | 00:02:58.955 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-07-29 00:02:59.131343 | orchestrator | 00:02:59.131 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-07-29 00:02:59.772930 | orchestrator | 00:02:59.772 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-07-29 00:03:00.205957 | orchestrator | 00:03:00.205 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-07-29 00:03:04.694295 | orchestrator | 00:03:04.693 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-07-29 00:03:04.771240 | orchestrator | 00:03:04.770 STDOUT terraform: - Installing hashicorp/local v2.5.3...
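The "Synchronize src repos" task above logs rsync `--itemize-changes` strings such as `.d..t......` and `.L..t......`. As a minimal sketch (not part of the job itself), those strings can be decoded as: position 0 is the update type, position 1 the file type, and each later position one changed attribute; only the flag letters shown here are handled, see rsync(1) for the full semantics:

```python
# Decode rsync --itemize-changes strings like ".d..t......" (sketch only;
# flag letters and positions per the rsync man page, abbreviated here).
FILE_TYPES = {"f": "file", "d": "directory", "L": "symlink", "D": "device", "S": "special"}
ATTR_NAMES = {
    "c": "checksum", "s": "size", "t": "mtime", "p": "permissions",
    "o": "owner", "g": "group", "a": "acl", "x": "xattr",
}

def decode_itemized(item: str) -> dict:
    """Split an itemized-change string into update type, file type, and changed attrs."""
    changed = [ATTR_NAMES[ch] for ch in item[2:] if ch in ATTR_NAMES]
    return {
        "update_type": item[0],                      # '.', '>', '<', 'c', 'h', '*'
        "file_type": FILE_TYPES.get(item[1], "unknown"),
        "changed_attributes": changed,
    }

# ".d..t......" from the log: a directory whose modification time differed.
info = decode_itemized(".d..t......")
```

So the `.d..t......` entries in the log simply report directories re-synced with a newer mtime, and `.L..t......` a symlink (`CentOS.yml -> RedHat.yml`) with a changed mtime.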
2025-07-29 00:03:05.290443 | orchestrator | 00:03:05.290 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-07-29 00:03:05.290510 | orchestrator | 00:03:05.290 STDOUT terraform: Providers are signed by their developers.
2025-07-29 00:03:05.290533 | orchestrator | 00:03:05.290 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-07-29 00:03:05.290558 | orchestrator | 00:03:05.290 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-07-29 00:03:05.290712 | orchestrator | 00:03:05.290 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-07-29 00:03:05.290766 | orchestrator | 00:03:05.290 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-07-29 00:03:05.290815 | orchestrator | 00:03:05.290 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-07-29 00:03:05.290843 | orchestrator | 00:03:05.290 STDOUT terraform: you run "tofu init" in the future.
2025-07-29 00:03:05.291329 | orchestrator | 00:03:05.291 STDOUT terraform: OpenTofu has been successfully initialized!
2025-07-29 00:03:05.291412 | orchestrator | 00:03:05.291 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-07-29 00:03:05.291474 | orchestrator | 00:03:05.291 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-07-29 00:03:05.291494 | orchestrator | 00:03:05.291 STDOUT terraform: should now work.
2025-07-29 00:03:05.291549 | orchestrator | 00:03:05.291 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-07-29 00:03:05.291605 | orchestrator | 00:03:05.291 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-07-29 00:03:05.291644 | orchestrator | 00:03:05.291 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-07-29 00:03:05.402599 | orchestrator | 00:03:05.402 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-07-29 00:03:05.402675 | orchestrator | 00:03:05.402 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-07-29 00:03:05.605011 | orchestrator | 00:03:05.604 STDOUT terraform: Created and switched to workspace "ci"!
2025-07-29 00:03:05.605105 | orchestrator | 00:03:05.604 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-07-29 00:03:05.605289 | orchestrator | 00:03:05.605 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-07-29 00:03:05.605337 | orchestrator | 00:03:05.605 STDOUT terraform: for this configuration.
2025-07-29 00:03:05.759712 | orchestrator | 00:03:05.759 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-07-29 00:03:05.759797 | orchestrator | 00:03:05.759 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-07-29 00:03:05.882298 | orchestrator | 00:03:05.882 STDOUT terraform: ci.auto.tfvars
2025-07-29 00:03:05.888875 | orchestrator | 00:03:05.888 STDOUT terraform: default_custom.tf
2025-07-29 00:03:06.022578 | orchestrator | 00:03:06.022 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
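The init phase above reports each provider install on a `- Installed <provider> v<version> (signed, key ID <id>)` line. When triaging these logs it can be handy to pull the provider set out mechanically; a minimal sketch, using sample lines copied from the output above (the parsing helper itself is not part of the job):

```python
import re

# Sample lines copied verbatim from the "tofu init" output in this log.
INIT_LOG = """\
- Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
- Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
- Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
"""

# Match the provider address, version, and signing key ID.
PATTERN = re.compile(
    r"- Installed (?P<provider>\S+) v(?P<version>\S+) \(signed, key ID (?P<key>[0-9A-F]+)\)"
)

providers = {m["provider"]: m["version"] for m in PATTERN.finditer(INIT_LOG)}
# providers maps each provider address to its installed version.
```

The resulting mapping is exactly what `.terraform.lock.hcl` pins, which is why the init output recommends committing that lock file.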
2025-07-29 00:03:06.914202 | orchestrator | 00:03:06.914 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-07-29 00:03:07.468909 | orchestrator | 00:03:07.467 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-07-29 00:03:07.767723 | orchestrator | 00:03:07.767 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-07-29 00:03:07.767823 | orchestrator | 00:03:07.767 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-07-29 00:03:07.767830 | orchestrator | 00:03:07.767 STDOUT terraform:   + create
2025-07-29 00:03:07.767836 | orchestrator | 00:03:07.767 STDOUT terraform:  <= read (data resources)
2025-07-29 00:03:07.767843 | orchestrator | 00:03:07.767 STDOUT terraform: OpenTofu will perform the following actions:
2025-07-29 00:03:07.768351 | orchestrator | 00:03:07.768 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-07-29 00:03:07.768403 | orchestrator | 00:03:07.768 STDOUT terraform:   # (config refers to values not yet known)
2025-07-29 00:03:07.768437 | orchestrator | 00:03:07.768 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-07-29 00:03:07.768466 | orchestrator | 00:03:07.768 STDOUT terraform:   + checksum = (known after apply)
2025-07-29 00:03:07.768495 | orchestrator | 00:03:07.768 STDOUT terraform:   + created_at = (known after apply)
2025-07-29 00:03:07.768526 | orchestrator | 00:03:07.768 STDOUT terraform:   + file = (known after apply)
2025-07-29 00:03:07.768553 | orchestrator | 00:03:07.768 STDOUT terraform:   + id = (known after apply)
2025-07-29 00:03:07.768585 | orchestrator | 00:03:07.768 STDOUT terraform:   + metadata = (known after apply)
2025-07-29 00:03:07.768614 | orchestrator | 00:03:07.768 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-07-29 00:03:07.768642 | orchestrator | 00:03:07.768 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-07-29 00:03:07.768663 | orchestrator | 00:03:07.768 STDOUT terraform:   + most_recent = true
2025-07-29 00:03:07.768690 | orchestrator | 00:03:07.768 STDOUT terraform:   + name = (known after apply)
2025-07-29 00:03:07.768718 | orchestrator | 00:03:07.768 STDOUT terraform:   + protected = (known after apply)
2025-07-29 00:03:07.768759 | orchestrator | 00:03:07.768 STDOUT terraform:   + region = (known after apply)
2025-07-29 00:03:07.768788 | orchestrator | 00:03:07.768 STDOUT terraform:   + schema = (known after apply)
2025-07-29 00:03:07.768816 | orchestrator | 00:03:07.768 STDOUT terraform:   + size_bytes = (known after apply)
2025-07-29 00:03:07.768844 | orchestrator | 00:03:07.768 STDOUT terraform:   + tags = (known after apply)
2025-07-29 00:03:07.768871 | orchestrator | 00:03:07.768 STDOUT terraform:   + updated_at = (known after apply)
2025-07-29 00:03:07.768885 | orchestrator | 00:03:07.768 STDOUT terraform:   }
2025-07-29 00:03:07.768941 | orchestrator | 00:03:07.768 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-07-29 00:03:07.768966 | orchestrator | 00:03:07.768 STDOUT terraform:   # (config refers to values not yet known)
2025-07-29 00:03:07.769000 | orchestrator | 00:03:07.768 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-07-29 00:03:07.769028 | orchestrator | 00:03:07.768 STDOUT terraform:   + checksum = (known after apply)
2025-07-29 00:03:07.769056 | orchestrator | 00:03:07.769 STDOUT terraform:   + created_at = (known after apply)
2025-07-29 00:03:07.769087 | orchestrator | 00:03:07.769 STDOUT terraform:   + file = (known after apply)
2025-07-29 00:03:07.769115 | orchestrator | 00:03:07.769 STDOUT terraform:   + id = (known after apply)
2025-07-29 00:03:07.769143 | orchestrator | 00:03:07.769 STDOUT terraform:   + metadata = (known after apply)
2025-07-29 00:03:07.769170 | orchestrator | 00:03:07.769 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-07-29 00:03:07.769236 | orchestrator | 00:03:07.769 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-07-29 00:03:07.769265 | orchestrator | 00:03:07.769 STDOUT terraform:   + most_recent = true
2025-07-29 00:03:07.769290 | orchestrator | 00:03:07.769 STDOUT terraform:   + name = (known after apply)
2025-07-29 00:03:07.769318 | orchestrator | 00:03:07.769 STDOUT terraform:   + protected = (known after apply)
2025-07-29 00:03:07.769347 | orchestrator | 00:03:07.769 STDOUT terraform:   + region = (known after apply)
2025-07-29 00:03:07.769375 | orchestrator | 00:03:07.769 STDOUT terraform:   + schema = (known after apply)
2025-07-29 00:03:07.769404 | orchestrator | 00:03:07.769 STDOUT terraform:   + size_bytes = (known after apply)
2025-07-29 00:03:07.769431 | orchestrator | 00:03:07.769 STDOUT terraform:   + tags = (known after apply)
2025-07-29 00:03:07.769459 | orchestrator | 00:03:07.769 STDOUT terraform:   + updated_at = (known after apply)
2025-07-29 00:03:07.769475 | orchestrator | 00:03:07.769 STDOUT terraform:   }
2025-07-29 00:03:07.769514 | orchestrator | 00:03:07.769 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-07-29 00:03:07.769544 | orchestrator | 00:03:07.769 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-07-29 00:03:07.769578 | orchestrator | 00:03:07.769 STDOUT terraform:   + content = (known after apply)
2025-07-29 00:03:07.769612 | orchestrator | 00:03:07.769 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-07-29 00:03:07.769645 | orchestrator | 00:03:07.769 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-07-29 00:03:07.769680 | orchestrator | 00:03:07.769 STDOUT terraform:   + content_md5 = (known after apply)
2025-07-29 00:03:07.769714 | orchestrator | 00:03:07.769 STDOUT terraform:   + content_sha1 = (known after apply)
2025-07-29 00:03:07.769773 | orchestrator | 00:03:07.769 STDOUT terraform:   + content_sha256 = (known after apply)
2025-07-29 00:03:07.769791 | orchestrator | 00:03:07.769 STDOUT terraform:   + content_sha512 = (known after apply)
2025-07-29 00:03:07.769817 | orchestrator | 00:03:07.769 STDOUT terraform:   + directory_permission = "0777"
2025-07-29 00:03:07.769840 | orchestrator | 00:03:07.769 STDOUT terraform:   + file_permission = "0644"
2025-07-29 00:03:07.769876 | orchestrator | 00:03:07.769 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-07-29 00:03:07.769910 | orchestrator | 00:03:07.769 STDOUT terraform:   + id = (known after apply)
2025-07-29 00:03:07.769924 | orchestrator | 00:03:07.769 STDOUT terraform:   }
2025-07-29 00:03:07.769950 | orchestrator | 00:03:07.769 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-07-29 00:03:07.769974 | orchestrator | 00:03:07.769 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-07-29 00:03:07.770026 | orchestrator | 00:03:07.769 STDOUT terraform:   + content = (known after apply)
2025-07-29 00:03:07.770060 | orchestrator | 00:03:07.770 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-07-29 00:03:07.770094 | orchestrator | 00:03:07.770 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-07-29 00:03:07.770130 | orchestrator | 00:03:07.770 STDOUT terraform:   + content_md5 = (known after apply)
2025-07-29 00:03:07.770165 | orchestrator | 00:03:07.770 STDOUT terraform:   + content_sha1 = (known after apply)
2025-07-29 00:03:07.770201 | orchestrator | 00:03:07.770 STDOUT terraform:   + content_sha256 = (known after apply)
2025-07-29 00:03:07.770235 | orchestrator | 00:03:07.770 STDOUT terraform:   + content_sha512 = (known after apply)
2025-07-29 00:03:07.770258 | orchestrator | 00:03:07.770 STDOUT terraform:   + directory_permission = "0777"
2025-07-29 00:03:07.770281 | orchestrator | 00:03:07.770 STDOUT terraform:   + file_permission = "0644"
2025-07-29 00:03:07.770311 | orchestrator | 00:03:07.770 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-07-29 00:03:07.770346 | orchestrator | 00:03:07.770 STDOUT terraform:   + id = (known after apply)
2025-07-29 00:03:07.770363 | orchestrator | 00:03:07.770 STDOUT terraform:   }
2025-07-29 00:03:07.770390 | orchestrator | 00:03:07.770 STDOUT terraform:   # local_file.inventory will be created
2025-07-29 00:03:07.770409 | orchestrator | 00:03:07.770 STDOUT terraform:   + resource "local_file" "inventory" {
2025-07-29 00:03:07.770444 | orchestrator | 00:03:07.770 STDOUT terraform:   + content = (known after apply)
2025-07-29 00:03:07.770479 | orchestrator | 00:03:07.770 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-07-29 00:03:07.770511 | orchestrator | 00:03:07.770 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-07-29 00:03:07.770545 | orchestrator | 00:03:07.770 STDOUT terraform:   + content_md5 = (known after apply)
2025-07-29 00:03:07.770580 | orchestrator | 00:03:07.770 STDOUT terraform:   + content_sha1 = (known after apply)
2025-07-29 00:03:07.770616 | orchestrator | 00:03:07.770 STDOUT terraform:   + content_sha256 = (known after apply)
2025-07-29 00:03:07.770648 | orchestrator | 00:03:07.770 STDOUT terraform:   + content_sha512 = (known after apply)
2025-07-29 00:03:07.770671 | orchestrator | 00:03:07.770 STDOUT terraform:   + directory_permission = "0777"
2025-07-29 00:03:07.770694 | orchestrator | 00:03:07.770 STDOUT terraform:   + file_permission = "0644"
2025-07-29 00:03:07.770723 | orchestrator | 00:03:07.770 STDOUT terraform:   + filename = "inventory.ci"
2025-07-29 00:03:07.770785 | orchestrator | 00:03:07.770 STDOUT terraform:   + id = (known after apply)
2025-07-29 00:03:07.770791 | orchestrator | 00:03:07.770 STDOUT terraform:   }
2025-07-29 00:03:07.770832 | orchestrator | 00:03:07.770 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-07-29 00:03:07.770860 | orchestrator | 00:03:07.770 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-07-29 00:03:07.770890 | orchestrator | 00:03:07.770 STDOUT terraform:   + content = (sensitive value)
2025-07-29 00:03:07.770923 | orchestrator | 00:03:07.770 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-07-29 00:03:07.770956 | orchestrator | 00:03:07.770 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-07-29 00:03:07.770993 | orchestrator | 00:03:07.770 STDOUT terraform:   + content_md5 = (known after apply)
2025-07-29 00:03:07.771028 | orchestrator | 00:03:07.770 STDOUT terraform:   + content_sha1 = (known after apply)
2025-07-29 00:03:07.771062 | orchestrator | 00:03:07.771 STDOUT terraform:   + content_sha256 = (known after apply)
2025-07-29 00:03:07.771096 | orchestrator | 00:03:07.771 STDOUT terraform:   + content_sha512 = (known after apply)
2025-07-29 00:03:07.771121 | orchestrator | 00:03:07.771 STDOUT terraform:   + directory_permission = "0700"
2025-07-29 00:03:07.771144 | orchestrator | 00:03:07.771 STDOUT terraform:   + file_permission = "0600"
2025-07-29 00:03:07.771172 | orchestrator | 00:03:07.771 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-07-29 00:03:07.771206 | orchestrator | 00:03:07.771 STDOUT terraform:   + id = (known after apply)
2025-07-29 00:03:07.771213 | orchestrator | 00:03:07.771 STDOUT terraform:   }
2025-07-29 00:03:07.771245 | orchestrator | 00:03:07.771 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-07-29 00:03:07.771273 | orchestrator | 00:03:07.771 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-07-29 00:03:07.771293 | orchestrator | 00:03:07.771 STDOUT terraform:   + id = (known after apply)
2025-07-29 00:03:07.771299 | orchestrator | 00:03:07.771 STDOUT terraform:   }
2025-07-29 00:03:07.771351 | orchestrator | 00:03:07.771 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-07-29 00:03:07.771398 | orchestrator | 00:03:07.771 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-07-29 00:03:07.771429 | orchestrator | 00:03:07.771 STDOUT terraform:   + attachment = (known after apply)
2025-07-29 00:03:07.771452 | orchestrator | 00:03:07.771 STDOUT terraform:   + availability_zone = "nova"
2025-07-29 00:03:07.771486 | orchestrator | 00:03:07.771 STDOUT terraform:   + id = (known after apply)
2025-07-29 00:03:07.771521 | orchestrator | 00:03:07.771 STDOUT terraform:   + image_id = (known after apply)
2025-07-29 00:03:07.771555 | orchestrator | 00:03:07.771 STDOUT terraform:   + metadata = (known after apply)
2025-07-29 00:03:07.771599 | orchestrator | 00:03:07.771 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-07-29 00:03:07.771633 | orchestrator | 00:03:07.771 STDOUT terraform:   + region = (known after apply)
2025-07-29 00:03:07.771654 | orchestrator | 00:03:07.771 STDOUT terraform:   + size = 80
2025-07-29 00:03:07.771677 | orchestrator | 00:03:07.771 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-29 00:03:07.771700 | orchestrator | 00:03:07.771 STDOUT terraform:   + volume_type = "ssd"
2025-07-29 00:03:07.771760 | orchestrator | 00:03:07.771 STDOUT terraform:   }
2025-07-29 00:03:07.771912 | orchestrator | 00:03:07.771 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-07-29 00:03:07.771958 | orchestrator | 00:03:07.771 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-29 00:03:07.771992 | orchestrator | 00:03:07.771 STDOUT terraform:   + attachment = (known after apply)
2025-07-29 00:03:07.772016 | orchestrator | 00:03:07.771 STDOUT terraform:   + availability_zone = "nova"
2025-07-29 00:03:07.772050 | orchestrator | 00:03:07.772 STDOUT terraform:   + id = (known after apply)
2025-07-29 00:03:07.772087 | orchestrator | 00:03:07.772 STDOUT terraform:   + image_id = (known after apply)
2025-07-29 00:03:07.772122 | orchestrator | 00:03:07.772 STDOUT terraform:   + metadata = (known after apply)
2025-07-29 00:03:07.772165 | orchestrator | 00:03:07.772 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-07-29 00:03:07.772201 | orchestrator | 00:03:07.772 STDOUT terraform:   + region = (known after apply)
2025-07-29 00:03:07.772224 | orchestrator | 00:03:07.772 STDOUT terraform:   + size = 80
2025-07-29 00:03:07.772247 | orchestrator | 00:03:07.772 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-29 00:03:07.772272 | orchestrator | 00:03:07.772 STDOUT terraform:   + volume_type = "ssd"
2025-07-29 00:03:07.772278 | orchestrator | 00:03:07.772 STDOUT terraform:   }
2025-07-29 00:03:07.772428 | orchestrator | 00:03:07.772 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-07-29 00:03:07.772474 | orchestrator | 00:03:07.772 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-29 00:03:07.772510 | orchestrator | 00:03:07.772 STDOUT terraform:   + attachment = (known after apply)
2025-07-29 00:03:07.772533 | orchestrator | 00:03:07.772 STDOUT terraform:   + availability_zone = "nova"
2025-07-29 00:03:07.772570 | orchestrator | 00:03:07.772 STDOUT terraform:   + id = (known after apply)
2025-07-29 00:03:07.772605 | orchestrator | 00:03:07.772 STDOUT terraform:   + image_id = (known after apply)
2025-07-29 00:03:07.772640 | orchestrator | 00:03:07.772 STDOUT terraform:   + metadata = (known after apply)
2025-07-29 00:03:07.772684 | orchestrator | 00:03:07.772 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-07-29 00:03:07.772718 | orchestrator | 00:03:07.772 STDOUT terraform:   + region = (known after apply)
2025-07-29 00:03:07.772739 | orchestrator | 00:03:07.772 STDOUT terraform:   + size = 80
2025-07-29 00:03:07.772772 | orchestrator | 00:03:07.772 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-29 00:03:07.772797 | orchestrator | 00:03:07.772 STDOUT terraform:   + volume_type = "ssd"
2025-07-29 00:03:07.772803 | orchestrator | 00:03:07.772 STDOUT terraform:   }
2025-07-29 00:03:07.773406 | orchestrator | 00:03:07.773 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-07-29 00:03:07.773439 | orchestrator | 00:03:07.773 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-29 00:03:07.773481 | orchestrator | 00:03:07.773 STDOUT terraform:   + attachment = (known after apply)
2025-07-29 00:03:07.773510 | orchestrator | 00:03:07.773 STDOUT terraform:   + availability_zone = "nova"
2025-07-29 00:03:07.773558 | orchestrator | 00:03:07.773 STDOUT terraform:   + id = (known after apply)
2025-07-29 00:03:07.773565 | orchestrator | 00:03:07.773 STDOUT terraform:   + image_id = (known after apply)
2025-07-29 00:03:07.773617 | orchestrator | 00:03:07.773 STDOUT terraform:   + metadata = (known after apply)
2025-07-29 00:03:07.773645 | orchestrator | 00:03:07.773 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-07-29 00:03:07.773688 | orchestrator | 00:03:07.773 STDOUT terraform:   + region = (known after apply)
2025-07-29 00:03:07.773694 | orchestrator | 00:03:07.773 STDOUT terraform:   + size = 80
2025-07-29 00:03:07.773720 | orchestrator | 00:03:07.773 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-29 00:03:07.773758 | orchestrator | 00:03:07.773 STDOUT terraform:   + volume_type = "ssd"
2025-07-29 00:03:07.773789 | orchestrator | 00:03:07.773 STDOUT terraform:   }
2025-07-29 00:03:07.773936 | orchestrator | 00:03:07.773 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-07-29 00:03:07.773986 | orchestrator | 00:03:07.773 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-29 00:03:07.774024 | orchestrator | 00:03:07.773 STDOUT terraform:   + attachment = (known after apply)
2025-07-29 00:03:07.774062 | orchestrator | 00:03:07.774 STDOUT terraform:   + availability_zone = "nova"
2025-07-29 00:03:07.774099 | orchestrator | 00:03:07.774 STDOUT terraform:   + id = (known after apply)
2025-07-29 00:03:07.774145 | orchestrator | 00:03:07.774 STDOUT terraform:   + image_id = (known after apply)
2025-07-29 00:03:07.774204 | orchestrator | 00:03:07.774 STDOUT terraform:   + metadata = (known after apply)
2025-07-29 00:03:07.774263 | orchestrator | 00:03:07.774 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-07-29 00:03:07.774306 | orchestrator | 00:03:07.774 STDOUT terraform:   + region = (known after apply)
2025-07-29 00:03:07.774323 | orchestrator | 00:03:07.774 STDOUT terraform:   + size = 80
2025-07-29 00:03:07.774347 | orchestrator | 00:03:07.774 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-29 00:03:07.774378 | orchestrator | 00:03:07.774 STDOUT terraform:   + volume_type = "ssd"
2025-07-29 00:03:07.774383 | orchestrator | 00:03:07.774 STDOUT terraform:   }
2025-07-29 00:03:07.774515 | orchestrator | 00:03:07.774 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-07-29 00:03:07.774562 | orchestrator | 00:03:07.774 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-29 00:03:07.774593 | orchestrator | 00:03:07.774 STDOUT terraform:   + attachment = (known after apply)
2025-07-29 00:03:07.774634 | orchestrator | 00:03:07.774 STDOUT terraform:   + availability_zone = "nova"
2025-07-29 00:03:07.774655 | orchestrator | 00:03:07.774 STDOUT terraform:   + id = (known after apply)
2025-07-29 00:03:07.774690 | orchestrator | 00:03:07.774 STDOUT terraform:   + image_id = (known after apply)
2025-07-29 00:03:07.774724 | orchestrator | 00:03:07.774 STDOUT terraform:   + metadata = (known after apply)
2025-07-29 00:03:07.774804 | orchestrator | 00:03:07.774 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-07-29 00:03:07.774836 | orchestrator | 00:03:07.774 STDOUT terraform:   + region = (known after apply)
2025-07-29 00:03:07.774858 | orchestrator | 00:03:07.774 STDOUT terraform:   + size = 80
2025-07-29 00:03:07.774887 | orchestrator | 00:03:07.774 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-29 00:03:07.774907 | orchestrator | 00:03:07.774 STDOUT terraform:   + volume_type = "ssd"
2025-07-29 00:03:07.774913 | orchestrator | 00:03:07.774 STDOUT terraform:   }
2025-07-29 00:03:07.775049 | orchestrator | 00:03:07.774 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-07-29 00:03:07.775094 | orchestrator | 00:03:07.775 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-29 00:03:07.775129 | orchestrator | 00:03:07.775 STDOUT terraform:   + attachment = (known after apply)
2025-07-29 00:03:07.775160 | orchestrator | 00:03:07.775 STDOUT terraform:   + availability_zone = "nova"
2025-07-29 00:03:07.775188 | orchestrator | 00:03:07.775 STDOUT terraform:   + id = (known after apply)
2025-07-29 00:03:07.775245 | orchestrator | 00:03:07.775 STDOUT terraform:   + image_id = (known after apply)
2025-07-29 00:03:07.775252 | orchestrator | 00:03:07.775 STDOUT terraform:   + metadata = (known after apply)
2025-07-29 00:03:07.775299 | orchestrator | 00:03:07.775 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-07-29 00:03:07.775328 | orchestrator | 00:03:07.775 STDOUT terraform:   + region = (known after apply)
2025-07-29 00:03:07.775349 | orchestrator | 00:03:07.775 STDOUT terraform:   + size = 80
2025-07-29 00:03:07.775374 | orchestrator | 00:03:07.775 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-29 00:03:07.775410 | orchestrator | 00:03:07.775 STDOUT terraform:   + volume_type = "ssd"
2025-07-29 00:03:07.775415 | orchestrator | 00:03:07.775 STDOUT terraform:   }
2025-07-29 00:03:07.778292 | orchestrator | 00:03:07.778 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-07-29 00:03:07.778373 | orchestrator | 00:03:07.778 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-07-29 00:03:07.778437 | orchestrator | 00:03:07.778 STDOUT
terraform:  + attachment = (known after apply) 2025-07-29 00:03:07.778471 | orchestrator | 00:03:07.778 STDOUT terraform:  + availability_zone = "nova" 2025-07-29 00:03:07.778530 | orchestrator | 00:03:07.778 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.778590 | orchestrator | 00:03:07.778 STDOUT terraform:  + metadata = (known after apply) 2025-07-29 00:03:07.778636 | orchestrator | 00:03:07.778 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-07-29 00:03:07.778693 | orchestrator | 00:03:07.778 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.778739 | orchestrator | 00:03:07.778 STDOUT terraform:  + size = 20 2025-07-29 00:03:07.778790 | orchestrator | 00:03:07.778 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-29 00:03:07.778836 | orchestrator | 00:03:07.778 STDOUT terraform:  + volume_type = "ssd" 2025-07-29 00:03:07.778858 | orchestrator | 00:03:07.778 STDOUT terraform:  } 2025-07-29 00:03:07.778922 | orchestrator | 00:03:07.778 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-07-29 00:03:07.778987 | orchestrator | 00:03:07.778 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-29 00:03:07.779040 | orchestrator | 00:03:07.778 STDOUT terraform:  + attachment = (known after apply) 2025-07-29 00:03:07.779079 | orchestrator | 00:03:07.779 STDOUT terraform:  + availability_zone = "nova" 2025-07-29 00:03:07.779136 | orchestrator | 00:03:07.779 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.779177 | orchestrator | 00:03:07.779 STDOUT terraform:  + metadata = (known after apply) 2025-07-29 00:03:07.779236 | orchestrator | 00:03:07.779 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-07-29 00:03:07.779291 | orchestrator | 00:03:07.779 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.779322 | orchestrator | 00:03:07.779 STDOUT terraform:  + size = 20 2025-07-29 00:03:07.779366 | 
orchestrator | 00:03:07.779 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-29 00:03:07.779397 | orchestrator | 00:03:07.779 STDOUT terraform:  + volume_type = "ssd" 2025-07-29 00:03:07.779416 | orchestrator | 00:03:07.779 STDOUT terraform:  } 2025-07-29 00:03:07.779480 | orchestrator | 00:03:07.779 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-07-29 00:03:07.779544 | orchestrator | 00:03:07.779 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-29 00:03:07.779601 | orchestrator | 00:03:07.779 STDOUT terraform:  + attachment = (known after apply) 2025-07-29 00:03:07.779641 | orchestrator | 00:03:07.779 STDOUT terraform:  + availability_zone = "nova" 2025-07-29 00:03:07.779698 | orchestrator | 00:03:07.779 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.779764 | orchestrator | 00:03:07.779 STDOUT terraform:  + metadata = (known after apply) 2025-07-29 00:03:07.779810 | orchestrator | 00:03:07.779 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-07-29 00:03:07.779865 | orchestrator | 00:03:07.779 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.779892 | orchestrator | 00:03:07.779 STDOUT terraform:  + size = 20 2025-07-29 00:03:07.779942 | orchestrator | 00:03:07.779 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-29 00:03:07.779974 | orchestrator | 00:03:07.779 STDOUT terraform:  + volume_type = "ssd" 2025-07-29 00:03:07.780008 | orchestrator | 00:03:07.779 STDOUT terraform:  } 2025-07-29 00:03:07.780067 | orchestrator | 00:03:07.780 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-07-29 00:03:07.780121 | orchestrator | 00:03:07.780 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-29 00:03:07.780176 | orchestrator | 00:03:07.780 STDOUT terraform:  + attachment = (known after apply) 2025-07-29 00:03:07.780206 | orchestrator | 
00:03:07.780 STDOUT terraform:  + availability_zone = "nova" 2025-07-29 00:03:07.780263 | orchestrator | 00:03:07.780 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.780318 | orchestrator | 00:03:07.780 STDOUT terraform:  + metadata = (known after apply) 2025-07-29 00:03:07.780363 | orchestrator | 00:03:07.780 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-07-29 00:03:07.780418 | orchestrator | 00:03:07.780 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.780445 | orchestrator | 00:03:07.780 STDOUT terraform:  + size = 20 2025-07-29 00:03:07.780488 | orchestrator | 00:03:07.780 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-29 00:03:07.780519 | orchestrator | 00:03:07.780 STDOUT terraform:  + volume_type = "ssd" 2025-07-29 00:03:07.780553 | orchestrator | 00:03:07.780 STDOUT terraform:  } 2025-07-29 00:03:07.780616 | orchestrator | 00:03:07.780 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-07-29 00:03:07.780669 | orchestrator | 00:03:07.780 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-29 00:03:07.780724 | orchestrator | 00:03:07.780 STDOUT terraform:  + attachment = (known after apply) 2025-07-29 00:03:07.780781 | orchestrator | 00:03:07.780 STDOUT terraform:  + availability_zone = "nova" 2025-07-29 00:03:07.780832 | orchestrator | 00:03:07.780 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.780887 | orchestrator | 00:03:07.780 STDOUT terraform:  + metadata = (known after apply) 2025-07-29 00:03:07.780932 | orchestrator | 00:03:07.780 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-07-29 00:03:07.780987 | orchestrator | 00:03:07.780 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.781035 | orchestrator | 00:03:07.781 STDOUT terraform:  + size = 20 2025-07-29 00:03:07.781067 | orchestrator | 00:03:07.781 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-29 
00:03:07.781112 | orchestrator | 00:03:07.781 STDOUT terraform:  + volume_type = "ssd" 2025-07-29 00:03:07.781135 | orchestrator | 00:03:07.781 STDOUT terraform:  } 2025-07-29 00:03:07.781199 | orchestrator | 00:03:07.781 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-07-29 00:03:07.781256 | orchestrator | 00:03:07.781 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-29 00:03:07.781303 | orchestrator | 00:03:07.781 STDOUT terraform:  + attachment = (known after apply) 2025-07-29 00:03:07.781340 | orchestrator | 00:03:07.781 STDOUT terraform:  + availability_zone = "nova" 2025-07-29 00:03:07.781387 | orchestrator | 00:03:07.781 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.781443 | orchestrator | 00:03:07.781 STDOUT terraform:  + metadata = (known after apply) 2025-07-29 00:03:07.781487 | orchestrator | 00:03:07.781 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-07-29 00:03:07.781542 | orchestrator | 00:03:07.781 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.781583 | orchestrator | 00:03:07.781 STDOUT terraform:  + size = 20 2025-07-29 00:03:07.781621 | orchestrator | 00:03:07.781 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-29 00:03:07.781652 | orchestrator | 00:03:07.781 STDOUT terraform:  + volume_type = "ssd" 2025-07-29 00:03:07.781685 | orchestrator | 00:03:07.781 STDOUT terraform:  } 2025-07-29 00:03:07.781733 | orchestrator | 00:03:07.781 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-07-29 00:03:07.781825 | orchestrator | 00:03:07.781 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-29 00:03:07.781882 | orchestrator | 00:03:07.781 STDOUT terraform:  + attachment = (known after apply) 2025-07-29 00:03:07.781913 | orchestrator | 00:03:07.781 STDOUT terraform:  + availability_zone = "nova" 2025-07-29 00:03:07.781971 | 
orchestrator | 00:03:07.781 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.782038 | orchestrator | 00:03:07.781 STDOUT terraform:  + metadata = (known after apply) 2025-07-29 00:03:07.782102 | orchestrator | 00:03:07.782 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-07-29 00:03:07.782147 | orchestrator | 00:03:07.782 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.782183 | orchestrator | 00:03:07.782 STDOUT terraform:  + size = 20 2025-07-29 00:03:07.782222 | orchestrator | 00:03:07.782 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-29 00:03:07.782253 | orchestrator | 00:03:07.782 STDOUT terraform:  + volume_type = "ssd" 2025-07-29 00:03:07.782274 | orchestrator | 00:03:07.782 STDOUT terraform:  } 2025-07-29 00:03:07.782338 | orchestrator | 00:03:07.782 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-07-29 00:03:07.782388 | orchestrator | 00:03:07.782 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-29 00:03:07.782436 | orchestrator | 00:03:07.782 STDOUT terraform:  + attachment = (known after apply) 2025-07-29 00:03:07.782467 | orchestrator | 00:03:07.782 STDOUT terraform:  + availability_zone = "nova" 2025-07-29 00:03:07.782517 | orchestrator | 00:03:07.782 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.782575 | orchestrator | 00:03:07.782 STDOUT terraform:  + metadata = (known after apply) 2025-07-29 00:03:07.782620 | orchestrator | 00:03:07.782 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-07-29 00:03:07.782663 | orchestrator | 00:03:07.782 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.782692 | orchestrator | 00:03:07.782 STDOUT terraform:  + size = 20 2025-07-29 00:03:07.782722 | orchestrator | 00:03:07.782 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-29 00:03:07.782763 | orchestrator | 00:03:07.782 STDOUT terraform:  + volume_type = "ssd" 
2025-07-29 00:03:07.782784 | orchestrator | 00:03:07.782 STDOUT terraform:  } 2025-07-29 00:03:07.782834 | orchestrator | 00:03:07.782 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-07-29 00:03:07.782884 | orchestrator | 00:03:07.782 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-29 00:03:07.782963 | orchestrator | 00:03:07.782 STDOUT terraform:  + attachment = (known after apply) 2025-07-29 00:03:07.783006 | orchestrator | 00:03:07.782 STDOUT terraform:  + availability_zone = "nova" 2025-07-29 00:03:07.783050 | orchestrator | 00:03:07.783 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.783091 | orchestrator | 00:03:07.783 STDOUT terraform:  + metadata = (known after apply) 2025-07-29 00:03:07.783136 | orchestrator | 00:03:07.783 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-07-29 00:03:07.783178 | orchestrator | 00:03:07.783 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.783207 | orchestrator | 00:03:07.783 STDOUT terraform:  + size = 20 2025-07-29 00:03:07.783238 | orchestrator | 00:03:07.783 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-29 00:03:07.783270 | orchestrator | 00:03:07.783 STDOUT terraform:  + volume_type = "ssd" 2025-07-29 00:03:07.783291 | orchestrator | 00:03:07.783 STDOUT terraform:  } 2025-07-29 00:03:07.783342 | orchestrator | 00:03:07.783 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-07-29 00:03:07.783392 | orchestrator | 00:03:07.783 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-07-29 00:03:07.783434 | orchestrator | 00:03:07.783 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-29 00:03:07.783475 | orchestrator | 00:03:07.783 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-29 00:03:07.783516 | orchestrator | 00:03:07.783 STDOUT terraform:  + all_metadata = (known after apply) 
2025-07-29 00:03:07.783557 | orchestrator | 00:03:07.783 STDOUT terraform:  + all_tags = (known after apply) 2025-07-29 00:03:07.783589 | orchestrator | 00:03:07.783 STDOUT terraform:  + availability_zone = "nova" 2025-07-29 00:03:07.783637 | orchestrator | 00:03:07.783 STDOUT terraform:  + config_drive = true 2025-07-29 00:03:07.783679 | orchestrator | 00:03:07.783 STDOUT terraform:  + created = (known after apply) 2025-07-29 00:03:07.783720 | orchestrator | 00:03:07.783 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-29 00:03:07.783785 | orchestrator | 00:03:07.783 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-07-29 00:03:07.783817 | orchestrator | 00:03:07.783 STDOUT terraform:  + force_delete = false 2025-07-29 00:03:07.783860 | orchestrator | 00:03:07.783 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-29 00:03:07.783903 | orchestrator | 00:03:07.783 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.783945 | orchestrator | 00:03:07.783 STDOUT terraform:  + image_id = (known after apply) 2025-07-29 00:03:07.783986 | orchestrator | 00:03:07.783 STDOUT terraform:  + image_name = (known after apply) 2025-07-29 00:03:07.784019 | orchestrator | 00:03:07.783 STDOUT terraform:  + key_pair = "testbed" 2025-07-29 00:03:07.784057 | orchestrator | 00:03:07.784 STDOUT terraform:  + name = "testbed-manager" 2025-07-29 00:03:07.784089 | orchestrator | 00:03:07.784 STDOUT terraform:  + power_state = "active" 2025-07-29 00:03:07.784130 | orchestrator | 00:03:07.784 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.784173 | orchestrator | 00:03:07.784 STDOUT terraform:  + security_groups = (known after apply) 2025-07-29 00:03:07.784204 | orchestrator | 00:03:07.784 STDOUT terraform:  + stop_before_destroy = false 2025-07-29 00:03:07.784247 | orchestrator | 00:03:07.784 STDOUT terraform:  + updated = (known after apply) 2025-07-29 00:03:07.784295 | orchestrator | 00:03:07.784 STDOUT terraform:  + 
user_data = (sensitive value) 2025-07-29 00:03:07.784322 | orchestrator | 00:03:07.784 STDOUT terraform:  + block_device { 2025-07-29 00:03:07.784354 | orchestrator | 00:03:07.784 STDOUT terraform:  + boot_index = 0 2025-07-29 00:03:07.784389 | orchestrator | 00:03:07.784 STDOUT terraform:  + delete_on_termination = false 2025-07-29 00:03:07.784425 | orchestrator | 00:03:07.784 STDOUT terraform:  + destination_type = "volume" 2025-07-29 00:03:07.784460 | orchestrator | 00:03:07.784 STDOUT terraform:  + multiattach = false 2025-07-29 00:03:07.784497 | orchestrator | 00:03:07.784 STDOUT terraform:  + source_type = "volume" 2025-07-29 00:03:07.784542 | orchestrator | 00:03:07.784 STDOUT terraform:  + uuid = (known after apply) 2025-07-29 00:03:07.784564 | orchestrator | 00:03:07.784 STDOUT terraform:  } 2025-07-29 00:03:07.784586 | orchestrator | 00:03:07.784 STDOUT terraform:  + network { 2025-07-29 00:03:07.784620 | orchestrator | 00:03:07.784 STDOUT terraform:  + access_network = false 2025-07-29 00:03:07.784658 | orchestrator | 00:03:07.784 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-29 00:03:07.784695 | orchestrator | 00:03:07.784 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-29 00:03:07.784735 | orchestrator | 00:03:07.784 STDOUT terraform:  + mac = (known after apply) 2025-07-29 00:03:07.784789 | orchestrator | 00:03:07.784 STDOUT terraform:  + name = (known after apply) 2025-07-29 00:03:07.784828 | orchestrator | 00:03:07.784 STDOUT terraform:  + port = (known after apply) 2025-07-29 00:03:07.784866 | orchestrator | 00:03:07.784 STDOUT terraform:  + uuid = (known after apply) 2025-07-29 00:03:07.784888 | orchestrator | 00:03:07.784 STDOUT terraform:  } 2025-07-29 00:03:07.784910 | orchestrator | 00:03:07.784 STDOUT terraform:  } 2025-07-29 00:03:07.784973 | orchestrator | 00:03:07.784 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-07-29 00:03:07.785022 | orchestrator | 00:03:07.784 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-29 00:03:07.785065 | orchestrator | 00:03:07.785 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-29 00:03:07.785112 | orchestrator | 00:03:07.785 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-29 00:03:07.785155 | orchestrator | 00:03:07.785 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-29 00:03:07.785198 | orchestrator | 00:03:07.785 STDOUT terraform:  + all_tags = (known after apply) 2025-07-29 00:03:07.785229 | orchestrator | 00:03:07.785 STDOUT terraform:  + availability_zone = "nova" 2025-07-29 00:03:07.785257 | orchestrator | 00:03:07.785 STDOUT terraform:  + config_drive = true 2025-07-29 00:03:07.785299 | orchestrator | 00:03:07.785 STDOUT terraform:  + created = (known after apply) 2025-07-29 00:03:07.785340 | orchestrator | 00:03:07.785 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-29 00:03:07.785378 | orchestrator | 00:03:07.785 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-29 00:03:07.785409 | orchestrator | 00:03:07.785 STDOUT terraform:  + force_delete = false 2025-07-29 00:03:07.785451 | orchestrator | 00:03:07.785 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-29 00:03:07.785498 | orchestrator | 00:03:07.785 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.785542 | orchestrator | 00:03:07.785 STDOUT terraform:  + image_id = (known after apply) 2025-07-29 00:03:07.785584 | orchestrator | 00:03:07.785 STDOUT terraform:  + image_name = (known after apply) 2025-07-29 00:03:07.785616 | orchestrator | 00:03:07.785 STDOUT terraform:  + key_pair = "testbed" 2025-07-29 00:03:07.785654 | orchestrator | 00:03:07.785 STDOUT terraform:  + name = "testbed-node-0" 2025-07-29 00:03:07.785685 | orchestrator | 00:03:07.785 STDOUT terraform:  + power_state = "active" 2025-07-29 00:03:07.785726 | orchestrator | 00:03:07.785 STDOUT terraform:  + region = (known after 
apply) 2025-07-29 00:03:07.785782 | orchestrator | 00:03:07.785 STDOUT terraform:  + security_groups = (known after apply) 2025-07-29 00:03:07.785812 | orchestrator | 00:03:07.785 STDOUT terraform:  + stop_before_destroy = false 2025-07-29 00:03:07.785853 | orchestrator | 00:03:07.785 STDOUT terraform:  + updated = (known after apply) 2025-07-29 00:03:07.785909 | orchestrator | 00:03:07.785 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-29 00:03:07.785933 | orchestrator | 00:03:07.785 STDOUT terraform:  + block_device { 2025-07-29 00:03:07.785970 | orchestrator | 00:03:07.785 STDOUT terraform:  + boot_index = 0 2025-07-29 00:03:07.786004 | orchestrator | 00:03:07.785 STDOUT terraform:  + delete_on_termination = false 2025-07-29 00:03:07.786052 | orchestrator | 00:03:07.786 STDOUT terraform:  + destination_type = "volume" 2025-07-29 00:03:07.786088 | orchestrator | 00:03:07.786 STDOUT terraform:  + multiattach = false 2025-07-29 00:03:07.786125 | orchestrator | 00:03:07.786 STDOUT terraform:  + source_type = "volume" 2025-07-29 00:03:07.786169 | orchestrator | 00:03:07.786 STDOUT terraform:  + uuid = (known after apply) 2025-07-29 00:03:07.786190 | orchestrator | 00:03:07.786 STDOUT terraform:  } 2025-07-29 00:03:07.786212 | orchestrator | 00:03:07.786 STDOUT terraform:  + network { 2025-07-29 00:03:07.786239 | orchestrator | 00:03:07.786 STDOUT terraform:  + access_network = false 2025-07-29 00:03:07.786278 | orchestrator | 00:03:07.786 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-29 00:03:07.786315 | orchestrator | 00:03:07.786 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-29 00:03:07.786354 | orchestrator | 00:03:07.786 STDOUT terraform:  + mac = (known after apply) 2025-07-29 00:03:07.786391 | orchestrator | 00:03:07.786 STDOUT terraform:  + name = (known after apply) 2025-07-29 00:03:07.786429 | orchestrator | 00:03:07.786 STDOUT terraform:  + port = (known after apply) 2025-07-29 
00:03:07.786466 | orchestrator | 00:03:07.786 STDOUT terraform:  + uuid = (known after apply) 2025-07-29 00:03:07.786487 | orchestrator | 00:03:07.786 STDOUT terraform:  } 2025-07-29 00:03:07.786507 | orchestrator | 00:03:07.786 STDOUT terraform:  } 2025-07-29 00:03:07.786555 | orchestrator | 00:03:07.786 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-07-29 00:03:07.786602 | orchestrator | 00:03:07.786 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-29 00:03:07.786644 | orchestrator | 00:03:07.786 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-29 00:03:07.786685 | orchestrator | 00:03:07.786 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-29 00:03:07.786726 | orchestrator | 00:03:07.786 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-29 00:03:07.786789 | orchestrator | 00:03:07.786 STDOUT terraform:  + all_tags = (known after apply) 2025-07-29 00:03:07.786821 | orchestrator | 00:03:07.786 STDOUT terraform:  + availability_zone = "nova" 2025-07-29 00:03:07.786848 | orchestrator | 00:03:07.786 STDOUT terraform:  + config_drive = true 2025-07-29 00:03:07.786891 | orchestrator | 00:03:07.786 STDOUT terraform:  + created = (known after apply) 2025-07-29 00:03:07.786931 | orchestrator | 00:03:07.786 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-29 00:03:07.786967 | orchestrator | 00:03:07.786 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-29 00:03:07.786999 | orchestrator | 00:03:07.786 STDOUT terraform:  + force_delete = false 2025-07-29 00:03:07.787042 | orchestrator | 00:03:07.787 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-29 00:03:07.787090 | orchestrator | 00:03:07.787 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.787131 | orchestrator | 00:03:07.787 STDOUT terraform:  + image_id = (known after apply) 2025-07-29 00:03:07.787171 | orchestrator | 00:03:07.787 STDOUT 
terraform:  + image_name = (known after apply) 2025-07-29 00:03:07.787202 | orchestrator | 00:03:07.787 STDOUT terraform:  + key_pair = "testbed" 2025-07-29 00:03:07.787239 | orchestrator | 00:03:07.787 STDOUT terraform:  + name = "testbed-node-1" 2025-07-29 00:03:07.787269 | orchestrator | 00:03:07.787 STDOUT terraform:  + power_state = "active" 2025-07-29 00:03:07.787310 | orchestrator | 00:03:07.787 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.787351 | orchestrator | 00:03:07.787 STDOUT terraform:  + security_groups = (known after apply) 2025-07-29 00:03:07.787382 | orchestrator | 00:03:07.787 STDOUT terraform:  + stop_before_destroy = false 2025-07-29 00:03:07.787423 | orchestrator | 00:03:07.787 STDOUT terraform:  + updated = (known after apply) 2025-07-29 00:03:07.787480 | orchestrator | 00:03:07.787 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-29 00:03:07.787505 | orchestrator | 00:03:07.787 STDOUT terraform:  + block_device { 2025-07-29 00:03:07.787536 | orchestrator | 00:03:07.787 STDOUT terraform:  + boot_index = 0 2025-07-29 00:03:07.787581 | orchestrator | 00:03:07.787 STDOUT terraform:  + delete_on_termination = false 2025-07-29 00:03:07.787623 | orchestrator | 00:03:07.787 STDOUT terraform:  + destination_type = "volume" 2025-07-29 00:03:07.787658 | orchestrator | 00:03:07.787 STDOUT terraform:  + multiattach = false 2025-07-29 00:03:07.787694 | orchestrator | 00:03:07.787 STDOUT terraform:  + source_type = "volume" 2025-07-29 00:03:07.787737 | orchestrator | 00:03:07.787 STDOUT terraform:  + uuid = (known after apply) 2025-07-29 00:03:07.787770 | orchestrator | 00:03:07.787 STDOUT terraform:  } 2025-07-29 00:03:07.787791 | orchestrator | 00:03:07.787 STDOUT terraform:  + network { 2025-07-29 00:03:07.787820 | orchestrator | 00:03:07.787 STDOUT terraform:  + access_network = false 2025-07-29 00:03:07.787857 | orchestrator | 00:03:07.787 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-07-29 00:03:07.787894 | orchestrator | 00:03:07.787 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-29 00:03:07.787932 | orchestrator | 00:03:07.787 STDOUT terraform:  + mac = (known after apply) 2025-07-29 00:03:07.787969 | orchestrator | 00:03:07.787 STDOUT terraform:  + name = (known after apply) 2025-07-29 00:03:07.788007 | orchestrator | 00:03:07.787 STDOUT terraform:  + port = (known after apply) 2025-07-29 00:03:07.788044 | orchestrator | 00:03:07.788 STDOUT terraform:  + uuid = (known after apply) 2025-07-29 00:03:07.788064 | orchestrator | 00:03:07.788 STDOUT terraform:  } 2025-07-29 00:03:07.788084 | orchestrator | 00:03:07.788 STDOUT terraform:  } 2025-07-29 00:03:07.788153 | orchestrator | 00:03:07.788 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-07-29 00:03:07.788201 | orchestrator | 00:03:07.788 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-29 00:03:07.788249 | orchestrator | 00:03:07.788 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-29 00:03:07.788291 | orchestrator | 00:03:07.788 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-29 00:03:07.788332 | orchestrator | 00:03:07.788 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-29 00:03:07.788373 | orchestrator | 00:03:07.788 STDOUT terraform:  + all_tags = (known after apply) 2025-07-29 00:03:07.788404 | orchestrator | 00:03:07.788 STDOUT terraform:  + availability_zone = "nova" 2025-07-29 00:03:07.788431 | orchestrator | 00:03:07.788 STDOUT terraform:  + config_drive = true 2025-07-29 00:03:07.788473 | orchestrator | 00:03:07.788 STDOUT terraform:  + created = (known after apply) 2025-07-29 00:03:07.788514 | orchestrator | 00:03:07.788 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-29 00:03:07.788551 | orchestrator | 00:03:07.788 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-29 00:03:07.788582 | orchestrator | 00:03:07.788 
2025-07-29 00:03:07 | orchestrator | STDOUT terraform:
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-29 00:03:07.818967 | orchestrator | 00:03:07.814 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-29 00:03:07.818970 | orchestrator | 00:03:07.814 STDOUT terraform:  + all_tags = (known after apply) 2025-07-29 00:03:07.818974 | orchestrator | 00:03:07.814 STDOUT terraform:  + device_id = (known after apply) 2025-07-29 00:03:07.818978 | orchestrator | 00:03:07.814 STDOUT terraform:  + device_owner = (known after apply) 2025-07-29 00:03:07.818982 | orchestrator | 00:03:07.814 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-29 00:03:07.818986 | orchestrator | 00:03:07.814 STDOUT terraform:  + dns_name = (known after apply) 2025-07-29 00:03:07.818990 | orchestrator | 00:03:07.814 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.818994 | orchestrator | 00:03:07.814 STDOUT terraform:  + mac_address = (known after apply) 2025-07-29 00:03:07.818998 | orchestrator | 00:03:07.814 STDOUT terraform:  + network_id = (known after apply) 2025-07-29 00:03:07.819001 | orchestrator | 00:03:07.814 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-29 00:03:07.819005 | orchestrator | 00:03:07.814 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-29 00:03:07.819009 | orchestrator | 00:03:07.814 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.819012 | orchestrator | 00:03:07.814 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-29 00:03:07.819016 | orchestrator | 00:03:07.814 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-29 00:03:07.819020 | orchestrator | 00:03:07.814 STDOUT terraform:  + allowed_address_pairs { 2025-07-29 00:03:07.819024 | orchestrator | 00:03:07.814 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-29 00:03:07.819028 | orchestrator | 00:03:07.814 STDOUT terraform:  } 2025-07-29 00:03:07.819031 | orchestrator | 00:03:07.814 STDOUT terraform:  
+ allowed_address_pairs { 2025-07-29 00:03:07.819035 | orchestrator | 00:03:07.814 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-29 00:03:07.819039 | orchestrator | 00:03:07.814 STDOUT terraform:  } 2025-07-29 00:03:07.819043 | orchestrator | 00:03:07.814 STDOUT terraform:  + allowed_address_pairs { 2025-07-29 00:03:07.819047 | orchestrator | 00:03:07.814 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-29 00:03:07.819053 | orchestrator | 00:03:07.814 STDOUT terraform:  } 2025-07-29 00:03:07.819057 | orchestrator | 00:03:07.814 STDOUT terraform:  + allowed_address_pairs { 2025-07-29 00:03:07.819061 | orchestrator | 00:03:07.814 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-29 00:03:07.819065 | orchestrator | 00:03:07.814 STDOUT terraform:  } 2025-07-29 00:03:07.819068 | orchestrator | 00:03:07.814 STDOUT terraform:  + binding (known after apply) 2025-07-29 00:03:07.819072 | orchestrator | 00:03:07.814 STDOUT terraform:  + fixed_ip { 2025-07-29 00:03:07.819081 | orchestrator | 00:03:07.814 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-07-29 00:03:07.819086 | orchestrator | 00:03:07.814 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-29 00:03:07.819089 | orchestrator | 00:03:07.814 STDOUT terraform:  } 2025-07-29 00:03:07.819093 | orchestrator | 00:03:07.814 STDOUT terraform:  } 2025-07-29 00:03:07.819097 | orchestrator | 00:03:07.814 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-07-29 00:03:07.819101 | orchestrator | 00:03:07.814 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-29 00:03:07.819105 | orchestrator | 00:03:07.814 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-29 00:03:07.819109 | orchestrator | 00:03:07.814 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-29 00:03:07.819113 | orchestrator | 00:03:07.814 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-07-29 00:03:07.819116 | orchestrator | 00:03:07.814 STDOUT terraform:  + all_tags = (known after apply) 2025-07-29 00:03:07.819120 | orchestrator | 00:03:07.815 STDOUT terraform:  + device_id = (known after apply) 2025-07-29 00:03:07.819124 | orchestrator | 00:03:07.815 STDOUT terraform:  + device_owner = (known after apply) 2025-07-29 00:03:07.819131 | orchestrator | 00:03:07.815 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-29 00:03:07.819135 | orchestrator | 00:03:07.815 STDOUT terraform:  + dns_name = (known after apply) 2025-07-29 00:03:07.819141 | orchestrator | 00:03:07.815 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.819145 | orchestrator | 00:03:07.815 STDOUT terraform:  + mac_address = (known after apply) 2025-07-29 00:03:07.819149 | orchestrator | 00:03:07.815 STDOUT terraform:  + network_id = (known after apply) 2025-07-29 00:03:07.819153 | orchestrator | 00:03:07.815 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-29 00:03:07.819156 | orchestrator | 00:03:07.815 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-29 00:03:07.819160 | orchestrator | 00:03:07.815 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.819164 | orchestrator | 00:03:07.815 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-29 00:03:07.819168 | orchestrator | 00:03:07.815 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-29 00:03:07.819171 | orchestrator | 00:03:07.815 STDOUT terraform:  + allowed_address_pairs { 2025-07-29 00:03:07.819175 | orchestrator | 00:03:07.815 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-29 00:03:07.819182 | orchestrator | 00:03:07.815 STDOUT terraform:  } 2025-07-29 00:03:07.819186 | orchestrator | 00:03:07.815 STDOUT terraform:  + allowed_address_pairs { 2025-07-29 00:03:07.819189 | orchestrator | 00:03:07.815 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-29 00:03:07.819193 | 
orchestrator | 00:03:07.815 STDOUT terraform:  } 2025-07-29 00:03:07.819197 | orchestrator | 00:03:07.815 STDOUT terraform:  + allowed_address_pairs { 2025-07-29 00:03:07.819201 | orchestrator | 00:03:07.815 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-29 00:03:07.819204 | orchestrator | 00:03:07.815 STDOUT terraform:  } 2025-07-29 00:03:07.819208 | orchestrator | 00:03:07.815 STDOUT terraform:  + allowed_address_pairs { 2025-07-29 00:03:07.819212 | orchestrator | 00:03:07.815 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-29 00:03:07.819216 | orchestrator | 00:03:07.815 STDOUT terraform:  } 2025-07-29 00:03:07.819219 | orchestrator | 00:03:07.815 STDOUT terraform:  + binding (known after apply) 2025-07-29 00:03:07.819223 | orchestrator | 00:03:07.815 STDOUT terraform:  + fixed_ip { 2025-07-29 00:03:07.819227 | orchestrator | 00:03:07.815 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-07-29 00:03:07.819231 | orchestrator | 00:03:07.815 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-29 00:03:07.819235 | orchestrator | 00:03:07.815 STDOUT terraform:  } 2025-07-29 00:03:07.819241 | orchestrator | 00:03:07.815 STDOUT terraform:  } 2025-07-29 00:03:07.819245 | orchestrator | 00:03:07.815 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-07-29 00:03:07.819249 | orchestrator | 00:03:07.815 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-29 00:03:07.819253 | orchestrator | 00:03:07.815 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-29 00:03:07.819257 | orchestrator | 00:03:07.815 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-29 00:03:07.819260 | orchestrator | 00:03:07.815 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-29 00:03:07.819264 | orchestrator | 00:03:07.815 STDOUT terraform:  + all_tags = (known after apply) 2025-07-29 00:03:07.819268 | orchestrator | 
00:03:07.815 STDOUT terraform:  + device_id = (known after apply) 2025-07-29 00:03:07.819272 | orchestrator | 00:03:07.815 STDOUT terraform:  + device_owner = (known after apply) 2025-07-29 00:03:07.819276 | orchestrator | 00:03:07.815 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-29 00:03:07.819279 | orchestrator | 00:03:07.815 STDOUT terraform:  + dns_name = (known after apply) 2025-07-29 00:03:07.819283 | orchestrator | 00:03:07.815 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.819287 | orchestrator | 00:03:07.816 STDOUT terraform:  + mac_address = (known after apply) 2025-07-29 00:03:07.819291 | orchestrator | 00:03:07.816 STDOUT terraform:  + network_id = (known after apply) 2025-07-29 00:03:07.819297 | orchestrator | 00:03:07.816 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-29 00:03:07.819303 | orchestrator | 00:03:07.816 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-29 00:03:07.819307 | orchestrator | 00:03:07.816 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.819311 | orchestrator | 00:03:07.816 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-29 00:03:07.819315 | orchestrator | 00:03:07.816 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-29 00:03:07.819318 | orchestrator | 00:03:07.816 STDOUT terraform:  + allowed_address_pairs { 2025-07-29 00:03:07.819322 | orchestrator | 00:03:07.816 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-29 00:03:07.819326 | orchestrator | 00:03:07.816 STDOUT terraform:  } 2025-07-29 00:03:07.819330 | orchestrator | 00:03:07.816 STDOUT terraform:  + allowed_address_pairs { 2025-07-29 00:03:07.819334 | orchestrator | 00:03:07.816 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-29 00:03:07.819337 | orchestrator | 00:03:07.816 STDOUT terraform:  } 2025-07-29 00:03:07.819341 | orchestrator | 00:03:07.816 STDOUT terraform:  + allowed_address_pairs { 2025-07-29 
00:03:07.819345 | orchestrator | 00:03:07.816 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-29 00:03:07.819349 | orchestrator | 00:03:07.816 STDOUT terraform:  } 2025-07-29 00:03:07.819352 | orchestrator | 00:03:07.816 STDOUT terraform:  + allowed_address_pairs { 2025-07-29 00:03:07.819356 | orchestrator | 00:03:07.816 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-29 00:03:07.819360 | orchestrator | 00:03:07.816 STDOUT terraform:  } 2025-07-29 00:03:07.819364 | orchestrator | 00:03:07.816 STDOUT terraform:  + binding (known after apply) 2025-07-29 00:03:07.819368 | orchestrator | 00:03:07.816 STDOUT terraform:  + fixed_ip { 2025-07-29 00:03:07.819371 | orchestrator | 00:03:07.816 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-07-29 00:03:07.819375 | orchestrator | 00:03:07.816 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-29 00:03:07.819379 | orchestrator | 00:03:07.816 STDOUT terraform:  } 2025-07-29 00:03:07.819383 | orchestrator | 00:03:07.816 STDOUT terraform:  } 2025-07-29 00:03:07.819386 | orchestrator | 00:03:07.816 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-07-29 00:03:07.819390 | orchestrator | 00:03:07.816 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-29 00:03:07.819397 | orchestrator | 00:03:07.816 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-29 00:03:07.819400 | orchestrator | 00:03:07.816 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-29 00:03:07.819404 | orchestrator | 00:03:07.816 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-29 00:03:07.819408 | orchestrator | 00:03:07.816 STDOUT terraform:  + all_tags = (known after apply) 2025-07-29 00:03:07.819412 | orchestrator | 00:03:07.816 STDOUT terraform:  + device_id = (known after apply) 2025-07-29 00:03:07.819416 | orchestrator | 00:03:07.816 STDOUT terraform:  + device_owner = (known after 
apply) 2025-07-29 00:03:07.819419 | orchestrator | 00:03:07.816 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-29 00:03:07.819426 | orchestrator | 00:03:07.816 STDOUT terraform:  + dns_name = (known after apply) 2025-07-29 00:03:07.819430 | orchestrator | 00:03:07.816 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.819434 | orchestrator | 00:03:07.816 STDOUT terraform:  + mac_address = (known after apply) 2025-07-29 00:03:07.819437 | orchestrator | 00:03:07.816 STDOUT terraform:  + network_id = (known after apply) 2025-07-29 00:03:07.819441 | orchestrator | 00:03:07.816 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-29 00:03:07.819445 | orchestrator | 00:03:07.816 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-29 00:03:07.819449 | orchestrator | 00:03:07.816 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.819452 | orchestrator | 00:03:07.817 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-29 00:03:07.819456 | orchestrator | 00:03:07.817 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-29 00:03:07.819460 | orchestrator | 00:03:07.817 STDOUT terraform:  + allowed_address_pairs { 2025-07-29 00:03:07.819464 | orchestrator | 00:03:07.817 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-29 00:03:07.819467 | orchestrator | 00:03:07.817 STDOUT terraform:  } 2025-07-29 00:03:07.819471 | orchestrator | 00:03:07.817 STDOUT terraform:  + allowed_address_pairs { 2025-07-29 00:03:07.819475 | orchestrator | 00:03:07.817 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-29 00:03:07.819479 | orchestrator | 00:03:07.817 STDOUT terraform:  } 2025-07-29 00:03:07.819483 | orchestrator | 00:03:07.817 STDOUT terraform:  + allowed_address_pairs { 2025-07-29 00:03:07.819486 | orchestrator | 00:03:07.817 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-29 00:03:07.819490 | orchestrator | 00:03:07.817 STDOUT terraform:  } 
2025-07-29 00:03:07.819494 | orchestrator | 00:03:07.817 STDOUT terraform:  + allowed_address_pairs { 2025-07-29 00:03:07.819498 | orchestrator | 00:03:07.817 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-29 00:03:07.819501 | orchestrator | 00:03:07.817 STDOUT terraform:  } 2025-07-29 00:03:07.819505 | orchestrator | 00:03:07.817 STDOUT terraform:  + binding (known after apply) 2025-07-29 00:03:07.819509 | orchestrator | 00:03:07.817 STDOUT terraform:  + fixed_ip { 2025-07-29 00:03:07.819513 | orchestrator | 00:03:07.817 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-07-29 00:03:07.819516 | orchestrator | 00:03:07.817 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-29 00:03:07.819520 | orchestrator | 00:03:07.817 STDOUT terraform:  } 2025-07-29 00:03:07.819524 | orchestrator | 00:03:07.817 STDOUT terraform:  } 2025-07-29 00:03:07.819528 | orchestrator | 00:03:07.817 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-07-29 00:03:07.819532 | orchestrator | 00:03:07.817 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-07-29 00:03:07.819536 | orchestrator | 00:03:07.817 STDOUT terraform:  + force_destroy = false 2025-07-29 00:03:07.819539 | orchestrator | 00:03:07.817 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.819546 | orchestrator | 00:03:07.817 STDOUT terraform:  + port_id = (known after apply) 2025-07-29 00:03:07.819552 | orchestrator | 00:03:07.817 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.819556 | orchestrator | 00:03:07.817 STDOUT terraform:  + router_id = (known after apply) 2025-07-29 00:03:07.819560 | orchestrator | 00:03:07.817 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-29 00:03:07.819563 | orchestrator | 00:03:07.817 STDOUT terraform:  } 2025-07-29 00:03:07.819567 | orchestrator | 00:03:07.817 STDOUT terraform:  # openstack_networking_router_v2.router will be 
created 2025-07-29 00:03:07.819571 | orchestrator | 00:03:07.817 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-07-29 00:03:07.819575 | orchestrator | 00:03:07.817 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-29 00:03:07.819579 | orchestrator | 00:03:07.817 STDOUT terraform:  + all_tags = (known after apply) 2025-07-29 00:03:07.819582 | orchestrator | 00:03:07.817 STDOUT terraform:  + availability_zone_hints = [ 2025-07-29 00:03:07.819587 | orchestrator | 00:03:07.817 STDOUT terraform:  + "nova", 2025-07-29 00:03:07.819591 | orchestrator | 00:03:07.817 STDOUT terraform:  ] 2025-07-29 00:03:07.819595 | orchestrator | 00:03:07.817 STDOUT terraform:  + distributed = (known after apply) 2025-07-29 00:03:07.819599 | orchestrator | 00:03:07.817 STDOUT terraform:  + enable_snat = (known after apply) 2025-07-29 00:03:07.819602 | orchestrator | 00:03:07.817 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-07-29 00:03:07.819611 | orchestrator | 00:03:07.817 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-07-29 00:03:07.819615 | orchestrator | 00:03:07.817 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.819619 | orchestrator | 00:03:07.817 STDOUT terraform:  + name = "testbed" 2025-07-29 00:03:07.819622 | orchestrator | 00:03:07.817 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.819626 | orchestrator | 00:03:07.817 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-29 00:03:07.819697 | orchestrator | 00:03:07.819 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-07-29 00:03:07.819722 | orchestrator | 00:03:07.819 STDOUT terraform:  } 2025-07-29 00:03:07.819805 | orchestrator | 00:03:07.819 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-07-29 00:03:07.819868 | orchestrator | 00:03:07.819 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-07-29 00:03:07.819901 | orchestrator | 00:03:07.819 STDOUT terraform:  + description = "ssh" 2025-07-29 00:03:07.820214 | orchestrator | 00:03:07.819 STDOUT terraform:  + direction = "ingress" 2025-07-29 00:03:07.820259 | orchestrator | 00:03:07.820 STDOUT terraform:  + ethertype = "IPv4" 2025-07-29 00:03:07.820304 | orchestrator | 00:03:07.820 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.820336 | orchestrator | 00:03:07.820 STDOUT terraform:  + port_range_max = 22 2025-07-29 00:03:07.820366 | orchestrator | 00:03:07.820 STDOUT terraform:  + port_range_min = 22 2025-07-29 00:03:07.820402 | orchestrator | 00:03:07.820 STDOUT terraform:  + protocol = "tcp" 2025-07-29 00:03:07.820444 | orchestrator | 00:03:07.820 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.820484 | orchestrator | 00:03:07.820 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-29 00:03:07.820527 | orchestrator | 00:03:07.820 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-29 00:03:07.820563 | orchestrator | 00:03:07.820 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-29 00:03:07.820604 | orchestrator | 00:03:07.820 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-29 00:03:07.820645 | orchestrator | 00:03:07.820 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-29 00:03:07.820665 | orchestrator | 00:03:07.820 STDOUT terraform:  } 2025-07-29 00:03:07.820722 | orchestrator | 00:03:07.820 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-07-29 00:03:07.820807 | orchestrator | 00:03:07.820 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-07-29 00:03:07.820845 | orchestrator | 00:03:07.820 STDOUT terraform:  + description = "wireguard" 2025-07-29 00:03:07.820881 | orchestrator 
| 00:03:07.820 STDOUT terraform:  + direction = "ingress" 2025-07-29 00:03:07.820915 | orchestrator | 00:03:07.820 STDOUT terraform:  + ethertype = "IPv4" 2025-07-29 00:03:07.820959 | orchestrator | 00:03:07.820 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.820992 | orchestrator | 00:03:07.820 STDOUT terraform:  + port_range_max = 51820 2025-07-29 00:03:07.821024 | orchestrator | 00:03:07.821 STDOUT terraform:  + port_range_min = 51820 2025-07-29 00:03:07.821056 | orchestrator | 00:03:07.821 STDOUT terraform:  + protocol = "udp" 2025-07-29 00:03:07.821099 | orchestrator | 00:03:07.821 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.821140 | orchestrator | 00:03:07.821 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-29 00:03:07.821389 | orchestrator | 00:03:07.821 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-29 00:03:07.821435 | orchestrator | 00:03:07.821 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-29 00:03:07.821481 | orchestrator | 00:03:07.821 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-29 00:03:07.821524 | orchestrator | 00:03:07.821 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-29 00:03:07.821545 | orchestrator | 00:03:07.821 STDOUT terraform:  } 2025-07-29 00:03:07.821605 | orchestrator | 00:03:07.821 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-07-29 00:03:07.821665 | orchestrator | 00:03:07.821 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-07-29 00:03:07.821700 | orchestrator | 00:03:07.821 STDOUT terraform:  + direction = "ingress" 2025-07-29 00:03:07.821733 | orchestrator | 00:03:07.821 STDOUT terraform:  + ethertype = "IPv4" 2025-07-29 00:03:07.821794 | orchestrator | 00:03:07.821 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.821826 | orchestrator | 
00:03:07.821 STDOUT terraform:  + protocol = "tcp" 2025-07-29 00:03:07.821869 | orchestrator | 00:03:07.821 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.821911 | orchestrator | 00:03:07.821 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-29 00:03:07.821954 | orchestrator | 00:03:07.821 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-29 00:03:07.821998 | orchestrator | 00:03:07.821 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-07-29 00:03:07.822062 | orchestrator | 00:03:07.822 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-29 00:03:07.822108 | orchestrator | 00:03:07.822 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-29 00:03:07.822128 | orchestrator | 00:03:07.822 STDOUT terraform:  } 2025-07-29 00:03:07.822186 | orchestrator | 00:03:07.822 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-07-29 00:03:07.822245 | orchestrator | 00:03:07.822 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-07-29 00:03:07.822283 | orchestrator | 00:03:07.822 STDOUT terraform:  + direction = "ingress" 2025-07-29 00:03:07.822316 | orchestrator | 00:03:07.822 STDOUT terraform:  + ethertype = "IPv4" 2025-07-29 00:03:07.822558 | orchestrator | 00:03:07.822 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.822596 | orchestrator | 00:03:07.822 STDOUT terraform:  + protocol = "udp" 2025-07-29 00:03:07.822639 | orchestrator | 00:03:07.822 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.822680 | orchestrator | 00:03:07.822 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-29 00:03:07.822722 | orchestrator | 00:03:07.822 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-29 00:03:07.822776 | orchestrator | 00:03:07.822 STDOUT terraform:  + remote_ip_prefix = 
"192.168.16.0/20" 2025-07-29 00:03:07.822819 | orchestrator | 00:03:07.822 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-29 00:03:07.822863 | orchestrator | 00:03:07.822 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-29 00:03:07.822884 | orchestrator | 00:03:07.822 STDOUT terraform:  } 2025-07-29 00:03:07.822941 | orchestrator | 00:03:07.822 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-07-29 00:03:07.823000 | orchestrator | 00:03:07.822 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-07-29 00:03:07.823036 | orchestrator | 00:03:07.823 STDOUT terraform:  + direction = "ingress" 2025-07-29 00:03:07.823067 | orchestrator | 00:03:07.823 STDOUT terraform:  + ethertype = "IPv4" 2025-07-29 00:03:07.823110 | orchestrator | 00:03:07.823 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.823141 | orchestrator | 00:03:07.823 STDOUT terraform:  + protocol = "icmp" 2025-07-29 00:03:07.823192 | orchestrator | 00:03:07.823 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.823234 | orchestrator | 00:03:07.823 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-29 00:03:07.823276 | orchestrator | 00:03:07.823 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-29 00:03:07.823316 | orchestrator | 00:03:07.823 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-29 00:03:07.823360 | orchestrator | 00:03:07.823 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-29 00:03:07.823402 | orchestrator | 00:03:07.823 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-29 00:03:07.823422 | orchestrator | 00:03:07.823 STDOUT terraform:  } 2025-07-29 00:03:07.823477 | orchestrator | 00:03:07.823 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-07-29 00:03:07.823533 | 
orchestrator | 00:03:07.823 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-07-29 00:03:07.823570 | orchestrator | 00:03:07.823 STDOUT terraform:  + direction = "ingress" 2025-07-29 00:03:07.823619 | orchestrator | 00:03:07.823 STDOUT terraform:  + ethertype = "IPv4" 2025-07-29 00:03:07.823664 | orchestrator | 00:03:07.823 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.823697 | orchestrator | 00:03:07.823 STDOUT terraform:  + protocol = "tcp" 2025-07-29 00:03:07.823740 | orchestrator | 00:03:07.823 STDOUT terraform:  + region = (known after apply) 2025-07-29 00:03:07.824050 | orchestrator | 00:03:07.824 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-29 00:03:07.824093 | orchestrator | 00:03:07.824 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-29 00:03:07.824128 | orchestrator | 00:03:07.824 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-29 00:03:07.824171 | orchestrator | 00:03:07.824 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-29 00:03:07.824213 | orchestrator | 00:03:07.824 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-29 00:03:07.824234 | orchestrator | 00:03:07.824 STDOUT terraform:  } 2025-07-29 00:03:07.824289 | orchestrator | 00:03:07.824 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-07-29 00:03:07.824347 | orchestrator | 00:03:07.824 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-07-29 00:03:07.824382 | orchestrator | 00:03:07.824 STDOUT terraform:  + direction = "ingress" 2025-07-29 00:03:07.824415 | orchestrator | 00:03:07.824 STDOUT terraform:  + ethertype = "IPv4" 2025-07-29 00:03:07.824458 | orchestrator | 00:03:07.824 STDOUT terraform:  + id = (known after apply) 2025-07-29 00:03:07.824489 | orchestrator | 00:03:07.824 STDOUT terraform:  + protocol = "udp" 
2025-07-29 00:03:07.824531 | orchestrator | 00:03:07.824 STDOUT terraform:  + region = (known after apply)
2025-07-29 00:03:07.824572 | orchestrator | 00:03:07.824 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-29 00:03:07.824614 | orchestrator | 00:03:07.824 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-29 00:03:07.824656 | orchestrator | 00:03:07.824 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-29 00:03:07.824697 | orchestrator | 00:03:07.824 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-29 00:03:07.824740 | orchestrator | 00:03:07.824 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-29 00:03:07.824790 | orchestrator | 00:03:07.824 STDOUT terraform:  }
2025-07-29 00:03:07.824847 | orchestrator | 00:03:07.824 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-07-29 00:03:07.824904 | orchestrator | 00:03:07.824 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-07-29 00:03:07.824939 | orchestrator | 00:03:07.824 STDOUT terraform:  + direction = "ingress"
2025-07-29 00:03:07.824971 | orchestrator | 00:03:07.824 STDOUT terraform:  + ethertype = "IPv4"
2025-07-29 00:03:07.825015 | orchestrator | 00:03:07.824 STDOUT terraform:  + id = (known after apply)
2025-07-29 00:03:07.825047 | orchestrator | 00:03:07.825 STDOUT terraform:  + protocol = "icmp"
2025-07-29 00:03:07.825092 | orchestrator | 00:03:07.825 STDOUT terraform:  + region = (known after apply)
2025-07-29 00:03:07.825135 | orchestrator | 00:03:07.825 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-29 00:03:07.825177 | orchestrator | 00:03:07.825 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-29 00:03:07.825214 | orchestrator | 00:03:07.825 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-29 00:03:07.825255 | orchestrator | 00:03:07.825 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-29 00:03:07.825543 | orchestrator | 00:03:07.825 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-29 00:03:07.825569 | orchestrator | 00:03:07.825 STDOUT terraform:  }
2025-07-29 00:03:07.825623 | orchestrator | 00:03:07.825 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-07-29 00:03:07.825677 | orchestrator | 00:03:07.825 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-07-29 00:03:07.825709 | orchestrator | 00:03:07.825 STDOUT terraform:  + description = "vrrp"
2025-07-29 00:03:07.825743 | orchestrator | 00:03:07.825 STDOUT terraform:  + direction = "ingress"
2025-07-29 00:03:07.825788 | orchestrator | 00:03:07.825 STDOUT terraform:  + ethertype = "IPv4"
2025-07-29 00:03:07.825853 | orchestrator | 00:03:07.825 STDOUT terraform:  + id = (known after apply)
2025-07-29 00:03:07.825885 | orchestrator | 00:03:07.825 STDOUT terraform:  + protocol = "112"
2025-07-29 00:03:07.825929 | orchestrator | 00:03:07.825 STDOUT terraform:  + region = (known after apply)
2025-07-29 00:03:07.825971 | orchestrator | 00:03:07.825 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-29 00:03:07.826031 | orchestrator | 00:03:07.825 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-29 00:03:07.826069 | orchestrator | 00:03:07.826 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-29 00:03:07.826119 | orchestrator | 00:03:07.826 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-29 00:03:07.826160 | orchestrator | 00:03:07.826 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-29 00:03:07.826179 | orchestrator | 00:03:07.826 STDOUT terraform:  }
2025-07-29 00:03:07.826234 | orchestrator | 00:03:07.826 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-07-29 00:03:07.826287 | orchestrator | 00:03:07.826 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-07-29 00:03:07.826321 | orchestrator | 00:03:07.826 STDOUT terraform:  + all_tags = (known after apply)
2025-07-29 00:03:07.826363 | orchestrator | 00:03:07.826 STDOUT terraform:  + description = "management security group"
2025-07-29 00:03:07.826398 | orchestrator | 00:03:07.826 STDOUT terraform:  + id = (known after apply)
2025-07-29 00:03:07.826627 | orchestrator | 00:03:07.826 STDOUT terraform:  + name = "testbed-management"
2025-07-29 00:03:07.826667 | orchestrator | 00:03:07.826 STDOUT terraform:  + region = (known after apply)
2025-07-29 00:03:07.826702 | orchestrator | 00:03:07.826 STDOUT terraform:  + stateful = (known after apply)
2025-07-29 00:03:07.826738 | orchestrator | 00:03:07.826 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-29 00:03:07.826771 | orchestrator | 00:03:07.826 STDOUT terraform:  }
2025-07-29 00:03:07.826825 | orchestrator | 00:03:07.826 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-07-29 00:03:07.826878 | orchestrator | 00:03:07.826 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-07-29 00:03:07.826917 | orchestrator | 00:03:07.826 STDOUT terraform:  + all_tags = (known after apply)
2025-07-29 00:03:07.826952 | orchestrator | 00:03:07.826 STDOUT terraform:  + description = "node security group"
2025-07-29 00:03:07.826990 | orchestrator | 00:03:07.826 STDOUT terraform:  + id = (known after apply)
2025-07-29 00:03:07.827033 | orchestrator | 00:03:07.827 STDOUT terraform:  + name = "testbed-node"
2025-07-29 00:03:07.827069 | orchestrator | 00:03:07.827 STDOUT terraform:  + region = (known after apply)
2025-07-29 00:03:07.827106 | orchestrator | 00:03:07.827 STDOUT terraform:  + stateful = (known after apply)
2025-07-29 00:03:07.827143 | orchestrator | 00:03:07.827 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-29 00:03:07.827164 | orchestrator | 00:03:07.827 STDOUT terraform:  }
2025-07-29 00:03:07.827216 | orchestrator | 00:03:07.827 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-07-29 00:03:07.827266 | orchestrator | 00:03:07.827 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-07-29 00:03:07.827306 | orchestrator | 00:03:07.827 STDOUT terraform:  + all_tags = (known after apply)
2025-07-29 00:03:07.827343 | orchestrator | 00:03:07.827 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-07-29 00:03:07.827382 | orchestrator | 00:03:07.827 STDOUT terraform:  + dns_nameservers = [
2025-07-29 00:03:07.827407 | orchestrator | 00:03:07.827 STDOUT terraform:  + "8.8.8.8",
2025-07-29 00:03:07.827432 | orchestrator | 00:03:07.827 STDOUT terraform:  + "9.9.9.9",
2025-07-29 00:03:07.827461 | orchestrator | 00:03:07.827 STDOUT terraform:  ]
2025-07-29 00:03:07.827489 | orchestrator | 00:03:07.827 STDOUT terraform:  + enable_dhcp = true
2025-07-29 00:03:07.827527 | orchestrator | 00:03:07.827 STDOUT terraform:  + gateway_ip = (known after apply)
2025-07-29 00:03:07.827565 | orchestrator | 00:03:07.827 STDOUT terraform:  + id = (known after apply)
2025-07-29 00:03:07.827593 | orchestrator | 00:03:07.827 STDOUT terraform:  + ip_version = 4
2025-07-29 00:03:07.827631 | orchestrator | 00:03:07.827 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-07-29 00:03:07.827669 | orchestrator | 00:03:07.827 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-07-29 00:03:07.827712 | orchestrator | 00:03:07.827 STDOUT terraform:  + name = "subnet-testbed-management"
2025-07-29 00:03:07.827777 | orchestrator | 00:03:07.827 STDOUT terraform:  + network_id = (known after apply)
2025-07-29 00:03:07.828061 | orchestrator | 00:03:07.827 STDOUT terraform:  + no_gateway = false
2025-07-29 00:03:07.828102 | orchestrator | 00:03:07.828 STDOUT terraform:  + region = (known after apply)
2025-07-29 00:03:07.828140 | orchestrator | 00:03:07.828 STDOUT terraform:  + service_types = (known after apply)
2025-07-29 00:03:07.828176 | orchestrator | 00:03:07.828 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-29 00:03:07.828201 | orchestrator | 00:03:07.828 STDOUT terraform:  + allocation_pool {
2025-07-29 00:03:07.828232 | orchestrator | 00:03:07.828 STDOUT terraform:  + end = "192.168.31.250"
2025-07-29 00:03:07.828262 | orchestrator | 00:03:07.828 STDOUT terraform:  + start = "192.168.31.200"
2025-07-29 00:03:07.828282 | orchestrator | 00:03:07.828 STDOUT terraform:  }
2025-07-29 00:03:07.828301 | orchestrator | 00:03:07.828 STDOUT terraform:  }
2025-07-29 00:03:07.828332 | orchestrator | 00:03:07.828 STDOUT terraform:  # terraform_data.image will be created
2025-07-29 00:03:07.828361 | orchestrator | 00:03:07.828 STDOUT terraform:  + resource "terraform_data" "image" {
2025-07-29 00:03:07.828390 | orchestrator | 00:03:07.828 STDOUT terraform:  + id = (known after apply)
2025-07-29 00:03:07.828416 | orchestrator | 00:03:07.828 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-07-29 00:03:07.828445 | orchestrator | 00:03:07.828 STDOUT terraform:  + output = (known after apply)
2025-07-29 00:03:07.828465 | orchestrator | 00:03:07.828 STDOUT terraform:  }
2025-07-29 00:03:07.828499 | orchestrator | 00:03:07.828 STDOUT terraform:  # terraform_data.image_node will be created
2025-07-29 00:03:07.828536 | orchestrator | 00:03:07.828 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-07-29 00:03:07.828567 | orchestrator | 00:03:07.828 STDOUT terraform:  + id = (known after apply)
2025-07-29 00:03:07.828593 | orchestrator | 00:03:07.828 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-07-29 00:03:07.828622 | orchestrator | 00:03:07.828 STDOUT terraform:  + output = (known after apply)
2025-07-29 00:03:07.828642 | orchestrator | 00:03:07.828 STDOUT terraform:  }
2025-07-29 00:03:07.828677 | orchestrator | 00:03:07.828 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-07-29 00:03:07.828697 | orchestrator | 00:03:07.828 STDOUT terraform: Changes to Outputs:
2025-07-29 00:03:07.828731 | orchestrator | 00:03:07.828 STDOUT terraform:  + manager_address = (sensitive value)
2025-07-29 00:03:07.828774 | orchestrator | 00:03:07.828 STDOUT terraform:  + private_key = (sensitive value)
2025-07-29 00:03:08.000386 | orchestrator | 00:03:08.000 STDOUT terraform: terraform_data.image: Creating...
2025-07-29 00:03:08.000469 | orchestrator | 00:03:08.000 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=56fb3f00-b9e5-5827-b8bc-33396b537f4e]
2025-07-29 00:03:08.001226 | orchestrator | 00:03:08.001 STDOUT terraform: terraform_data.image_node: Creating...
2025-07-29 00:03:08.001784 | orchestrator | 00:03:08.001 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=dfde3c2c-e1a2-00f0-6701-cf486b116765]
2025-07-29 00:03:08.012133 | orchestrator | 00:03:08.011 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-07-29 00:03:08.016941 | orchestrator | 00:03:08.015 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-07-29 00:03:08.022082 | orchestrator | 00:03:08.020 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-07-29 00:03:08.022115 | orchestrator | 00:03:08.021 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-07-29 00:03:08.025980 | orchestrator | 00:03:08.025 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-07-29 00:03:08.026462 | orchestrator | 00:03:08.026 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-07-29 00:03:08.026623 | orchestrator | 00:03:08.026 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-07-29 00:03:08.026876 | orchestrator | 00:03:08.026 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-07-29 00:03:08.028553 | orchestrator | 00:03:08.028 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-07-29 00:03:08.035594 | orchestrator | 00:03:08.035 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-07-29 00:03:08.447109 | orchestrator | 00:03:08.446 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-07-29 00:03:08.451616 | orchestrator | 00:03:08.451 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-07-29 00:03:08.467027 | orchestrator | 00:03:08.466 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-07-29 00:03:08.471228 | orchestrator | 00:03:08.471 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-07-29 00:03:08.662937 | orchestrator | 00:03:08.543 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-07-29 00:03:08.663010 | orchestrator | 00:03:08.551 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-07-29 00:03:09.143699 | orchestrator | 00:03:09.143 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=d37d53ba-c424-4ca5-b5e7-c57a9ccdeabc]
2025-07-29 00:03:09.153392 | orchestrator | 00:03:09.153 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-07-29 00:03:11.719555 | orchestrator | 00:03:11.719 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=8027c030-960e-4a8a-9751-19a4b577f9bf]
2025-07-29 00:03:11.730377 | orchestrator | 00:03:11.730 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-07-29 00:03:11.734798 | orchestrator | 00:03:11.734 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=0485018f-3fe9-4f25-8b22-82588b164e9e]
2025-07-29 00:03:11.741198 | orchestrator | 00:03:11.741 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=16d1ba37-2de0-45d8-9aa4-d45f889f34a3]
2025-07-29 00:03:11.748836 | orchestrator | 00:03:11.746 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-07-29 00:03:11.752877 | orchestrator | 00:03:11.750 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-07-29 00:03:11.756640 | orchestrator | 00:03:11.756 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=6c0b0a23-d9a7-48b1-8c64-e5d37e60ce96]
2025-07-29 00:03:11.759702 | orchestrator | 00:03:11.759 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=e838f985916030b5e42201f8911c922f2e567863]
2025-07-29 00:03:11.762841 | orchestrator | 00:03:11.762 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=926c30bf-2367-4094-ad64-36ebb6a831d5]
2025-07-29 00:03:11.768428 | orchestrator | 00:03:11.768 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-07-29 00:03:11.787450 | orchestrator | 00:03:11.787 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=956d39d7-e001-46ef-a843-06a79e594541]
2025-07-29 00:03:11.787503 | orchestrator | 00:03:11.787 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=b5bc8f13bea77ca93df40a468208f0fe1ddcade3]
2025-07-29 00:03:11.787514 | orchestrator | 00:03:11.787 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-07-29 00:03:11.789953 | orchestrator | 00:03:11.789 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=82b48d1c-8775-4cb2-966c-5616edee7deb]
2025-07-29 00:03:11.797108 | orchestrator | 00:03:11.797 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-07-29 00:03:11.797291 | orchestrator | 00:03:11.797 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-07-29 00:03:11.797420 | orchestrator | 00:03:11.797 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-07-29 00:03:11.798401 | orchestrator | 00:03:11.798 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-07-29 00:03:11.813618 | orchestrator | 00:03:11.813 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=19987853-4fae-4f70-a96c-4dea60f77e24]
2025-07-29 00:03:11.826517 | orchestrator | 00:03:11.826 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=e07f6824-79e8-4182-b82e-3eac6c81ca1f]
2025-07-29 00:03:12.604090 | orchestrator | 00:03:12.603 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=e8e24187-bf52-49da-82c1-e14cc280a56b]
2025-07-29 00:03:12.789196 | orchestrator | 00:03:12.788 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=e5378651-125a-46ea-a39d-58f8ed897f5e]
2025-07-29 00:03:12.798548 | orchestrator | 00:03:12.798 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-07-29 00:03:15.106600 | orchestrator | 00:03:15.106 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=22f7a4df-f15d-439a-8e21-5f4e8e3b496e]
2025-07-29 00:03:15.125793 | orchestrator | 00:03:15.125 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=eb06ccbe-c35e-4a26-a267-af16063a7b02]
2025-07-29 00:03:15.242925 | orchestrator | 00:03:15.242 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=a6666655-2073-4610-b2e2-c2472d0492ce]
2025-07-29 00:03:15.263177 | orchestrator | 00:03:15.262 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=d19b488c-3209-4c4a-bb0a-bdcc5f002057]
2025-07-29 00:03:15.268581 | orchestrator | 00:03:15.268 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=6a38bffd-dd6f-4988-b58e-1fda355803d5]
2025-07-29 00:03:15.359399 | orchestrator | 00:03:15.359 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=95f0fd78-3ec4-4cba-8c9e-e61633db91e3]
2025-07-29 00:03:16.555658 | orchestrator | 00:03:16.555 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 4s [id=6ab2172f-4655-4633-886b-2d0a2bea68d6]
2025-07-29 00:03:16.563741 | orchestrator | 00:03:16.563 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-07-29 00:03:16.564483 | orchestrator | 00:03:16.564 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-07-29 00:03:16.565231 | orchestrator | 00:03:16.565 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-07-29 00:03:16.839993 | orchestrator | 00:03:16.839 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=47cb039a-cbf3-4032-95f7-ad43581a5da6]
2025-07-29 00:03:16.850142 | orchestrator | 00:03:16.849 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-07-29 00:03:16.852071 | orchestrator | 00:03:16.851 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-07-29 00:03:16.858153 | orchestrator | 00:03:16.857 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-07-29 00:03:16.858213 | orchestrator | 00:03:16.858 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-07-29 00:03:16.860552 | orchestrator | 00:03:16.860 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-07-29 00:03:16.867265 | orchestrator | 00:03:16.867 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-07-29 00:03:17.038615 | orchestrator | 00:03:17.038 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=f080a47d-0937-4733-8f6d-83213df97a4b]
2025-07-29 00:03:17.216872 | orchestrator | 00:03:17.216 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=8023b2d2-8e95-48e6-b6fa-88d1cc845b13]
2025-07-29 00:03:17.404280 | orchestrator | 00:03:17.403 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=c76762b2-9aeb-4aa9-a155-90207212a86d]
2025-07-29 00:03:17.413832 | orchestrator | 00:03:17.413 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=d9a0554e-9141-4ede-b884-a62b4d5300e9]
2025-07-29 00:03:17.414157 | orchestrator | 00:03:17.413 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-07-29 00:03:17.416481 | orchestrator | 00:03:17.416 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-07-29 00:03:17.419250 | orchestrator | 00:03:17.419 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-07-29 00:03:17.422869 | orchestrator | 00:03:17.422 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-07-29 00:03:17.428455 | orchestrator | 00:03:17.428 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-07-29 00:03:17.429181 | orchestrator | 00:03:17.429 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-07-29 00:03:17.634588 | orchestrator | 00:03:17.634 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=71749751-dafe-4e11-9edd-b5f1fe26ce43]
2025-07-29 00:03:17.651522 | orchestrator | 00:03:17.651 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-07-29 00:03:17.680323 | orchestrator | 00:03:17.679 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=1a195fc9-5c4a-4ae4-8db6-dae063686d61]
2025-07-29 00:03:17.693222 | orchestrator | 00:03:17.692 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-07-29 00:03:17.834639 | orchestrator | 00:03:17.834 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=a89db6f3-3e85-4cc7-b00b-0b5eb15b83aa]
2025-07-29 00:03:17.849370 | orchestrator | 00:03:17.849 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-07-29 00:03:17.997258 | orchestrator | 00:03:17.996 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=1c45243d-6cfd-49f3-b69b-4e1edd883e14]
2025-07-29 00:03:18.016656 | orchestrator | 00:03:18.016 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-07-29 00:03:18.087450 | orchestrator | 00:03:18.087 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=f6455ccf-10a2-43cc-9d32-ba8a4d4a21a4]
2025-07-29 00:03:18.114088 | orchestrator | 00:03:18.113 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=70fc26b0-5220-4a9a-a265-840f0e9385f2]
2025-07-29 00:03:18.204347 | orchestrator | 00:03:18.203 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=62ef0d41-6fd8-495e-a051-7f48574d5bbd]
2025-07-29 00:03:18.388868 | orchestrator | 00:03:18.388 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 0s [id=c1b4c9ed-3eb2-42e3-aaef-6e976317a9f4]
2025-07-29 00:03:18.475132 | orchestrator | 00:03:18.474 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=1bab64b2-7a89-4b02-8c0e-cdd6a6cf4c50]
2025-07-29 00:03:18.503834 | orchestrator | 00:03:18.503 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 2s [id=a94dd7dd-b2d2-4a35-92b4-9f77a3faa69f]
2025-07-29 00:03:18.618904 | orchestrator | 00:03:18.618 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=2f081021-198a-4cbc-bd7f-3b769825b3bc]
2025-07-29 00:03:18.727629 | orchestrator | 00:03:18.727 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=0f42e152-ffac-449b-b7ad-2a39bc26dd44]
2025-07-29 00:03:18.801481 | orchestrator | 00:03:18.801 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=8d1ebf95-cfd2-4ffc-be1f-17615c8591fc]
2025-07-29 00:03:19.545340 | orchestrator | 00:03:19.544 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=47c1f8ea-3db8-447b-ba64-34d93c34da9c]
2025-07-29 00:03:19.568609 | orchestrator | 00:03:19.568 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-07-29 00:03:19.681982 | orchestrator | 00:03:19.585 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-07-29 00:03:19.682074 | orchestrator | 00:03:19.591 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-07-29 00:03:19.682084 | orchestrator | 00:03:19.592 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-07-29 00:03:19.682090 | orchestrator | 00:03:19.595 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-07-29 00:03:19.682097 | orchestrator | 00:03:19.605 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-07-29 00:03:19.682102 | orchestrator | 00:03:19.605 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-07-29 00:03:21.341666 | orchestrator | 00:03:21.341 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=a9f25a28-038f-4d47-8bec-dc3950b29ca0]
2025-07-29 00:03:21.355116 | orchestrator | 00:03:21.354 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-07-29 00:03:21.356004 | orchestrator | 00:03:21.355 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-07-29 00:03:21.358710 | orchestrator | 00:03:21.358 STDOUT terraform: local_file.inventory: Creating...
2025-07-29 00:03:21.360664 | orchestrator | 00:03:21.360 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=312d13a74b1baced3c993a99db7a79434788e58b]
2025-07-29 00:03:21.362493 | orchestrator | 00:03:21.362 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=fd5446ae0c6102f1caed1bd5cb2556ce799f341f]
2025-07-29 00:03:22.645913 | orchestrator | 00:03:22.645 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=a9f25a28-038f-4d47-8bec-dc3950b29ca0]
2025-07-29 00:03:29.594986 | orchestrator | 00:03:29.594 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-07-29 00:03:29.595110 | orchestrator | 00:03:29.594 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-07-29 00:03:29.595162 | orchestrator | 00:03:29.595 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-07-29 00:03:29.599124 | orchestrator | 00:03:29.598 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-07-29 00:03:29.606420 | orchestrator | 00:03:29.606 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-07-29 00:03:29.607653 | orchestrator | 00:03:29.607 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-07-29 00:03:39.595329 | orchestrator | 00:03:39.594 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-07-29 00:03:39.595514 | orchestrator | 00:03:39.595 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-07-29 00:03:39.595545 | orchestrator | 00:03:39.595 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-07-29 00:03:39.599836 | orchestrator | 00:03:39.599 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-07-29 00:03:39.607387 | orchestrator | 00:03:39.607 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-07-29 00:03:39.608506 | orchestrator | 00:03:39.608 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-07-29 00:03:49.596863 | orchestrator | 00:03:49.596 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-07-29 00:03:49.596996 | orchestrator | 00:03:49.596 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-07-29 00:03:49.597160 | orchestrator | 00:03:49.596 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-07-29 00:03:49.599907 | orchestrator | 00:03:49.599 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-07-29 00:03:49.608249 | orchestrator | 00:03:49.608 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-07-29 00:03:49.609339 | orchestrator | 00:03:49.609 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-07-29 00:03:50.487935 | orchestrator | 00:03:50.487 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 30s [id=9127af4b-cacb-478d-a97c-a6354e3434c7]
2025-07-29 00:03:50.506428 | orchestrator | 00:03:50.505 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=3fee582e-0e08-498b-bbe6-f030ef8e54c2]
2025-07-29 00:03:59.597232 | orchestrator | 00:03:59.596 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2025-07-29 00:03:59.597362 | orchestrator | 00:03:59.597 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2025-07-29 00:03:59.609477 | orchestrator | 00:03:59.609 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2025-07-29 00:03:59.609573 | orchestrator | 00:03:59.609 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2025-07-29 00:04:00.438825 | orchestrator | 00:04:00.438 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 40s [id=c479a99d-0912-4350-bd50-b091de4224c1]
2025-07-29 00:04:00.485333 | orchestrator | 00:04:00.484 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 40s [id=62383747-9aaf-418c-a0db-feeb01e85ff4]
2025-07-29 00:04:00.663736 | orchestrator | 00:04:00.663 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=6430e973-eee5-4650-a61d-d7ef878292ba]
2025-07-29 00:04:00.971137 | orchestrator | 00:04:00.970 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=094dc2da-f516-4772-b3a5-ccf5c7586701]
2025-07-29 00:04:00.994291 | orchestrator | 00:04:00.994 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-07-29 00:04:00.999627 | orchestrator | 00:04:00.999 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-07-29 00:04:00.999868 | orchestrator | 00:04:00.999 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=3553462147451838596]
2025-07-29 00:04:01.002480 | orchestrator | 00:04:01.002 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-07-29 00:04:01.003885 | orchestrator | 00:04:01.003 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-07-29 00:04:01.004300 | orchestrator | 00:04:01.004 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-07-29 00:04:01.004657 | orchestrator | 00:04:01.004 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-07-29 00:04:01.005087 | orchestrator | 00:04:01.004 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-07-29 00:04:01.005462 | orchestrator | 00:04:01.005 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-07-29 00:04:01.013630 | orchestrator | 00:04:01.013 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-07-29 00:04:01.015361 | orchestrator | 00:04:01.015 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-07-29 00:04:01.042665 | orchestrator | 00:04:01.042 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-07-29 00:04:04.407072 | orchestrator | 00:04:04.406 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=3fee582e-0e08-498b-bbe6-f030ef8e54c2/82b48d1c-8775-4cb2-966c-5616edee7deb]
2025-07-29 00:04:04.447603 | orchestrator | 00:04:04.447 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=9127af4b-cacb-478d-a97c-a6354e3434c7/956d39d7-e001-46ef-a843-06a79e594541]
2025-07-29 00:04:04.453011 | orchestrator | 00:04:04.452 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=094dc2da-f516-4772-b3a5-ccf5c7586701/8027c030-960e-4a8a-9751-19a4b577f9bf]
2025-07-29 00:04:04.485225 | orchestrator | 00:04:04.484 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=3fee582e-0e08-498b-bbe6-f030ef8e54c2/19987853-4fae-4f70-a96c-4dea60f77e24]
2025-07-29 00:04:04.492461 | orchestrator | 00:04:04.492 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=9127af4b-cacb-478d-a97c-a6354e3434c7/6c0b0a23-d9a7-48b1-8c64-e5d37e60ce96]
2025-07-29 00:04:04.497842 | orchestrator | 00:04:04.497 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=094dc2da-f516-4772-b3a5-ccf5c7586701/0485018f-3fe9-4f25-8b22-82588b164e9e]
2025-07-29 00:04:07.180763 | orchestrator | 00:04:07.180 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=9127af4b-cacb-478d-a97c-a6354e3434c7/926c30bf-2367-4094-ad64-36ebb6a831d5]
2025-07-29 00:04:10.579486 | orchestrator | 00:04:10.578 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=094dc2da-f516-4772-b3a5-ccf5c7586701/e07f6824-79e8-4182-b82e-3eac6c81ca1f]
2025-07-29 00:04:10.620453 | orchestrator | 00:04:10.619 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=3fee582e-0e08-498b-bbe6-f030ef8e54c2/16d1ba37-2de0-45d8-9aa4-d45f889f34a3]
2025-07-29 00:04:11.043887 | orchestrator | 00:04:11.043 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-07-29 00:04:21.045173 | orchestrator | 00:04:21.044 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-07-29 00:04:21.660950 | orchestrator | 00:04:21.660 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=235f3c42-5812-491c-ac7b-9d58e181dc40]
2025-07-29 00:04:21.676821 | orchestrator | 00:04:21.676 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-07-29 00:04:21.676920 | orchestrator | 00:04:21.676 STDOUT terraform: Outputs:
2025-07-29 00:04:21.676971 | orchestrator | 00:04:21.676 STDOUT terraform: manager_address = 
2025-07-29 00:04:21.676996 | orchestrator | 00:04:21.676 STDOUT terraform: private_key = 
2025-07-29 00:04:21.764763 | orchestrator | ok: Runtime: 0:01:23.248975
2025-07-29 00:04:21.795069 | 
2025-07-29 00:04:21.795196 | TASK [Create infrastructure (stable)]
2025-07-29 00:04:22.329359 | orchestrator | skipping: Conditional result was False
2025-07-29 00:04:22.348758 | 
2025-07-29 00:04:22.348914 | TASK [Fetch manager address]
2025-07-29 00:04:22.784395 | orchestrator | ok
2025-07-29 00:04:22.795188 | 
2025-07-29 00:04:22.795363 | TASK [Set manager_host address]
2025-07-29 00:04:22.884136 | orchestrator | ok
2025-07-29 00:04:22.892958 | 
2025-07-29 00:04:22.893065 | LOOP [Update ansible collections]
2025-07-29 00:04:24.179241 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-29 00:04:24.179636 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-07-29 00:04:24.179685 | orchestrator | Starting galaxy collection install process
2025-07-29 00:04:24.179717 | orchestrator | Process install dependency map
2025-07-29 00:04:24.179745 | orchestrator | Starting collection install process
2025-07-29 00:04:24.179770 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons'
2025-07-29 00:04:24.179799 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons
2025-07-29 00:04:24.179831 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-07-29 00:04:24.179891 | orchestrator | ok: Item: commons Runtime: 0:00:00.969376
2025-07-29 00:04:25.112779 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-07-29 00:04:25.112926 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-29 00:04:25.113731 | orchestrator | Starting galaxy collection install process
2025-07-29 00:04:25.113956 | orchestrator | Process install dependency map
2025-07-29 00:04:25.114044 | orchestrator | Starting collection install process
2025-07-29 00:04:25.114108 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services'
2025-07-29 00:04:25.114169 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services
2025-07-29 00:04:25.114248 | orchestrator | osism.services:999.0.0 was installed successfully
2025-07-29 00:04:25.114383 | orchestrator | ok: Item: services Runtime: 0:00:00.678531
2025-07-29 00:04:25.139254 | 
2025-07-29 00:04:25.139469 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-07-29 00:04:35.901164 | orchestrator | ok
2025-07-29 00:04:35.912624 | 
2025-07-29 00:04:35.912769 | TASK [Wait a little longer for the manager so that 
everything is ready] 2025-07-29 00:05:35.954384 | orchestrator | ok 2025-07-29 00:05:35.963772 | 2025-07-29 00:05:35.963885 | TASK [Fetch manager ssh hostkey] 2025-07-29 00:05:37.534120 | orchestrator | Output suppressed because no_log was given 2025-07-29 00:05:37.548555 | 2025-07-29 00:05:37.548722 | TASK [Get ssh keypair from terraform environment] 2025-07-29 00:05:38.084484 | orchestrator | ok: Runtime: 0:00:00.011434 2025-07-29 00:05:38.099799 | 2025-07-29 00:05:38.099953 | TASK [Point out that the following task takes some time and does not give any output] 2025-07-29 00:05:38.145730 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-07-29 00:05:38.155441 | 2025-07-29 00:05:38.155562 | TASK [Run manager part 0] 2025-07-29 00:05:39.209251 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-29 00:05:39.266964 | orchestrator | 2025-07-29 00:05:39.267024 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-07-29 00:05:39.267034 | orchestrator | 2025-07-29 00:05:39.267048 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-07-29 00:05:41.097236 | orchestrator | ok: [testbed-manager] 2025-07-29 00:05:41.097336 | orchestrator | 2025-07-29 00:05:41.097391 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-07-29 00:05:41.097414 | orchestrator | 2025-07-29 00:05:41.097437 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-29 00:05:43.013226 | orchestrator | ok: [testbed-manager] 2025-07-29 00:05:43.013285 | orchestrator | 2025-07-29 00:05:43.013294 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-07-29 00:05:43.725628 | 
orchestrator | ok: [testbed-manager] 2025-07-29 00:05:43.725683 | orchestrator | 2025-07-29 00:05:43.725691 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-07-29 00:05:43.777655 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:05:43.777704 | orchestrator | 2025-07-29 00:05:43.777713 | orchestrator | TASK [Update package cache] **************************************************** 2025-07-29 00:05:43.819298 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:05:43.819345 | orchestrator | 2025-07-29 00:05:43.819352 | orchestrator | TASK [Install required packages] *********************************************** 2025-07-29 00:05:43.857462 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:05:43.857518 | orchestrator | 2025-07-29 00:05:43.857526 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-07-29 00:05:43.883638 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:05:43.883685 | orchestrator | 2025-07-29 00:05:43.883691 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-07-29 00:05:43.919130 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:05:43.919177 | orchestrator | 2025-07-29 00:05:43.919186 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-07-29 00:05:43.950181 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:05:43.950231 | orchestrator | 2025-07-29 00:05:43.950239 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-07-29 00:05:43.979274 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:05:43.979308 | orchestrator | 2025-07-29 00:05:43.979314 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-07-29 00:05:44.774031 | orchestrator | changed: [testbed-manager] 2025-07-29 00:05:44.774108 | 
orchestrator | 2025-07-29 00:05:44.774118 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-07-29 00:08:19.522777 | orchestrator | changed: [testbed-manager] 2025-07-29 00:08:19.522843 | orchestrator | 2025-07-29 00:08:19.522858 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-07-29 00:09:43.515133 | orchestrator | changed: [testbed-manager] 2025-07-29 00:09:43.515179 | orchestrator | 2025-07-29 00:09:43.515188 | orchestrator | TASK [Install required packages] *********************************************** 2025-07-29 00:10:12.197280 | orchestrator | changed: [testbed-manager] 2025-07-29 00:10:12.197357 | orchestrator | 2025-07-29 00:10:12.197375 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-07-29 00:10:21.050544 | orchestrator | changed: [testbed-manager] 2025-07-29 00:10:21.050688 | orchestrator | 2025-07-29 00:10:21.050703 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-07-29 00:10:21.089890 | orchestrator | ok: [testbed-manager] 2025-07-29 00:10:21.089940 | orchestrator | 2025-07-29 00:10:21.089950 | orchestrator | TASK [Get current user] ******************************************************** 2025-07-29 00:10:21.894431 | orchestrator | ok: [testbed-manager] 2025-07-29 00:10:21.894496 | orchestrator | 2025-07-29 00:10:21.894514 | orchestrator | TASK [Create venv directory] *************************************************** 2025-07-29 00:10:22.652657 | orchestrator | changed: [testbed-manager] 2025-07-29 00:10:22.652702 | orchestrator | 2025-07-29 00:10:22.652711 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-07-29 00:10:29.005533 | orchestrator | changed: [testbed-manager] 2025-07-29 00:10:29.005574 | orchestrator | 2025-07-29 00:10:29.005596 | orchestrator | TASK [Install ansible-core in 
venv] ******************************************** 2025-07-29 00:10:34.888236 | orchestrator | changed: [testbed-manager] 2025-07-29 00:10:34.888336 | orchestrator | 2025-07-29 00:10:34.888355 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-07-29 00:10:37.615501 | orchestrator | changed: [testbed-manager] 2025-07-29 00:10:37.616329 | orchestrator | 2025-07-29 00:10:37.616353 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-07-29 00:10:39.407257 | orchestrator | changed: [testbed-manager] 2025-07-29 00:10:39.407318 | orchestrator | 2025-07-29 00:10:39.407328 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-07-29 00:10:40.580252 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-07-29 00:10:40.580330 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-07-29 00:10:40.580345 | orchestrator | 2025-07-29 00:10:40.580357 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-07-29 00:10:40.625588 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-07-29 00:10:40.625678 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-07-29 00:10:40.625699 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-07-29 00:10:40.625715 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-07-29 00:10:44.569967 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-07-29 00:10:44.570096 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-07-29 00:10:44.570114 | orchestrator | 2025-07-29 00:10:44.570127 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-07-29 00:10:45.157249 | orchestrator | changed: [testbed-manager] 2025-07-29 00:10:45.157288 | orchestrator | 2025-07-29 00:10:45.157296 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-07-29 00:11:05.105794 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-07-29 00:11:05.106013 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-07-29 00:11:05.106092 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-07-29 00:11:05.106106 | orchestrator | 2025-07-29 00:11:05.106120 | orchestrator | TASK [Install local collections] *********************************************** 2025-07-29 00:11:07.445665 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-07-29 00:11:07.445702 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-07-29 00:11:07.445707 | orchestrator | 2025-07-29 00:11:07.445712 | orchestrator | PLAY [Create operator user] **************************************************** 2025-07-29 00:11:07.445718 | orchestrator | 2025-07-29 00:11:07.445722 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-29 00:11:08.858046 | orchestrator | ok: [testbed-manager] 2025-07-29 00:11:08.858081 | orchestrator | 2025-07-29 00:11:08.858088 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-07-29 00:11:08.906228 | orchestrator | ok: [testbed-manager] 2025-07-29 00:11:08.906264 | 
orchestrator | 2025-07-29 00:11:08.906271 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-07-29 00:11:08.975608 | orchestrator | ok: [testbed-manager] 2025-07-29 00:11:08.975646 | orchestrator | 2025-07-29 00:11:08.975652 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-07-29 00:11:09.815352 | orchestrator | changed: [testbed-manager] 2025-07-29 00:11:09.815390 | orchestrator | 2025-07-29 00:11:09.815395 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-07-29 00:11:10.500496 | orchestrator | changed: [testbed-manager] 2025-07-29 00:11:10.500539 | orchestrator | 2025-07-29 00:11:10.500547 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-07-29 00:11:11.912564 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-07-29 00:11:11.912608 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-07-29 00:11:11.912616 | orchestrator | 2025-07-29 00:11:11.912631 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-07-29 00:11:13.268796 | orchestrator | changed: [testbed-manager] 2025-07-29 00:11:13.268876 | orchestrator | 2025-07-29 00:11:13.268888 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-07-29 00:11:15.044922 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-07-29 00:11:15.044979 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-07-29 00:11:15.044992 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-07-29 00:11:15.045003 | orchestrator | 2025-07-29 00:11:15.045015 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-07-29 00:11:15.109096 | orchestrator | skipping: 
[testbed-manager] 2025-07-29 00:11:15.109134 | orchestrator | 2025-07-29 00:11:15.109141 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-07-29 00:11:15.665195 | orchestrator | changed: [testbed-manager] 2025-07-29 00:11:15.665237 | orchestrator | 2025-07-29 00:11:15.665247 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-07-29 00:11:15.735602 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:11:15.735649 | orchestrator | 2025-07-29 00:11:15.735657 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-07-29 00:11:16.605947 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-29 00:11:16.606049 | orchestrator | changed: [testbed-manager] 2025-07-29 00:11:16.606068 | orchestrator | 2025-07-29 00:11:16.606081 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-07-29 00:11:16.645585 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:11:16.645647 | orchestrator | 2025-07-29 00:11:16.645661 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-07-29 00:11:16.677386 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:11:16.677448 | orchestrator | 2025-07-29 00:11:16.677462 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-07-29 00:11:16.715825 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:11:16.715886 | orchestrator | 2025-07-29 00:11:16.715899 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-07-29 00:11:16.772051 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:11:16.772135 | orchestrator | 2025-07-29 00:11:16.772161 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-07-29 00:11:17.500074 | orchestrator 
| ok: [testbed-manager] 2025-07-29 00:11:17.500111 | orchestrator | 2025-07-29 00:11:17.500117 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-07-29 00:11:17.500122 | orchestrator | 2025-07-29 00:11:17.500126 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-29 00:11:18.935034 | orchestrator | ok: [testbed-manager] 2025-07-29 00:11:18.935070 | orchestrator | 2025-07-29 00:11:18.935075 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-07-29 00:11:19.924073 | orchestrator | changed: [testbed-manager] 2025-07-29 00:11:19.924111 | orchestrator | 2025-07-29 00:11:19.924117 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-29 00:11:19.924123 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-07-29 00:11:19.924127 | orchestrator | 2025-07-29 00:11:20.422060 | orchestrator | ok: Runtime: 0:05:41.556291 2025-07-29 00:11:20.439483 | 2025-07-29 00:11:20.439639 | TASK [Point out that the log in on the manager is now possible] 2025-07-29 00:11:20.478117 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-07-29 00:11:20.490207 | 2025-07-29 00:11:20.490352 | TASK [Point out that the following task takes some time and does not give any output] 2025-07-29 00:11:20.540124 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-07-29 00:11:20.549257 | 2025-07-29 00:11:20.549389 | TASK [Run manager part 1 + 2] 2025-07-29 00:11:21.419125 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-29 00:11:21.475291 | orchestrator | 2025-07-29 00:11:21.475343 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-07-29 00:11:21.475350 | orchestrator | 2025-07-29 00:11:21.475362 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-29 00:11:24.098659 | orchestrator | ok: [testbed-manager] 2025-07-29 00:11:24.098719 | orchestrator | 2025-07-29 00:11:24.098786 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-07-29 00:11:24.127520 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:11:24.127565 | orchestrator | 2025-07-29 00:11:24.127572 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-07-29 00:11:24.169541 | orchestrator | ok: [testbed-manager] 2025-07-29 00:11:24.169593 | orchestrator | 2025-07-29 00:11:24.169602 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-07-29 00:11:24.210044 | orchestrator | ok: [testbed-manager] 2025-07-29 00:11:24.210101 | orchestrator | 2025-07-29 00:11:24.210112 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-07-29 00:11:24.276133 | orchestrator | ok: [testbed-manager] 2025-07-29 00:11:24.276189 | orchestrator | 2025-07-29 00:11:24.276198 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-29 00:11:24.343338 | orchestrator | ok: [testbed-manager] 2025-07-29 00:11:24.343407 | orchestrator | 2025-07-29 00:11:24.343421 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-29 00:11:24.401517 | 
orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-07-29 00:11:24.401569 | orchestrator | 2025-07-29 00:11:24.401576 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-07-29 00:11:25.174548 | orchestrator | ok: [testbed-manager] 2025-07-29 00:11:25.174621 | orchestrator | 2025-07-29 00:11:25.174631 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-29 00:11:25.225323 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:11:25.225378 | orchestrator | 2025-07-29 00:11:25.225385 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-29 00:11:26.716338 | orchestrator | changed: [testbed-manager] 2025-07-29 00:11:26.716392 | orchestrator | 2025-07-29 00:11:26.716402 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-07-29 00:11:27.319638 | orchestrator | ok: [testbed-manager] 2025-07-29 00:11:27.319715 | orchestrator | 2025-07-29 00:11:27.319724 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-29 00:11:28.541502 | orchestrator | changed: [testbed-manager] 2025-07-29 00:11:28.541577 | orchestrator | 2025-07-29 00:11:28.541593 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-07-29 00:11:44.507668 | orchestrator | changed: [testbed-manager] 2025-07-29 00:11:44.507869 | orchestrator | 2025-07-29 00:11:44.507891 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-07-29 00:11:45.215047 | orchestrator | ok: [testbed-manager] 2025-07-29 00:11:45.215141 | orchestrator | 2025-07-29 00:11:45.215160 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-07-29 00:11:45.272806 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:11:45.272881 | orchestrator | 2025-07-29 00:11:45.272927 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-07-29 00:11:46.207684 | orchestrator | changed: [testbed-manager] 2025-07-29 00:11:46.207801 | orchestrator | 2025-07-29 00:11:46.207822 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-07-29 00:11:47.185531 | orchestrator | changed: [testbed-manager] 2025-07-29 00:11:47.185586 | orchestrator | 2025-07-29 00:11:47.185594 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-07-29 00:11:47.736699 | orchestrator | changed: [testbed-manager] 2025-07-29 00:11:47.736822 | orchestrator | 2025-07-29 00:11:47.736838 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-07-29 00:11:47.777235 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-07-29 00:11:47.777300 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-07-29 00:11:47.777306 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-07-29 00:11:47.777311 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-07-29 00:11:50.210755 | orchestrator | changed: [testbed-manager] 2025-07-29 00:11:50.210831 | orchestrator | 2025-07-29 00:11:50.210843 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-07-29 00:11:59.241308 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-07-29 00:11:59.241365 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-07-29 00:11:59.241379 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-07-29 00:11:59.241390 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-07-29 00:11:59.241405 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-07-29 00:11:59.241416 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-07-29 00:11:59.241427 | orchestrator | 2025-07-29 00:11:59.241439 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-07-29 00:12:00.302269 | orchestrator | changed: [testbed-manager] 2025-07-29 00:12:00.302310 | orchestrator | 2025-07-29 00:12:00.302318 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-07-29 00:12:00.348193 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:12:00.348442 | orchestrator | 2025-07-29 00:12:00.348471 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-07-29 00:12:03.569254 | orchestrator | changed: [testbed-manager] 2025-07-29 00:12:03.569322 | orchestrator | 2025-07-29 00:12:03.569330 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-07-29 00:12:03.613438 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:12:03.613528 | orchestrator | 2025-07-29 00:12:03.613545 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-07-29 00:13:38.713937 | orchestrator | changed: [testbed-manager] 2025-07-29 
00:13:38.713998 | orchestrator | 2025-07-29 00:13:38.714006 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-07-29 00:13:39.901345 | orchestrator | ok: [testbed-manager] 2025-07-29 00:13:39.901383 | orchestrator | 2025-07-29 00:13:39.901390 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-29 00:13:39.901397 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-07-29 00:13:39.901402 | orchestrator | 2025-07-29 00:13:40.186321 | orchestrator | ok: Runtime: 0:02:19.127071 2025-07-29 00:13:40.203010 | 2025-07-29 00:13:40.203176 | TASK [Reboot manager] 2025-07-29 00:13:41.741236 | orchestrator | ok: Runtime: 0:00:00.976597 2025-07-29 00:13:41.757891 | 2025-07-29 00:13:41.758048 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-07-29 00:13:56.336254 | orchestrator | ok 2025-07-29 00:13:56.343973 | 2025-07-29 00:13:56.344107 | TASK [Wait a little longer for the manager so that everything is ready] 2025-07-29 00:14:56.385357 | orchestrator | ok 2025-07-29 00:14:56.395320 | 2025-07-29 00:14:56.395456 | TASK [Deploy manager + bootstrap nodes] 2025-07-29 00:14:59.169129 | orchestrator | 2025-07-29 00:14:59.169327 | orchestrator | # DEPLOY MANAGER 2025-07-29 00:14:59.169350 | orchestrator | 2025-07-29 00:14:59.169364 | orchestrator | + set -e 2025-07-29 00:14:59.169377 | orchestrator | + echo 2025-07-29 00:14:59.169391 | orchestrator | + echo '# DEPLOY MANAGER' 2025-07-29 00:14:59.169408 | orchestrator | + echo 2025-07-29 00:14:59.169457 | orchestrator | + cat /opt/manager-vars.sh 2025-07-29 00:14:59.173150 | orchestrator | export NUMBER_OF_NODES=6 2025-07-29 00:14:59.173269 | orchestrator | 2025-07-29 00:14:59.173288 | orchestrator | export CEPH_VERSION=reef 2025-07-29 00:14:59.173302 | orchestrator | export CONFIGURATION_VERSION=main 2025-07-29 00:14:59.173315 | orchestrator 
| export MANAGER_VERSION=latest 2025-07-29 00:14:59.173343 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-07-29 00:14:59.173355 | orchestrator | 2025-07-29 00:14:59.173373 | orchestrator | export ARA=false 2025-07-29 00:14:59.173385 | orchestrator | export DEPLOY_MODE=manager 2025-07-29 00:14:59.173402 | orchestrator | export TEMPEST=true 2025-07-29 00:14:59.173414 | orchestrator | export IS_ZUUL=true 2025-07-29 00:14:59.173425 | orchestrator | 2025-07-29 00:14:59.173443 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.1 2025-07-29 00:14:59.173454 | orchestrator | export EXTERNAL_API=false 2025-07-29 00:14:59.173465 | orchestrator | 2025-07-29 00:14:59.173476 | orchestrator | export IMAGE_USER=ubuntu 2025-07-29 00:14:59.173490 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-07-29 00:14:59.173501 | orchestrator | 2025-07-29 00:14:59.173511 | orchestrator | export CEPH_STACK=ceph-ansible 2025-07-29 00:14:59.173534 | orchestrator | 2025-07-29 00:14:59.173546 | orchestrator | + echo 2025-07-29 00:14:59.173564 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-29 00:14:59.174107 | orchestrator | ++ export INTERACTIVE=false 2025-07-29 00:14:59.174140 | orchestrator | ++ INTERACTIVE=false 2025-07-29 00:14:59.174162 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-29 00:14:59.174178 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-29 00:14:59.174204 | orchestrator | + source /opt/manager-vars.sh 2025-07-29 00:14:59.174293 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-29 00:14:59.174309 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-29 00:14:59.174320 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-29 00:14:59.174343 | orchestrator | ++ CEPH_VERSION=reef 2025-07-29 00:14:59.174363 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-29 00:14:59.174382 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-29 00:14:59.174398 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-29 00:14:59.174409 | 
orchestrator | ++ MANAGER_VERSION=latest 2025-07-29 00:14:59.174420 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-29 00:14:59.174442 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-29 00:14:59.174453 | orchestrator | ++ export ARA=false 2025-07-29 00:14:59.174473 | orchestrator | ++ ARA=false 2025-07-29 00:14:59.174493 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-29 00:14:59.174510 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-29 00:14:59.174521 | orchestrator | ++ export TEMPEST=true 2025-07-29 00:14:59.174532 | orchestrator | ++ TEMPEST=true 2025-07-29 00:14:59.174543 | orchestrator | ++ export IS_ZUUL=true 2025-07-29 00:14:59.174553 | orchestrator | ++ IS_ZUUL=true 2025-07-29 00:14:59.174564 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.1 2025-07-29 00:14:59.174582 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.1 2025-07-29 00:14:59.174594 | orchestrator | ++ export EXTERNAL_API=false 2025-07-29 00:14:59.174605 | orchestrator | ++ EXTERNAL_API=false 2025-07-29 00:14:59.174615 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-29 00:14:59.174626 | orchestrator | ++ IMAGE_USER=ubuntu 2025-07-29 00:14:59.174637 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-29 00:14:59.174651 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-29 00:14:59.174663 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-29 00:14:59.174674 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-29 00:14:59.174685 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-07-29 00:14:59.248394 | orchestrator | + docker version 2025-07-29 00:14:59.543040 | orchestrator | Client: Docker Engine - Community 2025-07-29 00:14:59.543159 | orchestrator | Version: 27.5.1 2025-07-29 00:14:59.543173 | orchestrator | API version: 1.47 2025-07-29 00:14:59.543188 | orchestrator | Go version: go1.22.11 2025-07-29 00:14:59.543199 | orchestrator | Git commit: 9f9e405 2025-07-29 00:14:59.543210 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-07-29 00:14:59.543223 | orchestrator | OS/Arch: linux/amd64 2025-07-29 00:14:59.543234 | orchestrator | Context: default 2025-07-29 00:14:59.543245 | orchestrator | 2025-07-29 00:14:59.543256 | orchestrator | Server: Docker Engine - Community 2025-07-29 00:14:59.543267 | orchestrator | Engine: 2025-07-29 00:14:59.543279 | orchestrator | Version: 27.5.1 2025-07-29 00:14:59.543290 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-07-29 00:14:59.543333 | orchestrator | Go version: go1.22.11 2025-07-29 00:14:59.543345 | orchestrator | Git commit: 4c9b3b0 2025-07-29 00:14:59.543355 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-07-29 00:14:59.543366 | orchestrator | OS/Arch: linux/amd64 2025-07-29 00:14:59.543377 | orchestrator | Experimental: false 2025-07-29 00:14:59.543387 | orchestrator | containerd: 2025-07-29 00:14:59.543398 | orchestrator | Version: 1.7.27 2025-07-29 00:14:59.543409 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-07-29 00:14:59.543420 | orchestrator | runc: 2025-07-29 00:14:59.543431 | orchestrator | Version: 1.2.5 2025-07-29 00:14:59.543442 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-07-29 00:14:59.543453 | orchestrator | docker-init: 2025-07-29 00:14:59.543464 | orchestrator | Version: 0.19.0 2025-07-29 00:14:59.543475 | orchestrator | GitCommit: de40ad0 2025-07-29 00:14:59.546767 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-07-29 00:14:59.557791 | orchestrator | + set -e 2025-07-29 00:14:59.557843 | orchestrator | + source /opt/manager-vars.sh 2025-07-29 00:14:59.557865 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-29 00:14:59.557897 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-29 00:14:59.557917 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-29 00:14:59.557938 | orchestrator | ++ CEPH_VERSION=reef 2025-07-29 00:14:59.557956 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-29 
00:14:59.557976 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-29 00:14:59.557994 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-29 00:14:59.558014 | orchestrator | ++ MANAGER_VERSION=latest 2025-07-29 00:14:59.558101 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-29 00:14:59.558122 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-29 00:14:59.558142 | orchestrator | ++ export ARA=false 2025-07-29 00:14:59.558163 | orchestrator | ++ ARA=false 2025-07-29 00:14:59.558192 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-29 00:14:59.558208 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-29 00:14:59.558239 | orchestrator | ++ export TEMPEST=true 2025-07-29 00:14:59.558255 | orchestrator | ++ TEMPEST=true 2025-07-29 00:14:59.558266 | orchestrator | ++ export IS_ZUUL=true 2025-07-29 00:14:59.558277 | orchestrator | ++ IS_ZUUL=true 2025-07-29 00:14:59.558296 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.1 2025-07-29 00:14:59.558315 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.1 2025-07-29 00:14:59.558343 | orchestrator | ++ export EXTERNAL_API=false 2025-07-29 00:14:59.558362 | orchestrator | ++ EXTERNAL_API=false 2025-07-29 00:14:59.558375 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-29 00:14:59.558385 | orchestrator | ++ IMAGE_USER=ubuntu 2025-07-29 00:14:59.558396 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-29 00:14:59.558406 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-29 00:14:59.558422 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-29 00:14:59.558440 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-29 00:14:59.558459 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-29 00:14:59.558478 | orchestrator | ++ export INTERACTIVE=false 2025-07-29 00:14:59.558489 | orchestrator | ++ INTERACTIVE=false 2025-07-29 00:14:59.558500 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-29 00:14:59.558516 | orchestrator | ++ OSISM_APPLY_RETRY=1 
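The `set -x` trace above shows `/opt/manager-vars.sh` being sourced twice (once by the outer deploy wrapper, once by `000-manager.sh`). As a reading aid, here is a minimal sketch of that vars file reconstructed purely from the exported values visible in the trace; the real file may contain additional variables or comments.

```shell
# Sketch of /opt/manager-vars.sh, reconstructed from the set -x trace above.
# Values mirror the trace; the actual file in /opt may differ.
export NUMBER_OF_NODES=6
export CEPH_VERSION=reef
export CONFIGURATION_VERSION=main
export MANAGER_VERSION=latest
export OPENSTACK_VERSION=2024.2
export ARA=false
export DEPLOY_MODE=manager
export TEMPEST=true
export IS_ZUUL=true
export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.1
export EXTERNAL_API=false
export IMAGE_USER=ubuntu
export IMAGE_NODE_USER=ubuntu
export CEPH_STACK=ceph-ansible
```

The `[[ latest != \l\a\t\e\s\t ]]` / `[[ latest == \l\a\t\e\s\t ]]` lines that follow in the trace are simply `bash -x` echoing the branch on `MANAGER_VERSION`, with the literal compared string escaped character by character.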
2025-07-29 00:14:59.558527 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-29 00:14:59.558538 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-29 00:14:59.558554 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-07-29 00:14:59.567060 | orchestrator | + set -e 2025-07-29 00:14:59.567149 | orchestrator | + VERSION=reef 2025-07-29 00:14:59.568296 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-07-29 00:14:59.575060 | orchestrator | + [[ -n ceph_version: reef ]] 2025-07-29 00:14:59.575091 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-07-29 00:14:59.581888 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-07-29 00:14:59.589392 | orchestrator | + set -e 2025-07-29 00:14:59.589444 | orchestrator | + VERSION=2024.2 2025-07-29 00:14:59.590456 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-07-29 00:14:59.594527 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-07-29 00:14:59.594595 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-07-29 00:14:59.601318 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-07-29 00:14:59.602292 | orchestrator | ++ semver latest 7.0.0 2025-07-29 00:14:59.675398 | orchestrator | + [[ -1 -ge 0 ]] 2025-07-29 00:14:59.675511 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-29 00:14:59.675527 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-07-29 00:14:59.675539 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-07-29 00:14:59.775290 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-07-29 00:14:59.781830 | orchestrator | + source /opt/venv/bin/activate 2025-07-29 00:14:59.782922 | orchestrator | ++ deactivate nondestructive 
2025-07-29 00:14:59.782956 | orchestrator | ++ '[' -n '' ']' 2025-07-29 00:14:59.782969 | orchestrator | ++ '[' -n '' ']' 2025-07-29 00:14:59.782983 | orchestrator | ++ hash -r 2025-07-29 00:14:59.783009 | orchestrator | ++ '[' -n '' ']' 2025-07-29 00:14:59.783025 | orchestrator | ++ unset VIRTUAL_ENV 2025-07-29 00:14:59.783037 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-07-29 00:14:59.783222 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-07-29 00:14:59.783260 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-07-29 00:14:59.783282 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-07-29 00:14:59.783293 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-07-29 00:14:59.783304 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-07-29 00:14:59.783321 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-29 00:14:59.783347 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-29 00:14:59.783367 | orchestrator | ++ export PATH 2025-07-29 00:14:59.783387 | orchestrator | ++ '[' -n '' ']' 2025-07-29 00:14:59.783622 | orchestrator | ++ '[' -z '' ']' 2025-07-29 00:14:59.783641 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-07-29 00:14:59.783652 | orchestrator | ++ PS1='(venv) ' 2025-07-29 00:14:59.783663 | orchestrator | ++ export PS1 2025-07-29 00:14:59.783674 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-07-29 00:14:59.783685 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-07-29 00:14:59.783695 | orchestrator | ++ hash -r 2025-07-29 00:14:59.784091 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-07-29 00:15:01.163527 | orchestrator | 2025-07-29 00:15:01.163662 | orchestrator | PLAY [Copy custom facts] 
******************************************************* 2025-07-29 00:15:01.163680 | orchestrator | 2025-07-29 00:15:01.163761 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-07-29 00:15:01.755909 | orchestrator | ok: [testbed-manager] 2025-07-29 00:15:01.756045 | orchestrator | 2025-07-29 00:15:01.756065 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-07-29 00:15:02.796287 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:02.796407 | orchestrator | 2025-07-29 00:15:02.796424 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-07-29 00:15:02.796437 | orchestrator | 2025-07-29 00:15:02.796448 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-29 00:15:05.369154 | orchestrator | ok: [testbed-manager] 2025-07-29 00:15:05.369230 | orchestrator | 2025-07-29 00:15:05.369237 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-07-29 00:15:05.419123 | orchestrator | ok: [testbed-manager] 2025-07-29 00:15:05.419188 | orchestrator | 2025-07-29 00:15:05.419197 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-07-29 00:15:05.879985 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:05.880100 | orchestrator | 2025-07-29 00:15:05.880117 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-07-29 00:15:05.922448 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:15:05.922534 | orchestrator | 2025-07-29 00:15:05.922548 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-07-29 00:15:06.282306 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:06.282441 | orchestrator | 2025-07-29 00:15:06.282458 | orchestrator | TASK [Use insecure 
glance configuration] *************************************** 2025-07-29 00:15:06.339387 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:15:06.339502 | orchestrator | 2025-07-29 00:15:06.339519 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-07-29 00:15:06.670463 | orchestrator | ok: [testbed-manager] 2025-07-29 00:15:06.670571 | orchestrator | 2025-07-29 00:15:06.670587 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-07-29 00:15:06.792540 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:15:06.792649 | orchestrator | 2025-07-29 00:15:06.792667 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-07-29 00:15:06.792681 | orchestrator | 2025-07-29 00:15:06.792695 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-29 00:15:08.608595 | orchestrator | ok: [testbed-manager] 2025-07-29 00:15:08.608695 | orchestrator | 2025-07-29 00:15:08.608749 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-07-29 00:15:08.724645 | orchestrator | included: osism.services.traefik for testbed-manager 2025-07-29 00:15:08.724807 | orchestrator | 2025-07-29 00:15:08.724825 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-07-29 00:15:08.779548 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-07-29 00:15:08.779655 | orchestrator | 2025-07-29 00:15:08.779673 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-07-29 00:15:09.904663 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-07-29 00:15:09.904836 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2025-07-29 00:15:09.904852 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-07-29 00:15:09.904864 | orchestrator | 2025-07-29 00:15:09.904876 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-07-29 00:15:11.696972 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-07-29 00:15:11.697071 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-07-29 00:15:11.697089 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-07-29 00:15:11.697101 | orchestrator | 2025-07-29 00:15:11.697114 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-07-29 00:15:12.340542 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-29 00:15:12.340651 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:12.340667 | orchestrator | 2025-07-29 00:15:12.340680 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-07-29 00:15:12.994249 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-29 00:15:12.994339 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:12.994351 | orchestrator | 2025-07-29 00:15:12.994361 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-07-29 00:15:13.053973 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:15:13.054139 | orchestrator | 2025-07-29 00:15:13.054164 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-07-29 00:15:13.422552 | orchestrator | ok: [testbed-manager] 2025-07-29 00:15:13.422664 | orchestrator | 2025-07-29 00:15:13.422681 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-07-29 00:15:13.504898 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-07-29 00:15:13.504998 | orchestrator | 2025-07-29 00:15:13.505016 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-07-29 00:15:14.647450 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:14.647544 | orchestrator | 2025-07-29 00:15:14.647559 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-07-29 00:15:15.464064 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:15.464171 | orchestrator | 2025-07-29 00:15:15.464186 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-07-29 00:15:26.864800 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:26.864939 | orchestrator | 2025-07-29 00:15:26.864958 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-07-29 00:15:26.927099 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:15:26.927198 | orchestrator | 2025-07-29 00:15:26.927212 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-07-29 00:15:26.927224 | orchestrator | 2025-07-29 00:15:26.927234 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-29 00:15:28.747335 | orchestrator | ok: [testbed-manager] 2025-07-29 00:15:28.747464 | orchestrator | 2025-07-29 00:15:28.747512 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-07-29 00:15:28.847556 | orchestrator | included: osism.services.manager for testbed-manager 2025-07-29 00:15:28.847657 | orchestrator | 2025-07-29 00:15:28.847726 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-07-29 00:15:28.909097 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-07-29 00:15:28.909182 | orchestrator | 2025-07-29 00:15:28.909192 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-07-29 00:15:31.472252 | orchestrator | ok: [testbed-manager] 2025-07-29 00:15:31.472355 | orchestrator | 2025-07-29 00:15:31.472370 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-07-29 00:15:31.530905 | orchestrator | ok: [testbed-manager] 2025-07-29 00:15:31.531011 | orchestrator | 2025-07-29 00:15:31.531029 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-07-29 00:15:31.663754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-07-29 00:15:31.663843 | orchestrator | 2025-07-29 00:15:31.663854 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-07-29 00:15:34.525073 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-07-29 00:15:34.525213 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-07-29 00:15:34.525237 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-07-29 00:15:34.525257 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-07-29 00:15:34.525275 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-07-29 00:15:34.525294 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-07-29 00:15:34.525312 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-07-29 00:15:34.525330 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-07-29 00:15:34.525349 | orchestrator | 2025-07-29 00:15:34.525369 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2025-07-29 00:15:35.160430 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:35.160533 | orchestrator | 2025-07-29 00:15:35.160548 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-07-29 00:15:35.817920 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:35.818077 | orchestrator | 2025-07-29 00:15:35.818096 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-07-29 00:15:35.900046 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-07-29 00:15:35.900254 | orchestrator | 2025-07-29 00:15:35.900284 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-07-29 00:15:37.137495 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-07-29 00:15:37.137618 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-07-29 00:15:37.137645 | orchestrator | 2025-07-29 00:15:37.137722 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-07-29 00:15:37.768071 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:37.768172 | orchestrator | 2025-07-29 00:15:37.768189 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-07-29 00:15:37.835179 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:15:37.835327 | orchestrator | 2025-07-29 00:15:37.835352 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-07-29 00:15:37.901120 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-07-29 00:15:37.901208 | orchestrator | 2025-07-29 00:15:37.901223 | orchestrator | TASK 
[osism.services.manager : Copy private ssh keys] ************************** 2025-07-29 00:15:39.290139 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-29 00:15:39.290251 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-29 00:15:39.290267 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:39.290280 | orchestrator | 2025-07-29 00:15:39.290291 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-07-29 00:15:39.936063 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:39.936183 | orchestrator | 2025-07-29 00:15:39.936205 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-07-29 00:15:39.989193 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:15:39.989278 | orchestrator | 2025-07-29 00:15:39.989288 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-07-29 00:15:40.080222 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-07-29 00:15:40.080324 | orchestrator | 2025-07-29 00:15:40.080339 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-07-29 00:15:40.610095 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:40.610203 | orchestrator | 2025-07-29 00:15:40.610221 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-07-29 00:15:41.031592 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:41.031756 | orchestrator | 2025-07-29 00:15:41.031846 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-07-29 00:15:42.296473 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-07-29 00:15:42.296700 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-07-29 
00:15:42.296722 | orchestrator | 2025-07-29 00:15:42.296735 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-07-29 00:15:42.938348 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:42.938436 | orchestrator | 2025-07-29 00:15:42.938448 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-07-29 00:15:43.350473 | orchestrator | ok: [testbed-manager] 2025-07-29 00:15:43.350580 | orchestrator | 2025-07-29 00:15:43.350595 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-07-29 00:15:43.692156 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:43.692263 | orchestrator | 2025-07-29 00:15:43.692279 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-07-29 00:15:43.737086 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:15:43.737179 | orchestrator | 2025-07-29 00:15:43.737193 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-07-29 00:15:43.794518 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-07-29 00:15:43.794613 | orchestrator | 2025-07-29 00:15:43.794628 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-07-29 00:15:43.836930 | orchestrator | ok: [testbed-manager] 2025-07-29 00:15:43.837026 | orchestrator | 2025-07-29 00:15:43.837042 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-07-29 00:15:45.855636 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-07-29 00:15:45.855819 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-07-29 00:15:45.855839 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-07-29 
00:15:45.855860 | orchestrator | 2025-07-29 00:15:45.855880 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-07-29 00:15:46.611387 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:46.611492 | orchestrator | 2025-07-29 00:15:46.611508 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-07-29 00:15:47.335054 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:47.335166 | orchestrator | 2025-07-29 00:15:47.335185 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-07-29 00:15:48.051328 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:48.051434 | orchestrator | 2025-07-29 00:15:48.051449 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-07-29 00:15:48.121492 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-07-29 00:15:48.121591 | orchestrator | 2025-07-29 00:15:48.121606 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-07-29 00:15:48.164018 | orchestrator | ok: [testbed-manager] 2025-07-29 00:15:48.164113 | orchestrator | 2025-07-29 00:15:48.164127 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-07-29 00:15:48.876264 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-07-29 00:15:48.876386 | orchestrator | 2025-07-29 00:15:48.876404 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-07-29 00:15:48.952119 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-07-29 00:15:48.952223 | orchestrator | 2025-07-29 00:15:48.952236 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2025-07-29 00:15:49.680269 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:49.680374 | orchestrator | 2025-07-29 00:15:49.680389 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-07-29 00:15:50.345150 | orchestrator | ok: [testbed-manager] 2025-07-29 00:15:50.345278 | orchestrator | 2025-07-29 00:15:50.345296 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-07-29 00:15:50.394369 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:15:50.394474 | orchestrator | 2025-07-29 00:15:50.394490 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-07-29 00:15:50.447075 | orchestrator | ok: [testbed-manager] 2025-07-29 00:15:50.447173 | orchestrator | 2025-07-29 00:15:50.447188 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-07-29 00:15:51.272725 | orchestrator | changed: [testbed-manager] 2025-07-29 00:15:51.272841 | orchestrator | 2025-07-29 00:15:51.272859 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-07-29 00:17:09.582092 | orchestrator | changed: [testbed-manager] 2025-07-29 00:17:09.582235 | orchestrator | 2025-07-29 00:17:09.582253 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-07-29 00:17:10.615850 | orchestrator | ok: [testbed-manager] 2025-07-29 00:17:10.615983 | orchestrator | 2025-07-29 00:17:10.615999 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-07-29 00:17:10.678501 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:17:10.678644 | orchestrator | 2025-07-29 00:17:10.678660 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 
2025-07-29 00:17:13.071157 | orchestrator | changed: [testbed-manager] 2025-07-29 00:17:13.071266 | orchestrator | 2025-07-29 00:17:13.071285 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-07-29 00:17:13.122989 | orchestrator | ok: [testbed-manager] 2025-07-29 00:17:13.123083 | orchestrator | 2025-07-29 00:17:13.123096 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-07-29 00:17:13.123108 | orchestrator | 2025-07-29 00:17:13.123120 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-07-29 00:17:13.185767 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:17:13.185853 | orchestrator | 2025-07-29 00:17:13.185867 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-07-29 00:18:13.238707 | orchestrator | Pausing for 60 seconds 2025-07-29 00:18:13.238833 | orchestrator | changed: [testbed-manager] 2025-07-29 00:18:13.238849 | orchestrator | 2025-07-29 00:18:13.238862 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-07-29 00:18:17.022994 | orchestrator | changed: [testbed-manager] 2025-07-29 00:18:17.023106 | orchestrator | 2025-07-29 00:18:17.023122 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-07-29 00:18:58.845889 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-07-29 00:18:58.846072 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-07-29 00:18:58.846093 | orchestrator | changed: [testbed-manager] 2025-07-29 00:18:58.846107 | orchestrator | 2025-07-29 00:18:58.846119 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-07-29 00:19:09.164145 | orchestrator | changed: [testbed-manager] 2025-07-29 00:19:09.164267 | orchestrator | 2025-07-29 00:19:09.164285 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-07-29 00:19:09.260211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-07-29 00:19:09.260342 | orchestrator | 2025-07-29 00:19:09.260357 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-07-29 00:19:09.260368 | orchestrator | 2025-07-29 00:19:09.260380 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-07-29 00:19:09.316688 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:19:09.316780 | orchestrator | 2025-07-29 00:19:09.316799 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-29 00:19:09.316820 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-07-29 00:19:09.316838 | orchestrator | 2025-07-29 00:19:09.415770 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-07-29 00:19:09.415861 | orchestrator | + deactivate 2025-07-29 00:19:09.415875 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-07-29 00:19:09.415887 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-29 00:19:09.415897 | orchestrator | + export PATH 2025-07-29 00:19:09.415907 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-07-29 
00:19:09.415918 | orchestrator | + '[' -n '' ']' 2025-07-29 00:19:09.415927 | orchestrator | + hash -r 2025-07-29 00:19:09.415937 | orchestrator | + '[' -n '' ']' 2025-07-29 00:19:09.415946 | orchestrator | + unset VIRTUAL_ENV 2025-07-29 00:19:09.415956 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-07-29 00:19:09.415986 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-07-29 00:19:09.415997 | orchestrator | + unset -f deactivate 2025-07-29 00:19:09.416007 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-07-29 00:19:09.422420 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-07-29 00:19:09.422449 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-07-29 00:19:09.422460 | orchestrator | + local max_attempts=60 2025-07-29 00:19:09.422469 | orchestrator | + local name=ceph-ansible 2025-07-29 00:19:09.422479 | orchestrator | + local attempt_num=1 2025-07-29 00:19:09.423845 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-29 00:19:09.471670 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-29 00:19:09.471756 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-07-29 00:19:09.471770 | orchestrator | + local max_attempts=60 2025-07-29 00:19:09.471781 | orchestrator | + local name=kolla-ansible 2025-07-29 00:19:09.471793 | orchestrator | + local attempt_num=1 2025-07-29 00:19:09.472005 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-07-29 00:19:09.523538 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-29 00:19:09.523615 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-07-29 00:19:09.523627 | orchestrator | + local max_attempts=60 2025-07-29 00:19:09.523638 | orchestrator | + local name=osism-ansible 2025-07-29 00:19:09.523649 | orchestrator | + local attempt_num=1 2025-07-29 00:19:09.524635 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' osism-ansible 2025-07-29 00:19:09.573107 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-29 00:19:09.573189 | orchestrator | + [[ true == \t\r\u\e ]] 2025-07-29 00:19:09.573202 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-07-29 00:19:10.395428 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-07-29 00:19:10.653198 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-07-29 00:19:10.653325 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-07-29 00:19:10.653349 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-07-29 00:19:10.653370 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-07-29 00:19:10.653453 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-07-29 00:19:10.653519 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-07-29 00:19:10.653540 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-07-29 00:19:10.653559 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy) 2025-07-29 00:19:10.653579 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-07-29 
00:19:10.653598 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-07-29 00:19:10.653617 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-07-29 00:19:10.653635 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-07-29 00:19:10.653654 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-07-29 00:19:10.653673 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-07-29 00:19:10.653691 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-07-29 00:19:10.664150 | orchestrator | ++ semver latest 7.0.0 2025-07-29 00:19:10.731002 | orchestrator | + [[ -1 -ge 0 ]] 2025-07-29 00:19:10.731101 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-29 00:19:10.731119 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-07-29 00:19:10.736151 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-07-29 00:19:23.040057 | orchestrator | 2025-07-29 00:19:23 | INFO  | Task 6190f7e1-73fd-43a5-ab6c-548b595926fb (resolvconf) was prepared for execution. 2025-07-29 00:19:23.040157 | orchestrator | 2025-07-29 00:19:23 | INFO  | It takes a moment until task 6190f7e1-73fd-43a5-ab6c-548b595926fb (resolvconf) has been started and output is visible here. 
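The `wait_for_container_healthy` calls traced in the xtrace above can be reconstructed as roughly the following bash helper. This is a sketch inferred from the trace only (`max_attempts`, `name`, `attempt_num`, and the `docker inspect` health probe are visible); the sleep interval and the failure message are assumptions not shown in the log.

```shell
# Sketch of wait_for_container_healthy as suggested by the xtrace output.
# Polls the container's Docker health status until it reports "healthy"
# or max_attempts is exhausted. The retry delay and error text are
# assumptions; only the variable names and the inspect call appear in
# the trace.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container ${name} did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5
    done
}
```

In the log it is invoked once per manager service container, e.g. `wait_for_container_healthy 60 ceph-ansible`, and returns immediately here because all containers already report `(healthy)`.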
2025-07-29 00:19:42.329684 | orchestrator | 2025-07-29 00:19:42.329822 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-07-29 00:19:42.329841 | orchestrator | 2025-07-29 00:19:42.329856 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-29 00:19:42.329868 | orchestrator | Tuesday 29 July 2025 00:19:28 +0000 (0:00:00.105) 0:00:00.105 ********** 2025-07-29 00:19:42.329880 | orchestrator | ok: [testbed-manager] 2025-07-29 00:19:42.329891 | orchestrator | 2025-07-29 00:19:42.329902 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-07-29 00:19:42.329914 | orchestrator | Tuesday 29 July 2025 00:19:32 +0000 (0:00:04.045) 0:00:04.151 ********** 2025-07-29 00:19:42.329925 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:19:42.329936 | orchestrator | 2025-07-29 00:19:42.329952 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-07-29 00:19:42.329963 | orchestrator | Tuesday 29 July 2025 00:19:32 +0000 (0:00:00.070) 0:00:04.221 ********** 2025-07-29 00:19:42.329997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-07-29 00:19:42.330009 | orchestrator | 2025-07-29 00:19:42.330074 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-07-29 00:19:42.330086 | orchestrator | Tuesday 29 July 2025 00:19:32 +0000 (0:00:00.109) 0:00:04.331 ********** 2025-07-29 00:19:42.330097 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-07-29 00:19:42.330108 | orchestrator | 2025-07-29 00:19:42.330119 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-07-29 00:19:42.330130 | orchestrator | Tuesday 29 July 2025 00:19:32 +0000 (0:00:00.079) 0:00:04.411 ********** 2025-07-29 00:19:42.330140 | orchestrator | ok: [testbed-manager] 2025-07-29 00:19:42.330151 | orchestrator | 2025-07-29 00:19:42.330162 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-07-29 00:19:42.330172 | orchestrator | Tuesday 29 July 2025 00:19:34 +0000 (0:00:01.703) 0:00:06.114 ********** 2025-07-29 00:19:42.330183 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:19:42.330194 | orchestrator | 2025-07-29 00:19:42.330205 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-07-29 00:19:42.330217 | orchestrator | Tuesday 29 July 2025 00:19:34 +0000 (0:00:00.067) 0:00:06.182 ********** 2025-07-29 00:19:42.330229 | orchestrator | ok: [testbed-manager] 2025-07-29 00:19:42.330241 | orchestrator | 2025-07-29 00:19:42.330254 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-07-29 00:19:42.330266 | orchestrator | Tuesday 29 July 2025 00:19:35 +0000 (0:00:00.747) 0:00:06.930 ********** 2025-07-29 00:19:42.330278 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:19:42.330290 | orchestrator | 2025-07-29 00:19:42.330302 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-07-29 00:19:42.330316 | orchestrator | Tuesday 29 July 2025 00:19:35 +0000 (0:00:00.092) 0:00:07.022 ********** 2025-07-29 00:19:42.330328 | orchestrator | changed: [testbed-manager] 2025-07-29 00:19:42.330340 | orchestrator | 2025-07-29 00:19:42.330371 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-07-29 00:19:42.330395 | orchestrator | Tuesday 29 July 2025 00:19:36 +0000 (0:00:01.066) 0:00:08.089 ********** 2025-07-29 00:19:42.330408 | orchestrator | changed: 
[testbed-manager] 2025-07-29 00:19:42.330419 | orchestrator | 2025-07-29 00:19:42.330432 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-07-29 00:19:42.330444 | orchestrator | Tuesday 29 July 2025 00:19:38 +0000 (0:00:01.822) 0:00:09.911 ********** 2025-07-29 00:19:42.330456 | orchestrator | ok: [testbed-manager] 2025-07-29 00:19:42.330468 | orchestrator | 2025-07-29 00:19:42.330480 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-07-29 00:19:42.330492 | orchestrator | Tuesday 29 July 2025 00:19:39 +0000 (0:00:01.484) 0:00:11.396 ********** 2025-07-29 00:19:42.330505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-07-29 00:19:42.330518 | orchestrator | 2025-07-29 00:19:42.330538 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-07-29 00:19:42.330551 | orchestrator | Tuesday 29 July 2025 00:19:39 +0000 (0:00:00.087) 0:00:11.484 ********** 2025-07-29 00:19:42.330563 | orchestrator | changed: [testbed-manager] 2025-07-29 00:19:42.330574 | orchestrator | 2025-07-29 00:19:42.330585 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-29 00:19:42.330596 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-29 00:19:42.330607 | orchestrator | 2025-07-29 00:19:42.330618 | orchestrator | 2025-07-29 00:19:42.330629 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-29 00:19:42.330648 | orchestrator | Tuesday 29 July 2025 00:19:41 +0000 (0:00:01.693) 0:00:13.177 ********** 2025-07-29 00:19:42.330659 | orchestrator | =============================================================================== 2025-07-29 00:19:42.330670 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.05s 2025-07-29 00:19:42.330681 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.82s 2025-07-29 00:19:42.330691 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.70s 2025-07-29 00:19:42.330702 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.69s 2025-07-29 00:19:42.330713 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.48s 2025-07-29 00:19:42.330724 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 1.07s 2025-07-29 00:19:42.330753 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.75s 2025-07-29 00:19:42.330765 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.11s 2025-07-29 00:19:42.330776 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2025-07-29 00:19:42.330786 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-07-29 00:19:42.330797 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-07-29 00:19:42.330808 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-07-29 00:19:42.330819 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-07-29 00:19:42.670984 | orchestrator | + osism apply sshconfig 2025-07-29 00:19:54.787181 | orchestrator | 2025-07-29 00:19:54 | INFO  | Task d206557b-4f84-45c2-a598-8a677cc0c194 (sshconfig) was prepared for execution. 
2025-07-29 00:19:54.787293 | orchestrator | 2025-07-29 00:19:54 | INFO  | It takes a moment until task d206557b-4f84-45c2-a598-8a677cc0c194 (sshconfig) has been started and output is visible here. 2025-07-29 00:20:10.972106 | orchestrator | 2025-07-29 00:20:10.972203 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-07-29 00:20:10.972218 | orchestrator | 2025-07-29 00:20:10.972230 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-07-29 00:20:10.972241 | orchestrator | Tuesday 29 July 2025 00:20:00 +0000 (0:00:00.110) 0:00:00.110 ********** 2025-07-29 00:20:10.972248 | orchestrator | ok: [testbed-manager] 2025-07-29 00:20:10.972257 | orchestrator | 2025-07-29 00:20:10.972263 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-07-29 00:20:10.972270 | orchestrator | Tuesday 29 July 2025 00:20:01 +0000 (0:00:00.721) 0:00:00.832 ********** 2025-07-29 00:20:10.972276 | orchestrator | changed: [testbed-manager] 2025-07-29 00:20:10.972283 | orchestrator | 2025-07-29 00:20:10.972289 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-07-29 00:20:10.972295 | orchestrator | Tuesday 29 July 2025 00:20:01 +0000 (0:00:00.810) 0:00:01.642 ********** 2025-07-29 00:20:10.972301 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-07-29 00:20:10.972308 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-07-29 00:20:10.972314 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-07-29 00:20:10.972362 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-07-29 00:20:10.972369 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-07-29 00:20:10.972375 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-07-29 00:20:10.972399 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-2) 2025-07-29 00:20:10.972406 | orchestrator | 2025-07-29 00:20:10.972413 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-07-29 00:20:10.972419 | orchestrator | Tuesday 29 July 2025 00:20:09 +0000 (0:00:07.722) 0:00:09.365 ********** 2025-07-29 00:20:10.972446 | orchestrator | skipping: [testbed-manager] 2025-07-29 00:20:10.972452 | orchestrator | 2025-07-29 00:20:10.972458 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-07-29 00:20:10.972465 | orchestrator | Tuesday 29 July 2025 00:20:09 +0000 (0:00:00.062) 0:00:09.427 ********** 2025-07-29 00:20:10.972471 | orchestrator | changed: [testbed-manager] 2025-07-29 00:20:10.972477 | orchestrator | 2025-07-29 00:20:10.972483 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-29 00:20:10.972490 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-29 00:20:10.972498 | orchestrator | 2025-07-29 00:20:10.972504 | orchestrator | 2025-07-29 00:20:10.972510 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-29 00:20:10.972516 | orchestrator | Tuesday 29 July 2025 00:20:10 +0000 (0:00:00.804) 0:00:10.232 ********** 2025-07-29 00:20:10.972523 | orchestrator | =============================================================================== 2025-07-29 00:20:10.972529 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 7.72s 2025-07-29 00:20:10.972535 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.81s 2025-07-29 00:20:10.972541 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.80s 2025-07-29 00:20:10.972547 | orchestrator | osism.commons.sshconfig : Get home directory of operator user 
----------- 0.72s 2025-07-29 00:20:10.972554 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-07-29 00:20:11.248500 | orchestrator | + osism apply known-hosts 2025-07-29 00:20:23.144493 | orchestrator | 2025-07-29 00:20:23 | INFO  | Task 3ec01bec-9af1-44d1-9cbf-07fc3974023e (known-hosts) was prepared for execution. 2025-07-29 00:20:23.144599 | orchestrator | 2025-07-29 00:20:23 | INFO  | It takes a moment until task 3ec01bec-9af1-44d1-9cbf-07fc3974023e (known-hosts) has been started and output is visible here. 2025-07-29 00:20:36.549199 | orchestrator | 2025-07-29 00:20:36 | INFO  | Task 27646de0-b1af-44ae-858a-9bc04cd15810 (known-hosts) was prepared for execution. 2025-07-29 00:20:36.549424 | orchestrator | 2025-07-29 00:20:36 | INFO  | It takes a moment until task 27646de0-b1af-44ae-858a-9bc04cd15810 (known-hosts) has been started and output is visible here. 2025-07-29 00:20:49.137965 | orchestrator | 2025-07-29 00:20:49.138198 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-07-29 00:20:49.138223 | orchestrator | 2025-07-29 00:20:49.138236 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-07-29 00:20:49.138248 | orchestrator | Tuesday 29 July 2025 00:20:28 +0000 (0:00:00.109) 0:00:00.109 ********** 2025-07-29 00:20:49.138261 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-07-29 00:20:49.138272 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-07-29 00:20:49.138311 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-07-29 00:20:49.138323 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-07-29 00:20:49.138334 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-07-29 00:20:49.138345 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-07-29 00:20:49.138356 | orchestrator | ok: [testbed-manager] => 
(item=testbed-node-5) 2025-07-29 00:20:49.138367 | orchestrator | 2025-07-29 00:20:49.138378 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-07-29 00:20:49.138391 | orchestrator | Tuesday 29 July 2025 00:20:35 +0000 (0:00:06.836) 0:00:06.946 ********** 2025-07-29 00:20:49.138403 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-07-29 00:20:49.138415 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-07-29 00:20:49.138452 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-07-29 00:20:49.138477 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-07-29 00:20:49.138491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-07-29 00:20:49.138503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-07-29 00:20:49.138516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-07-29 00:20:49.138527 | orchestrator | 2025-07-29 00:20:49.138540 | orchestrator | TASK 
[osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-29 00:20:49.138553 | orchestrator | Tuesday 29 July 2025 00:20:35 +0000 (0:00:00.182) 0:00:07.128 ********** 2025-07-29 00:20:49.138566 | orchestrator | [ERROR]: Task failed: Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result. 2025-07-29 00:20:49.138580 | orchestrator |  2025-07-29 00:20:49.138593 | orchestrator | Task failed. 2025-07-29 00:20:49.138605 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:2:3 2025-07-29 00:20:49.138618 | orchestrator |  2025-07-29 00:20:49.138630 | orchestrator | 1 --- 2025-07-29 00:20:49.138643 | orchestrator | 2 - name: Write scanned known_hosts entries 2025-07-29 00:20:49.138655 | orchestrator |  ^ column 3 2025-07-29 00:20:49.138667 | orchestrator |  2025-07-29 00:20:49.138679 | orchestrator | <<< caused by >>> 2025-07-29 00:20:49.138691 | orchestrator |  2025-07-29 00:20:49.138703 | orchestrator | Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result. 2025-07-29 00:20:49.138716 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:12:7 2025-07-29 00:20:49.138727 | orchestrator |  2025-07-29 00:20:49.138739 | orchestrator | 10 when: 2025-07-29 00:20:49.138751 | orchestrator | 11 - item['stdout_lines'] is defined 2025-07-29 00:20:49.138765 | orchestrator | 12 - item['stdout_lines'] | length 2025-07-29 00:20:49.138778 | orchestrator |  ^ column 7 2025-07-29 00:20:49.138790 | orchestrator |  2025-07-29 00:20:49.138803 | orchestrator | Broken conditionals can be temporarily allowed with the `ALLOW_BROKEN_CONDITIONALS` configuration option. 
2025-07-29 00:20:49.138815 | orchestrator |  2025-07-29 00:20:49.138827 | orchestrator | failed: [testbed-manager] (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBuGTTOOWA/23Zo6VUl9Yz3MbNcsLtZ+wrQ4hDi+mCEAKx7Q6GUQVkJIxN3EEarijIj4BUKPObBNzxhhpPfU5og=) => changed=false  2025-07-29 00:20:49.138840 | orchestrator |  ansible_loop_var: inner_item 2025-07-29 00:20:49.138851 | orchestrator |  inner_item: testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBuGTTOOWA/23Zo6VUl9Yz3MbNcsLtZ+wrQ4hDi+mCEAKx7Q6GUQVkJIxN3EEarijIj4BUKPObBNzxhhpPfU5og= 2025-07-29 00:20:49.138863 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.' 2025-07-29 00:20:49.138897 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjJsl8w+SsBhALbDq/+6qzXxQ7c8ZMN4du5lPNVG+bAH0h2yBvZICgzpxHDfKInsE2ny69207pNsYT90ZUTcyCl/5/PEDme+AdFIozLa3voEqbfyrJoV46Fx/KCPgexvrc/l1+fihoEGvW1XwS6nGhr1Ndoe+hWPRYfgiKKh8d0mTpReoMIdN4haS5FnjrW7GNEMKTgWIlawzPZO367IHeZMOFCysmg8LXEG9GN3zkexZrclwQsmJ9+amHd/64z8Kg7tMq/3JTAE0mS+TjhM3nGkNWg0WFekUx2x3/qTFqPCFA25MRGgMfvhLh/o/i2fSLIfFu3Jqm+6hRRaoNpBXWQck1Cgvlpnc2/fZTSIiCOoRUzThl9ojtv1KYVms92eo3Q8IyH5vN58seRzXRvSluSNE3eEsPFMdKplOHEF/wnWLCjFMIRrOGsI60ihMV5G1AHZdmGtxJwne81l5uwGusgqkxuzYNUS9j39wNCczivYkGU5Z4oDOLHEzqnA9xcQk=) => changed=false  2025-07-29 00:20:49.138920 | orchestrator |  ansible_loop_var: inner_item 2025-07-29 00:20:49.138932 | orchestrator |  inner_item: testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCjJsl8w+SsBhALbDq/+6qzXxQ7c8ZMN4du5lPNVG+bAH0h2yBvZICgzpxHDfKInsE2ny69207pNsYT90ZUTcyCl/5/PEDme+AdFIozLa3voEqbfyrJoV46Fx/KCPgexvrc/l1+fihoEGvW1XwS6nGhr1Ndoe+hWPRYfgiKKh8d0mTpReoMIdN4haS5FnjrW7GNEMKTgWIlawzPZO367IHeZMOFCysmg8LXEG9GN3zkexZrclwQsmJ9+amHd/64z8Kg7tMq/3JTAE0mS+TjhM3nGkNWg0WFekUx2x3/qTFqPCFA25MRGgMfvhLh/o/i2fSLIfFu3Jqm+6hRRaoNpBXWQck1Cgvlpnc2/fZTSIiCOoRUzThl9ojtv1KYVms92eo3Q8IyH5vN58seRzXRvSluSNE3eEsPFMdKplOHEF/wnWLCjFMIRrOGsI60ihMV5G1AHZdmGtxJwne81l5uwGusgqkxuzYNUS9j39wNCczivYkGU5Z4oDOLHEzqnA9xcQk= 2025-07-29 00:20:49.138944 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.' 2025-07-29 00:20:49.139018 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFsUQUWJfvwZ73sfOEC/HExwmUHWKv4mq2/Bhjj68+/l) => changed=false  2025-07-29 00:20:49.139030 | orchestrator |  ansible_loop_var: inner_item 2025-07-29 00:20:49.139041 | orchestrator |  inner_item: testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFsUQUWJfvwZ73sfOEC/HExwmUHWKv4mq2/Bhjj68+/l 2025-07-29 00:20:49.139053 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.' 
2025-07-29 00:20:49.139063 | orchestrator | 2025-07-29 00:20:49.139074 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-29 00:20:49.139086 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-07-29 00:20:49.139097 | orchestrator | 2025-07-29 00:20:49.139107 | orchestrator | 2025-07-29 00:20:49.139118 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-29 00:20:49.139129 | orchestrator | Tuesday 29 July 2025 00:20:35 +0000 (0:00:00.096) 0:00:07.225 ********** 2025-07-29 00:20:49.139140 | orchestrator | =============================================================================== 2025-07-29 00:20:49.139150 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.84s 2025-07-29 00:20:49.139161 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2025-07-29 00:20:49.139171 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.10s 2025-07-29 00:20:49.139182 | orchestrator | 2025-07-29 00:20:49.139193 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-07-29 00:20:49.139203 | orchestrator | 2025-07-29 00:20:49.139213 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-07-29 00:20:49.139224 | orchestrator | Tuesday 29 July 2025 00:20:42 +0000 (0:00:00.150) 0:00:00.151 ********** 2025-07-29 00:20:49.139235 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-07-29 00:20:49.139245 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-07-29 00:20:49.139256 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-07-29 00:20:49.139267 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-07-29 00:20:49.139299 | 
orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-07-29 00:20:49.139310 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-07-29 00:20:49.139321 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-07-29 00:20:49.139332 | orchestrator | 2025-07-29 00:20:49.139342 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-07-29 00:20:49.139353 | orchestrator | Tuesday 29 July 2025 00:20:48 +0000 (0:00:06.419) 0:00:06.570 ********** 2025-07-29 00:20:49.139377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-07-29 00:20:49.139397 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-07-29 00:20:49.139416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-07-29 00:20:49.139448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-07-29 00:20:49.729856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-07-29 00:20:49.729963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-07-29 00:20:49.729977 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-07-29 00:20:49.729989 | orchestrator | 2025-07-29 00:20:49.730001 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-29 00:20:49.730013 | orchestrator | Tuesday 29 July 2025 00:20:49 +0000 (0:00:00.187) 0:00:06.758 ********** 2025-07-29 00:20:49.730087 | orchestrator | [ERROR]: Task failed: Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result. 2025-07-29 00:20:49.730099 | orchestrator |  2025-07-29 00:20:49.730111 | orchestrator | Task failed. 2025-07-29 00:20:49.730123 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:2:3 2025-07-29 00:20:49.730134 | orchestrator |  2025-07-29 00:20:49.730145 | orchestrator | 1 --- 2025-07-29 00:20:49.730156 | orchestrator | 2 - name: Write scanned known_hosts entries 2025-07-29 00:20:49.730167 | orchestrator |  ^ column 3 2025-07-29 00:20:49.730178 | orchestrator |  2025-07-29 00:20:49.730188 | orchestrator | <<< caused by >>> 2025-07-29 00:20:49.730199 | orchestrator |  2025-07-29 00:20:49.730211 | orchestrator | Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result. 
2025-07-29 00:20:49.730222 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:12:7 2025-07-29 00:20:49.730233 | orchestrator |  2025-07-29 00:20:49.730243 | orchestrator | 10 when: 2025-07-29 00:20:49.730255 | orchestrator | 11 - item['stdout_lines'] is defined 2025-07-29 00:20:49.730266 | orchestrator | 12 - item['stdout_lines'] | length 2025-07-29 00:20:49.730336 | orchestrator |  ^ column 7 2025-07-29 00:20:49.730349 | orchestrator |  2025-07-29 00:20:49.730380 | orchestrator | Broken conditionals can be temporarily allowed with the `ALLOW_BROKEN_CONDITIONALS` configuration option. 2025-07-29 00:20:49.730393 | orchestrator |  2025-07-29 00:20:49.730406 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFsUQUWJfvwZ73sfOEC/HExwmUHWKv4mq2/Bhjj68+/l) => changed=false  2025-07-29 00:20:49.730419 | orchestrator |  ansible_loop_var: inner_item 2025-07-29 00:20:49.730432 | orchestrator |  inner_item: testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFsUQUWJfvwZ73sfOEC/HExwmUHWKv4mq2/Bhjj68+/l 2025-07-29 00:20:49.730445 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.' 
2025-07-29 00:20:49.730483 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjJsl8w+SsBhALbDq/+6qzXxQ7c8ZMN4du5lPNVG+bAH0h2yBvZICgzpxHDfKInsE2ny69207pNsYT90ZUTcyCl/5/PEDme+AdFIozLa3voEqbfyrJoV46Fx/KCPgexvrc/l1+fihoEGvW1XwS6nGhr1Ndoe+hWPRYfgiKKh8d0mTpReoMIdN4haS5FnjrW7GNEMKTgWIlawzPZO367IHeZMOFCysmg8LXEG9GN3zkexZrclwQsmJ9+amHd/64z8Kg7tMq/3JTAE0mS+TjhM3nGkNWg0WFekUx2x3/qTFqPCFA25MRGgMfvhLh/o/i2fSLIfFu3Jqm+6hRRaoNpBXWQck1Cgvlpnc2/fZTSIiCOoRUzThl9ojtv1KYVms92eo3Q8IyH5vN58seRzXRvSluSNE3eEsPFMdKplOHEF/wnWLCjFMIRrOGsI60ihMV5G1AHZdmGtxJwne81l5uwGusgqkxuzYNUS9j39wNCczivYkGU5Z4oDOLHEzqnA9xcQk=) => changed=false  2025-07-29 00:20:49.730500 | orchestrator |  ansible_loop_var: inner_item 2025-07-29 00:20:49.730513 | orchestrator |  inner_item: testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjJsl8w+SsBhALbDq/+6qzXxQ7c8ZMN4du5lPNVG+bAH0h2yBvZICgzpxHDfKInsE2ny69207pNsYT90ZUTcyCl/5/PEDme+AdFIozLa3voEqbfyrJoV46Fx/KCPgexvrc/l1+fihoEGvW1XwS6nGhr1Ndoe+hWPRYfgiKKh8d0mTpReoMIdN4haS5FnjrW7GNEMKTgWIlawzPZO367IHeZMOFCysmg8LXEG9GN3zkexZrclwQsmJ9+amHd/64z8Kg7tMq/3JTAE0mS+TjhM3nGkNWg0WFekUx2x3/qTFqPCFA25MRGgMfvhLh/o/i2fSLIfFu3Jqm+6hRRaoNpBXWQck1Cgvlpnc2/fZTSIiCOoRUzThl9ojtv1KYVms92eo3Q8IyH5vN58seRzXRvSluSNE3eEsPFMdKplOHEF/wnWLCjFMIRrOGsI60ihMV5G1AHZdmGtxJwne81l5uwGusgqkxuzYNUS9j39wNCczivYkGU5Z4oDOLHEzqnA9xcQk= 2025-07-29 00:20:49.730526 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.' 
2025-07-29 00:20:49.730540 | orchestrator | failed: [testbed-manager] (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBuGTTOOWA/23Zo6VUl9Yz3MbNcsLtZ+wrQ4hDi+mCEAKx7Q6GUQVkJIxN3EEarijIj4BUKPObBNzxhhpPfU5og=) => changed=false
2025-07-29 00:20:49.730554 | orchestrator |  ansible_loop_var: inner_item
2025-07-29 00:20:49.730586 | orchestrator |  inner_item: testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBuGTTOOWA/23Zo6VUl9Yz3MbNcsLtZ+wrQ4hDi+mCEAKx7Q6GUQVkJIxN3EEarijIj4BUKPObBNzxhhpPfU5og=
2025-07-29 00:20:49.730599 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-29 00:20:49.730612 | orchestrator |
2025-07-29 00:20:49.730625 | orchestrator | PLAY RECAP *********************************************************************
2025-07-29 00:20:49.730638 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-07-29 00:20:49.730650 | orchestrator |
2025-07-29 00:20:49.730662 | orchestrator |
2025-07-29 00:20:49.730675 | orchestrator | TASKS RECAP ********************************************************************
2025-07-29 00:20:49.730688 | orchestrator | Tuesday 29 July 2025 00:20:49 +0000 (0:00:00.095) 0:00:06.853 **********
2025-07-29 00:20:49.730700 | orchestrator | ===============================================================================
2025-07-29 00:20:49.730712 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.42s
2025-07-29 00:20:49.730725 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s
2025-07-29 00:20:49.730737 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.10s
2025-07-29 00:20:50.191918 | orchestrator | ERROR
2025-07-29 00:20:50.192396 | orchestrator | {
2025-07-29 00:20:50.192516 | orchestrator |   "delta": "0:05:52.878574",
2025-07-29 00:20:50.192587 | orchestrator |   "end": "2025-07-29 00:20:49.990434",
2025-07-29 00:20:50.192648 | orchestrator |   "msg": "non-zero return code",
2025-07-29 00:20:50.192703 | orchestrator |   "rc": 2,
2025-07-29 00:20:50.192756 | orchestrator |   "start": "2025-07-29 00:14:57.111860"
2025-07-29 00:20:50.192806 | orchestrator | } failure
2025-07-29 00:20:50.216894 |
2025-07-29 00:20:50.217003 | PLAY RECAP
2025-07-29 00:20:50.217073 | orchestrator | ok: 20 changed: 7 unreachable: 0 failed: 1 skipped: 2 rescued: 0 ignored: 0
2025-07-29 00:20:50.217421 |
2025-07-29 00:20:50.377502 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-07-29 00:20:50.379851 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-29 00:20:51.129024 |
2025-07-29 00:20:51.129272 | PLAY [Post output play]
2025-07-29 00:20:51.145917 |
2025-07-29 00:20:51.146070 | LOOP [stage-output : Register sources]
2025-07-29 00:20:51.208476 |
2025-07-29 00:20:51.208705 | TASK [stage-output : Check sudo]
2025-07-29 00:20:52.063915 | orchestrator | sudo: a password is required
2025-07-29 00:20:52.243713 | orchestrator | ok: Runtime: 0:00:00.016331
2025-07-29 00:20:52.259350 |
2025-07-29 00:20:52.259515 | LOOP [stage-output : Set source and destination for files and folders]
2025-07-29 00:20:52.299176 |
2025-07-29 00:20:52.299496 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-07-29 00:20:52.368537 | orchestrator | ok
2025-07-29 00:20:52.377999 |
2025-07-29 00:20:52.378137 | LOOP [stage-output : Ensure target folders exist]
2025-07-29 00:20:52.835459 | orchestrator | ok: "docs"
2025-07-29 00:20:52.835794 |
2025-07-29 00:20:53.082448 | orchestrator | ok: "artifacts"
2025-07-29 00:20:53.335481 | orchestrator | ok: "logs"
2025-07-29 00:20:53.354055 |
2025-07-29 00:20:53.354227 | LOOP [stage-output : Copy files and folders to staging folder]
2025-07-29 00:20:53.387035 |
2025-07-29 00:20:53.387238 | TASK [stage-output : Make all log files readable]
2025-07-29 00:20:53.678045 | orchestrator | ok
2025-07-29 00:20:53.687764 |
2025-07-29 00:20:53.687895 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-07-29 00:20:53.722680 | orchestrator | skipping: Conditional result was False
2025-07-29 00:20:53.737065 |
2025-07-29 00:20:53.737210 | TASK [stage-output : Discover log files for compression]
2025-07-29 00:20:53.761947 | orchestrator | skipping: Conditional result was False
2025-07-29 00:20:53.776138 |
2025-07-29 00:20:53.776315 | LOOP [stage-output : Archive everything from logs]
2025-07-29 00:20:53.818884 |
2025-07-29 00:20:53.819037 | PLAY [Post cleanup play]
2025-07-29 00:20:53.826942 |
2025-07-29 00:20:53.827043 | TASK [Set cloud fact (Zuul deployment)]
2025-07-29 00:20:53.885651 | orchestrator | ok
2025-07-29 00:20:53.897625 |
2025-07-29 00:20:53.897756 | TASK [Set cloud fact (local deployment)]
2025-07-29 00:20:53.932434 | orchestrator | skipping: Conditional result was False
2025-07-29 00:20:53.947545 |
2025-07-29 00:20:53.947685 | TASK [Clean the cloud environment]
2025-07-29 00:20:54.531533 | orchestrator | 2025-07-29 00:20:54 - clean up servers
2025-07-29 00:20:55.265916 | orchestrator | 2025-07-29 00:20:55 - testbed-manager
2025-07-29 00:20:55.348971 | orchestrator | 2025-07-29 00:20:55 - testbed-node-5
2025-07-29 00:20:55.435822 | orchestrator | 2025-07-29 00:20:55 - testbed-node-3
2025-07-29 00:20:55.535143 | orchestrator | 2025-07-29 00:20:55 - testbed-node-0
2025-07-29 00:20:55.632207 | orchestrator | 2025-07-29 00:20:55 - testbed-node-1
2025-07-29 00:20:55.723438 | orchestrator | 2025-07-29 00:20:55 - testbed-node-4
2025-07-29 00:20:55.834183 | orchestrator | 2025-07-29 00:20:55 - testbed-node-2
2025-07-29 00:20:55.928450 | orchestrator | 2025-07-29 00:20:55 - clean up keypairs
2025-07-29 00:20:55.949953 | orchestrator | 2025-07-29 00:20:55 - testbed
2025-07-29 00:20:55.977361 | orchestrator | 2025-07-29 00:20:55 - wait for servers to be gone
2025-07-29 00:21:06.872951 | orchestrator | 2025-07-29 00:21:06 - clean up ports
2025-07-29 00:21:07.059171 | orchestrator | 2025-07-29 00:21:07 - 0f42e152-ffac-449b-b7ad-2a39bc26dd44
2025-07-29 00:21:07.318720 | orchestrator | 2025-07-29 00:21:07 - 1bab64b2-7a89-4b02-8c0e-cdd6a6cf4c50
2025-07-29 00:21:07.608118 | orchestrator | 2025-07-29 00:21:07 - 2f081021-198a-4cbc-bd7f-3b769825b3bc
2025-07-29 00:21:07.825724 | orchestrator | 2025-07-29 00:21:07 - 62ef0d41-6fd8-495e-a051-7f48574d5bbd
2025-07-29 00:21:08.073417 | orchestrator | 2025-07-29 00:21:08 - 70fc26b0-5220-4a9a-a265-840f0e9385f2
2025-07-29 00:21:08.490742 | orchestrator | 2025-07-29 00:21:08 - 8d1ebf95-cfd2-4ffc-be1f-17615c8591fc
2025-07-29 00:21:08.721073 | orchestrator | 2025-07-29 00:21:08 - c1b4c9ed-3eb2-42e3-aaef-6e976317a9f4
2025-07-29 00:21:08.930629 | orchestrator | 2025-07-29 00:21:08 - clean up volumes
2025-07-29 00:21:09.054908 | orchestrator | 2025-07-29 00:21:09 - testbed-volume-0-node-base
2025-07-29 00:21:09.100753 | orchestrator | 2025-07-29 00:21:09 - testbed-volume-5-node-base
2025-07-29 00:21:09.144715 | orchestrator | 2025-07-29 00:21:09 - testbed-volume-1-node-base
2025-07-29 00:21:09.188460 | orchestrator | 2025-07-29 00:21:09 - testbed-volume-4-node-base
2025-07-29 00:21:09.235017 | orchestrator | 2025-07-29 00:21:09 - testbed-volume-3-node-base
2025-07-29 00:21:09.278462 | orchestrator | 2025-07-29 00:21:09 - testbed-volume-2-node-base
2025-07-29 00:21:09.320223 | orchestrator | 2025-07-29 00:21:09 - testbed-volume-manager-base
2025-07-29 00:21:09.365072 | orchestrator | 2025-07-29 00:21:09 - testbed-volume-2-node-5
2025-07-29 00:21:09.408574 | orchestrator | 2025-07-29 00:21:09 - testbed-volume-7-node-4
2025-07-29 00:21:09.452808 | orchestrator | 2025-07-29 00:21:09 - testbed-volume-1-node-4
2025-07-29 00:21:09.497367 | orchestrator | 2025-07-29 00:21:09 - testbed-volume-6-node-3
2025-07-29 00:21:09.544908 | orchestrator | 2025-07-29 00:21:09 - testbed-volume-5-node-5
2025-07-29 00:21:09.588184 | orchestrator | 2025-07-29 00:21:09 - testbed-volume-4-node-4
2025-07-29 00:21:09.630365 | orchestrator | 2025-07-29 00:21:09 - testbed-volume-3-node-3
2025-07-29 00:21:09.678162 | orchestrator | 2025-07-29 00:21:09 - testbed-volume-0-node-3
2025-07-29 00:21:09.726551 | orchestrator | 2025-07-29 00:21:09 - testbed-volume-8-node-5
2025-07-29 00:21:09.783240 | orchestrator | 2025-07-29 00:21:09 - disconnect routers
2025-07-29 00:21:09.924231 | orchestrator | 2025-07-29 00:21:09 - testbed
2025-07-29 00:21:10.831542 | orchestrator | 2025-07-29 00:21:10 - clean up subnets
2025-07-29 00:21:10.875089 | orchestrator | 2025-07-29 00:21:10 - subnet-testbed-management
2025-07-29 00:21:11.053102 | orchestrator | 2025-07-29 00:21:11 - clean up networks
2025-07-29 00:21:11.231127 | orchestrator | 2025-07-29 00:21:11 - net-testbed-management
2025-07-29 00:21:11.559764 | orchestrator | 2025-07-29 00:21:11 - clean up security groups
2025-07-29 00:21:12.076335 | orchestrator | 2025-07-29 00:21:12 - testbed-management
2025-07-29 00:21:12.194733 | orchestrator | 2025-07-29 00:21:12 - testbed-node
2025-07-29 00:21:12.331603 | orchestrator | 2025-07-29 00:21:12 - clean up floating ips
2025-07-29 00:21:12.368369 | orchestrator | 2025-07-29 00:21:12 - 81.163.193.1
2025-07-29 00:21:12.721935 | orchestrator | 2025-07-29 00:21:12 - clean up routers
2025-07-29 00:21:12.833529 | orchestrator | 2025-07-29 00:21:12 - testbed
2025-07-29 00:21:14.500431 | orchestrator | ok: Runtime: 0:00:19.886248
2025-07-29 00:21:14.504750 |
2025-07-29 00:21:14.504914 | PLAY RECAP
2025-07-29 00:21:14.505039 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-07-29 00:21:14.505094 |
2025-07-29 00:21:14.638720 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-29 00:21:14.641166 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-29 00:21:15.378963 |
2025-07-29 00:21:15.379133 | PLAY [Cleanup play]
2025-07-29 00:21:15.396423 |
2025-07-29 00:21:15.396575 | TASK [Set cloud fact (Zuul deployment)]
2025-07-29 00:21:15.462082 | orchestrator | ok
2025-07-29 00:21:15.472300 |
2025-07-29 00:21:15.472465 | TASK [Set cloud fact (local deployment)]
2025-07-29 00:21:15.507085 | orchestrator | skipping: Conditional result was False
2025-07-29 00:21:15.524157 |
2025-07-29 00:21:15.524315 | TASK [Clean the cloud environment]
2025-07-29 00:21:16.679848 | orchestrator | 2025-07-29 00:21:16 - clean up servers
2025-07-29 00:21:17.152130 | orchestrator | 2025-07-29 00:21:17 - clean up keypairs
2025-07-29 00:21:17.172375 | orchestrator | 2025-07-29 00:21:17 - wait for servers to be gone
2025-07-29 00:21:17.216454 | orchestrator | 2025-07-29 00:21:17 - clean up ports
2025-07-29 00:21:17.289175 | orchestrator | 2025-07-29 00:21:17 - clean up volumes
2025-07-29 00:21:17.350182 | orchestrator | 2025-07-29 00:21:17 - disconnect routers
2025-07-29 00:21:17.380062 | orchestrator | 2025-07-29 00:21:17 - clean up subnets
2025-07-29 00:21:17.397191 | orchestrator | 2025-07-29 00:21:17 - clean up networks
2025-07-29 00:21:17.566673 | orchestrator | 2025-07-29 00:21:17 - clean up security groups
2025-07-29 00:21:17.603585 | orchestrator | 2025-07-29 00:21:17 - clean up floating ips
2025-07-29 00:21:17.631532 | orchestrator | 2025-07-29 00:21:17 - clean up routers
2025-07-29 00:21:18.062961 | orchestrator | ok: Runtime: 0:00:01.386372
2025-07-29 00:21:18.066874 |
2025-07-29 00:21:18.067040 | PLAY RECAP
2025-07-29 00:21:18.067159 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-07-29 00:21:18.067223 |
2025-07-29 00:21:18.192187 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-29 00:21:18.194714 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-29 00:21:18.922009 |
2025-07-29 00:21:18.922184 | PLAY [Base post-fetch]
2025-07-29 00:21:18.937564 |
2025-07-29 00:21:18.937695 | TASK [fetch-output : Set log path for multiple nodes]
2025-07-29 00:21:18.993710 | orchestrator | skipping: Conditional result was False
2025-07-29 00:21:19.008015 |
2025-07-29 00:21:19.008210 | TASK [fetch-output : Set log path for single node]
2025-07-29 00:21:19.048803 | orchestrator | ok
2025-07-29 00:21:19.057932 |
2025-07-29 00:21:19.058071 | LOOP [fetch-output : Ensure local output dirs]
2025-07-29 00:21:19.539429 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/38c167d54a044001a5a1ff61df7ed5b1/work/logs"
2025-07-29 00:21:19.814423 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/38c167d54a044001a5a1ff61df7ed5b1/work/artifacts"
2025-07-29 00:21:20.096803 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/38c167d54a044001a5a1ff61df7ed5b1/work/docs"
2025-07-29 00:21:20.118085 |
2025-07-29 00:21:20.118244 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-07-29 00:21:21.055114 | orchestrator | changed: .d..t...... ./
2025-07-29 00:21:21.055486 | orchestrator | changed: All items complete
2025-07-29 00:21:21.055541 |
2025-07-29 00:21:21.770223 | orchestrator | changed: .d..t...... ./
2025-07-29 00:21:22.533338 | orchestrator | changed: .d..t...... ./
2025-07-29 00:21:22.565763 |
2025-07-29 00:21:22.566756 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-07-29 00:21:22.603699 | orchestrator | skipping: Conditional result was False
2025-07-29 00:21:22.606177 | orchestrator | skipping: Conditional result was False
2025-07-29 00:21:22.631758 |
2025-07-29 00:21:22.631881 | PLAY RECAP
2025-07-29 00:21:22.631960 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-07-29 00:21:22.632002 |
2025-07-29 00:21:22.756155 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-29 00:21:22.757174 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-29 00:21:23.493781 |
2025-07-29 00:21:23.493936 | PLAY [Base post]
2025-07-29 00:21:23.508158 |
2025-07-29 00:21:23.508322 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-07-29 00:21:24.495699 | orchestrator | changed
2025-07-29 00:21:24.503911 |
2025-07-29 00:21:24.504024 | PLAY RECAP
2025-07-29 00:21:24.504087 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-07-29 00:21:24.504150 |
2025-07-29 00:21:24.625741 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-29 00:21:24.628409 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-07-29 00:21:25.435688 |
2025-07-29 00:21:25.435869 | PLAY [Base post-logs]
2025-07-29 00:21:25.446966 |
2025-07-29 00:21:25.447125 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-07-29 00:21:25.902822 | localhost | changed
2025-07-29 00:21:25.920532 |
2025-07-29 00:21:25.920728 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-07-29 00:21:25.959012 | localhost | ok
2025-07-29 00:21:25.964359 |
2025-07-29 00:21:25.964502 | TASK [Set zuul-log-path fact]
2025-07-29 00:21:25.982314 | localhost | ok
2025-07-29 00:21:25.995741 |
2025-07-29 00:21:25.995898 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-29 00:21:26.024906 | localhost | ok
2025-07-29 00:21:26.031459 |
2025-07-29 00:21:26.031609 | TASK [upload-logs : Create log directories]
2025-07-29 00:21:26.553294 | localhost | changed
2025-07-29 00:21:26.559228 |
2025-07-29 00:21:26.559474 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-07-29 00:21:27.063826 | localhost -> localhost | ok: Runtime: 0:00:00.007555
2025-07-29 00:21:27.073560 |
2025-07-29 00:21:27.073790 | TASK [upload-logs : Upload logs to log server]
2025-07-29 00:21:27.629345 | localhost | Output suppressed because no_log was given
2025-07-29 00:21:27.632025 |
2025-07-29 00:21:27.632184 | LOOP [upload-logs : Compress console log and json output]
2025-07-29 00:21:27.689867 | localhost | skipping: Conditional result was False
2025-07-29 00:21:27.695316 | localhost | skipping: Conditional result was False
2025-07-29 00:21:27.706444 |
2025-07-29 00:21:27.706646 | LOOP [upload-logs : Upload compressed console log and json output]
2025-07-29 00:21:27.752417 | localhost | skipping: Conditional result was False
2025-07-29 00:21:27.753108 |
2025-07-29 00:21:27.756445 | localhost | skipping: Conditional result was False
2025-07-29 00:21:27.770406 |
2025-07-29 00:21:27.770673 | LOOP [upload-logs : Upload console log and json output]
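
Editor's note on the failure recorded earlier in this log: the deploy run failed in `osism/commons/roles/known_hosts/tasks/write-scanned.yml:12`, where the `when` clause `item['stdout_lines'] | length` returns an integer (here `3`), and the ansible-core version in use rejects non-boolean conditional results. Below is a minimal sketch of the likely fix, comparing the length explicitly so the conditional yields a boolean. Everything except the `when` clause and the `inner_item` loop variable (both visible in the traceback) is a hypothetical reconstruction, not the actual role source:

```yaml
# Hypothetical reconstruction of the failing task; only the `when`
# clause and the loop_var name come from the log above.
- name: Write scanned known_hosts entries
  ansible.builtin.known_hosts:
    name: "{{ inner_item.split(' ') | first }}"
    key: "{{ inner_item }}"
  loop: "{{ item['stdout_lines'] | default([]) }}"
  loop_control:
    loop_var: inner_item
  when:
    - item['stdout_lines'] is defined
    - item['stdout_lines'] | length > 0   # explicit comparison -> boolean
```

As the log notes, `ALLOW_BROKEN_CONDITIONALS` can paper over this temporarily, but the explicit `> 0` comparison is the durable fix.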