2025-04-17 00:00:10.153305 | Job console starting...
2025-04-17 00:00:10.197237 | Updating repositories
2025-04-17 00:00:10.839509 | Preparing job workspace
2025-04-17 00:00:13.185914 | Running Ansible setup...
2025-04-17 00:00:21.215668 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-04-17 00:00:22.658806 |
2025-04-17 00:00:22.658964 | PLAY [Base pre]
2025-04-17 00:00:22.699339 |
2025-04-17 00:00:22.699465 | TASK [Setup log path fact]
2025-04-17 00:00:22.760946 | orchestrator | ok
2025-04-17 00:00:22.815179 |
2025-04-17 00:00:22.815311 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-04-17 00:00:22.910792 | orchestrator | ok
2025-04-17 00:00:22.954515 |
2025-04-17 00:00:22.954625 | TASK [emit-job-header : Print job information]
2025-04-17 00:00:23.062156 | # Job Information
2025-04-17 00:00:23.062313 | Ansible Version: 2.15.3
2025-04-17 00:00:23.062348 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-04-17 00:00:23.062377 | Pipeline: periodic-midnight
2025-04-17 00:00:23.062398 | Executor: 7d211f194f6a
2025-04-17 00:00:23.062417 | Triggered by: https://github.com/osism/testbed
2025-04-17 00:00:23.062436 | Event ID: 365c01c298aa48aea1db3d8d4e05ae00
2025-04-17 00:00:23.085602 |
2025-04-17 00:00:23.085725 | LOOP [emit-job-header : Print node information]
2025-04-17 00:00:23.378630 | orchestrator | ok:
2025-04-17 00:00:23.378796 | orchestrator | # Node Information
2025-04-17 00:00:23.378841 | orchestrator | Inventory Hostname: orchestrator
2025-04-17 00:00:23.378866 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-04-17 00:00:23.378889 | orchestrator | Username: zuul-testbed03
2025-04-17 00:00:23.378910 | orchestrator | Distro: Debian 12.10
2025-04-17 00:00:23.378934 | orchestrator | Provider: static-testbed
2025-04-17 00:00:23.378995 | orchestrator | Label: testbed-orchestrator
2025-04-17 00:00:23.379018 | orchestrator | Product Name: OpenStack Nova
2025-04-17 00:00:23.379039 | orchestrator | Interface IP: 81.163.193.140
2025-04-17 00:00:23.406593 |
2025-04-17 00:00:23.406710 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-04-17 00:00:24.250126 | orchestrator -> localhost | changed
2025-04-17 00:00:24.269011 |
2025-04-17 00:00:24.269121 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-04-17 00:00:26.655410 | orchestrator -> localhost | changed
2025-04-17 00:00:26.675620 |
2025-04-17 00:00:26.675752 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-04-17 00:00:27.301753 | orchestrator -> localhost | ok
2025-04-17 00:00:27.308968 |
2025-04-17 00:00:27.309076 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-04-17 00:00:27.415491 | orchestrator | ok
2025-04-17 00:00:27.433076 | orchestrator | included: /var/lib/zuul/builds/4c4b8d7ec84f46ec8e44de13e39c3e5a/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-04-17 00:00:27.456519 |
2025-04-17 00:00:27.456623 | TASK [add-build-sshkey : Create Temp SSH key]
2025-04-17 00:00:28.884677 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-04-17 00:00:28.885746 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/4c4b8d7ec84f46ec8e44de13e39c3e5a/work/4c4b8d7ec84f46ec8e44de13e39c3e5a_id_rsa
2025-04-17 00:00:28.885816 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/4c4b8d7ec84f46ec8e44de13e39c3e5a/work/4c4b8d7ec84f46ec8e44de13e39c3e5a_id_rsa.pub
2025-04-17 00:00:28.885866 | orchestrator -> localhost | The key fingerprint is:
2025-04-17 00:00:28.885888 | orchestrator -> localhost | SHA256:YyZGk2Gc1Dp/SXP21sCjy0E98DfAVWi5SEc7zB92ofg zuul-build-sshkey
2025-04-17 00:00:28.885906 | orchestrator -> localhost | The key's randomart image is:
2025-04-17 00:00:28.885923 | orchestrator -> localhost | +---[RSA 3072]----+
2025-04-17 00:00:28.885939 | orchestrator -> localhost | | o+o ..o=o|
2025-04-17 00:00:28.885955 | orchestrator -> localhost | | .oo. +=*..|
2025-04-17 00:00:28.885979 | orchestrator -> localhost | | +. o X*+.|
2025-04-17 00:00:28.885996 | orchestrator -> localhost | | .o. o * X++|
2025-04-17 00:00:28.886011 | orchestrator -> localhost | | ooS. * E *o|
2025-04-17 00:00:28.886027 | orchestrator -> localhost | | . +..o o o .|
2025-04-17 00:00:28.886052 | orchestrator -> localhost | | . . + |
2025-04-17 00:00:28.886069 | orchestrator -> localhost | | o |
2025-04-17 00:00:28.886085 | orchestrator -> localhost | | |
2025-04-17 00:00:28.886101 | orchestrator -> localhost | +----[SHA256]-----+
2025-04-17 00:00:28.886150 | orchestrator -> localhost | ok: Runtime: 0:00:00.510345
2025-04-17 00:00:28.894448 |
2025-04-17 00:00:28.894531 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-04-17 00:00:28.936806 | orchestrator | ok
2025-04-17 00:00:28.961759 | orchestrator | included: /var/lib/zuul/builds/4c4b8d7ec84f46ec8e44de13e39c3e5a/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-04-17 00:00:28.989504 |
2025-04-17 00:00:28.989583 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-04-17 00:00:29.055158 | orchestrator | skipping: Conditional result was False
2025-04-17 00:00:29.062863 |
2025-04-17 00:00:29.063035 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-04-17 00:00:29.692944 | orchestrator | changed
2025-04-17 00:00:29.699663 |
2025-04-17 00:00:29.699746 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-04-17 00:00:29.947362 | orchestrator | ok
2025-04-17 00:00:29.959474 |
2025-04-17 00:00:29.959570 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-04-17 00:00:30.481920 | orchestrator | ok
2025-04-17 00:00:30.490095 |
2025-04-17 00:00:30.490181 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-04-17 00:00:30.856691 | orchestrator | ok
2025-04-17 00:00:30.873016 |
2025-04-17 00:00:30.873109 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-04-17 00:00:30.926371 | orchestrator | skipping: Conditional result was False
2025-04-17 00:00:30.936096 |
2025-04-17 00:00:30.936194 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-04-17 00:00:31.357041 | orchestrator -> localhost | changed
2025-04-17 00:00:31.380070 |
2025-04-17 00:00:31.380190 | TASK [add-build-sshkey : Add back temp key]
2025-04-17 00:00:31.891655 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/4c4b8d7ec84f46ec8e44de13e39c3e5a/work/4c4b8d7ec84f46ec8e44de13e39c3e5a_id_rsa (zuul-build-sshkey)
2025-04-17 00:00:31.891890 | orchestrator -> localhost | ok: Runtime: 0:00:00.029992
2025-04-17 00:00:31.900373 |
2025-04-17 00:00:31.900514 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-04-17 00:00:32.325916 | orchestrator | ok
2025-04-17 00:00:32.337281 |
2025-04-17 00:00:32.337386 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-04-17 00:00:32.377896 | orchestrator | skipping: Conditional result was False
2025-04-17 00:00:32.398990 |
2025-04-17 00:00:32.399096 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-04-17 00:00:32.751480 | orchestrator | ok
2025-04-17 00:00:32.801602 |
2025-04-17 00:00:32.801722 | TASK [validate-host : Define zuul_info_dir fact]
2025-04-17 00:00:32.878012 | orchestrator | ok
2025-04-17 00:00:32.893951 |
2025-04-17 00:00:32.894060 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-04-17 00:00:33.369296 | orchestrator -> localhost | ok
2025-04-17 00:00:33.378212 |
2025-04-17 00:00:33.378305 | TASK [validate-host : Collect information about the host]
2025-04-17 00:00:34.828142 | orchestrator | ok
2025-04-17 00:00:34.843690 |
2025-04-17 00:00:34.843782 | TASK [validate-host : Sanitize hostname]
2025-04-17 00:00:34.902794 | orchestrator | ok
2025-04-17 00:00:34.917244 |
2025-04-17 00:00:34.917338 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-04-17 00:00:35.800964 | orchestrator -> localhost | changed
2025-04-17 00:00:35.807209 |
2025-04-17 00:00:35.807293 | TASK [validate-host : Collect information about zuul worker]
2025-04-17 00:00:36.486486 | orchestrator | ok
2025-04-17 00:00:36.495913 |
2025-04-17 00:00:36.496013 | TASK [validate-host : Write out all zuul information for each host]
2025-04-17 00:00:37.162922 | orchestrator -> localhost | changed
2025-04-17 00:00:37.177139 |
2025-04-17 00:00:37.177229 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-04-17 00:00:37.461885 | orchestrator | ok
2025-04-17 00:00:37.470646 |
2025-04-17 00:00:37.470739 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-04-17 00:01:00.323346 | orchestrator | changed:
2025-04-17 00:01:00.323563 | orchestrator | .d..t...... src/
2025-04-17 00:01:00.323599 | orchestrator | .d..t...... src/github.com/
2025-04-17 00:01:00.323623 | orchestrator | .d..t...... src/github.com/osism/
2025-04-17 00:01:00.323644 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-04-17 00:01:00.323664 | orchestrator | RedHat.yml
2025-04-17 00:01:00.338512 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-04-17 00:01:00.338529 | orchestrator | RedHat.yml
2025-04-17 00:01:00.338581 | orchestrator | = 2.2.0"...
2025-04-17 00:01:12.819678 | orchestrator | 00:01:12.819 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-04-17 00:01:12.899882 | orchestrator | 00:01:12.899 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-04-17 00:01:14.196738 | orchestrator | 00:01:14.196 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-04-17 00:01:15.217259 | orchestrator | 00:01:15.216 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-04-17 00:01:16.553709 | orchestrator | 00:01:16.553 STDOUT terraform: - Installing hashicorp/null v3.2.3...
2025-04-17 00:01:17.546801 | orchestrator | 00:01:17.546 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80)
2025-04-17 00:01:18.773970 | orchestrator | 00:01:18.773 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-04-17 00:01:20.090554 | orchestrator | 00:01:20.090 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-04-17 00:01:20.090625 | orchestrator | 00:01:20.090 STDOUT terraform: Providers are signed by their developers.
2025-04-17 00:01:20.090784 | orchestrator | 00:01:20.090 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-04-17 00:01:20.090869 | orchestrator | 00:01:20.090 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-04-17 00:01:20.091078 | orchestrator | 00:01:20.090 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-04-17 00:01:20.091254 | orchestrator | 00:01:20.091 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-04-17 00:01:20.091401 | orchestrator | 00:01:20.091 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-04-17 00:01:20.091471 | orchestrator | 00:01:20.091 STDOUT terraform: you run "tofu init" in the future.
2025-04-17 00:01:20.091597 | orchestrator | 00:01:20.091 STDOUT terraform: OpenTofu has been successfully initialized!
2025-04-17 00:01:20.091719 | orchestrator | 00:01:20.091 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-04-17 00:01:20.091903 | orchestrator | 00:01:20.091 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-04-17 00:01:20.091966 | orchestrator | 00:01:20.091 STDOUT terraform: should now work.
2025-04-17 00:01:20.092174 | orchestrator | 00:01:20.091 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-04-17 00:01:20.092334 | orchestrator | 00:01:20.092 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-04-17 00:01:20.092441 | orchestrator | 00:01:20.092 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-04-17 00:01:20.264886 | orchestrator | 00:01:20.264 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-04-17 00:01:20.441715 | orchestrator | 00:01:20.441 STDOUT terraform: Created and switched to workspace "ci"!
2025-04-17 00:01:20.441800 | orchestrator | 00:01:20.441 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-04-17 00:01:20.441948 | orchestrator | 00:01:20.441 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-04-17 00:01:20.441968 | orchestrator | 00:01:20.441 STDOUT terraform: for this configuration.
2025-04-17 00:01:20.652938 | orchestrator | 00:01:20.652 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
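(Editor's note: the provider requirements that drive this "tofu init" are not part of the log. A minimal versions.tf sketch that would produce the installations above could look like the following; the ">= 2.2.0" constraint on hashicorp/local is inferred from the truncated "Finding ... = 2.2.0" line earlier in the log and is an assumption, as is the absence of a version constraint on hashicorp/null.)

    terraform {
      required_providers {
        local = {
          source  = "hashicorp/local"
          version = ">= 2.2.0" # assumption: matches the truncated constraint in the init output
        }
        null = {
          source = "hashicorp/null" # no constraint: init reports "Finding latest version"
        }
        openstack = {
          source  = "terraform-provider-openstack/openstack"
          version = ">= 1.53.0" # shown verbatim in the init output
        }
      }
    }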
2025-04-17 00:01:20.746004 | orchestrator | 00:01:20.745 STDOUT terraform: ci.auto.tfvars
2025-04-17 00:01:20.938427 | orchestrator | 00:01:20.938 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-04-17 00:01:21.771015 | orchestrator | 00:01:21.770 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-04-17 00:01:22.319665 | orchestrator | 00:01:22.319 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-04-17 00:01:22.505477 | orchestrator | 00:01:22.505 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-04-17 00:01:22.505528 | orchestrator | 00:01:22.505 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-04-17 00:01:22.505537 | orchestrator | 00:01:22.505 STDOUT terraform:   + create
2025-04-17 00:01:22.505583 | orchestrator | 00:01:22.505 STDOUT terraform:  <= read (data resources)
2025-04-17 00:01:22.505591 | orchestrator | 00:01:22.505 STDOUT terraform: OpenTofu will perform the following actions:
2025-04-17 00:01:22.505599 | orchestrator | 00:01:22.505 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-04-17 00:01:22.505626 | orchestrator | 00:01:22.505 STDOUT terraform:   # (config refers to values not yet known)
2025-04-17 00:01:22.505634 | orchestrator | 00:01:22.505 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-04-17 00:01:22.505657 | orchestrator | 00:01:22.505 STDOUT terraform:       + checksum = (known after apply)
2025-04-17 00:01:22.505687 | orchestrator | 00:01:22.505 STDOUT terraform:       + created_at = (known after apply)
2025-04-17 00:01:22.505716 | orchestrator | 00:01:22.505 STDOUT terraform:       + file = (known after apply)
2025-04-17 00:01:22.505746 | orchestrator | 00:01:22.505 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.505777 | orchestrator | 00:01:22.505 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.505805 | orchestrator | 00:01:22.505 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-04-17 00:01:22.505835 | orchestrator | 00:01:22.505 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-04-17 00:01:22.505856 | orchestrator | 00:01:22.505 STDOUT terraform:       + most_recent = true
2025-04-17 00:01:22.505881 | orchestrator | 00:01:22.505 STDOUT terraform:       + name = (known after apply)
2025-04-17 00:01:22.505915 | orchestrator | 00:01:22.505 STDOUT terraform:       + protected = (known after apply)
2025-04-17 00:01:22.505938 | orchestrator | 00:01:22.505 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.505967 | orchestrator | 00:01:22.505 STDOUT terraform:       + schema = (known after apply)
2025-04-17 00:01:22.505995 | orchestrator | 00:01:22.505 STDOUT terraform:       + size_bytes = (known after apply)
2025-04-17 00:01:22.506026 | orchestrator | 00:01:22.505 STDOUT terraform:       + tags = (known after apply)
2025-04-17 00:01:22.506082 | orchestrator | 00:01:22.506 STDOUT terraform:       + updated_at = (known after apply)
2025-04-17 00:01:22.506147 | orchestrator | 00:01:22.506 STDOUT terraform:     }
2025-04-17 00:01:22.506157 | orchestrator | 00:01:22.506 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-04-17 00:01:22.506164 | orchestrator | 00:01:22.506 STDOUT terraform:   # (config refers to values not yet known)
2025-04-17 00:01:22.506203 | orchestrator | 00:01:22.506 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-04-17 00:01:22.506233 | orchestrator | 00:01:22.506 STDOUT terraform:       + checksum = (known after apply)
2025-04-17 00:01:22.506260 | orchestrator | 00:01:22.506 STDOUT terraform:       + created_at = (known after apply)
2025-04-17 00:01:22.506290 | orchestrator | 00:01:22.506 STDOUT terraform:       + file = (known after apply)
2025-04-17 00:01:22.506320 | orchestrator | 00:01:22.506 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.506348 | orchestrator | 00:01:22.506 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.506377 | orchestrator | 00:01:22.506 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-04-17 00:01:22.506405 | orchestrator | 00:01:22.506 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-04-17 00:01:22.506423 | orchestrator | 00:01:22.506 STDOUT terraform:       + most_recent = true
2025-04-17 00:01:22.506464 | orchestrator | 00:01:22.506 STDOUT terraform:       + name = (known after apply)
2025-04-17 00:01:22.506494 | orchestrator | 00:01:22.506 STDOUT terraform:       + protected = (known after apply)
2025-04-17 00:01:22.506523 | orchestrator | 00:01:22.506 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.506552 | orchestrator | 00:01:22.506 STDOUT terraform:       + schema = (known after apply)
2025-04-17 00:01:22.506581 | orchestrator | 00:01:22.506 STDOUT terraform:       + size_bytes = (known after apply)
2025-04-17 00:01:22.506610 | orchestrator | 00:01:22.506 STDOUT terraform:       + tags = (known after apply)
2025-04-17 00:01:22.506642 | orchestrator | 00:01:22.506 STDOUT terraform:       + updated_at = (known after apply)
2025-04-17 00:01:22.506648 | orchestrator | 00:01:22.506 STDOUT terraform:     }
2025-04-17 00:01:22.506683 | orchestrator | 00:01:22.506 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-04-17 00:01:22.506714 | orchestrator | 00:01:22.506 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-04-17 00:01:22.506749 | orchestrator | 00:01:22.506 STDOUT terraform:       + content = (known after apply)
2025-04-17 00:01:22.506783 | orchestrator | 00:01:22.506 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-04-17 00:01:22.506817 | orchestrator | 00:01:22.506 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-04-17 00:01:22.506853 | orchestrator | 00:01:22.506 STDOUT terraform:       + content_md5 = (known after apply)
2025-04-17 00:01:22.506889 | orchestrator | 00:01:22.506 STDOUT terraform:       + content_sha1 = (known after apply)
2025-04-17 00:01:22.506921 | orchestrator | 00:01:22.506 STDOUT terraform:       + content_sha256 = (known after apply)
2025-04-17 00:01:22.506957 | orchestrator | 00:01:22.506 STDOUT terraform:       + content_sha512 = (known after apply)
2025-04-17 00:01:22.506980 | orchestrator | 00:01:22.506 STDOUT terraform:       + directory_permission = "0777"
2025-04-17 00:01:22.507004 | orchestrator | 00:01:22.506 STDOUT terraform:       + file_permission = "0644"
2025-04-17 00:01:22.507039 | orchestrator | 00:01:22.506 STDOUT terraform:       + filename = ".MANAGER_ADDRESS.ci"
2025-04-17 00:01:22.507083 | orchestrator | 00:01:22.507 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.507091 | orchestrator | 00:01:22.507 STDOUT terraform:     }
2025-04-17 00:01:22.507120 | orchestrator | 00:01:22.507 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-04-17 00:01:22.507143 | orchestrator | 00:01:22.507 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-04-17 00:01:22.507179 | orchestrator | 00:01:22.507 STDOUT terraform:       + content = (known after apply)
2025-04-17 00:01:22.507213 | orchestrator | 00:01:22.507 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-04-17 00:01:22.507247 | orchestrator | 00:01:22.507 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-04-17 00:01:22.507281 | orchestrator | 00:01:22.507 STDOUT terraform:       + content_md5 = (known after apply)
2025-04-17 00:01:22.507317 | orchestrator | 00:01:22.507 STDOUT terraform:       + content_sha1 = (known after apply)
2025-04-17 00:01:22.507351 | orchestrator | 00:01:22.507 STDOUT terraform:       + content_sha256 = (known after apply)
2025-04-17 00:01:22.507385 | orchestrator | 00:01:22.507 STDOUT terraform:       + content_sha512 = (known after apply)
2025-04-17 00:01:22.507408 | orchestrator | 00:01:22.507 STDOUT terraform:       + directory_permission = "0777"
2025-04-17 00:01:22.507432 | orchestrator | 00:01:22.507 STDOUT terraform:       + file_permission = "0644"
2025-04-17 00:01:22.507467 | orchestrator | 00:01:22.507 STDOUT terraform:       + filename = ".id_rsa.ci.pub"
2025-04-17 00:01:22.507505 | orchestrator | 00:01:22.507 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.507537 | orchestrator | 00:01:22.507 STDOUT terraform:     }
2025-04-17 00:01:22.507546 | orchestrator | 00:01:22.507 STDOUT terraform:   # local_file.inventory will be created
2025-04-17 00:01:22.507587 | orchestrator | 00:01:22.507 STDOUT terraform:   + resource "local_file" "inventory" {
2025-04-17 00:01:22.507594 | orchestrator | 00:01:22.507 STDOUT terraform:       + content = (known after apply)
2025-04-17 00:01:22.507622 | orchestrator | 00:01:22.507 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-04-17 00:01:22.507655 | orchestrator | 00:01:22.507 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-04-17 00:01:22.507691 | orchestrator | 00:01:22.507 STDOUT terraform:       + content_md5 = (known after apply)
2025-04-17 00:01:22.507726 | orchestrator | 00:01:22.507 STDOUT terraform:       + content_sha1 = (known after apply)
2025-04-17 00:01:22.507761 | orchestrator | 00:01:22.507 STDOUT terraform:       + content_sha256 = (known after apply)
2025-04-17 00:01:22.507795 | orchestrator | 00:01:22.507 STDOUT terraform:       + content_sha512 = (known after apply)
2025-04-17 00:01:22.507818 | orchestrator | 00:01:22.507 STDOUT terraform:       + directory_permission = "0777"
2025-04-17 00:01:22.507840 | orchestrator | 00:01:22.507 STDOUT terraform:       + file_permission = "0644"
2025-04-17 00:01:22.507872 | orchestrator | 00:01:22.507 STDOUT terraform:       + filename = "inventory.ci"
2025-04-17 00:01:22.507906 | orchestrator | 00:01:22.507 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.507913 | orchestrator | 00:01:22.507 STDOUT terraform:     }
2025-04-17 00:01:22.507965 | orchestrator | 00:01:22.507 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-04-17 00:01:22.507995 | orchestrator | 00:01:22.507 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-04-17 00:01:22.508026 | orchestrator | 00:01:22.507 STDOUT terraform:       + content = (sensitive value)
2025-04-17 00:01:22.508060 | orchestrator | 00:01:22.508 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-04-17 00:01:22.508104 | orchestrator | 00:01:22.508 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-04-17 00:01:22.508137 | orchestrator | 00:01:22.508 STDOUT terraform:       + content_md5 = (known after apply)
2025-04-17 00:01:22.508176 | orchestrator | 00:01:22.508 STDOUT terraform:       + content_sha1 = (known after apply)
2025-04-17 00:01:22.508207 | orchestrator | 00:01:22.508 STDOUT terraform:       + content_sha256 = (known after apply)
2025-04-17 00:01:22.508241 | orchestrator | 00:01:22.508 STDOUT terraform:       + content_sha512 = (known after apply)
2025-04-17 00:01:22.508264 | orchestrator | 00:01:22.508 STDOUT terraform:       + directory_permission = "0700"
2025-04-17 00:01:22.508288 | orchestrator | 00:01:22.508 STDOUT terraform:       + file_permission = "0600"
2025-04-17 00:01:22.508318 | orchestrator | 00:01:22.508 STDOUT terraform:       + filename = ".id_rsa.ci"
2025-04-17 00:01:22.508355 | orchestrator | 00:01:22.508 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.508362 | orchestrator | 00:01:22.508 STDOUT terraform:     }
2025-04-17 00:01:22.508393 | orchestrator | 00:01:22.508 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-04-17 00:01:22.508422 | orchestrator | 00:01:22.508 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-04-17 00:01:22.508440 | orchestrator | 00:01:22.508 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.508447 | orchestrator | 00:01:22.508 STDOUT terraform:     }
2025-04-17 00:01:22.508498 | orchestrator | 00:01:22.508 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-04-17 00:01:22.508544 | orchestrator | 00:01:22.508 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-04-17 00:01:22.508575 | orchestrator | 00:01:22.508 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.508592 | orchestrator | 00:01:22.508 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.508623 | orchestrator | 00:01:22.508 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.508653 | orchestrator | 00:01:22.508 STDOUT terraform:       + image_id = (known after apply)
2025-04-17 00:01:22.508684 | orchestrator | 00:01:22.508 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.508723 | orchestrator | 00:01:22.508 STDOUT terraform:       + name = "testbed-volume-manager-base"
2025-04-17 00:01:22.508753 | orchestrator | 00:01:22.508 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.508771 | orchestrator | 00:01:22.508 STDOUT terraform:       + size = 80
2025-04-17 00:01:22.508789 | orchestrator | 00:01:22.508 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.508796 | orchestrator | 00:01:22.508 STDOUT terraform:     }
2025-04-17 00:01:22.508845 | orchestrator | 00:01:22.508 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-04-17 00:01:22.508890 | orchestrator | 00:01:22.508 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-17 00:01:22.508921 | orchestrator | 00:01:22.508 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.508939 | orchestrator | 00:01:22.508 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.508967 | orchestrator | 00:01:22.508 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.508998 | orchestrator | 00:01:22.508 STDOUT terraform:       + image_id = (known after apply)
2025-04-17 00:01:22.509027 | orchestrator | 00:01:22.508 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.509065 | orchestrator | 00:01:22.509 STDOUT terraform:       + name = "testbed-volume-0-node-base"
2025-04-17 00:01:22.509188 | orchestrator | 00:01:22.509 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.509235 | orchestrator | 00:01:22.509 STDOUT terraform:       + size = 80
2025-04-17 00:01:22.509253 | orchestrator | 00:01:22.509 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.509267 | orchestrator | 00:01:22.509 STDOUT terraform:     }
2025-04-17 00:01:22.509285 | orchestrator | 00:01:22.509 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-04-17 00:01:22.509312 | orchestrator | 00:01:22.509 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-17 00:01:22.509326 | orchestrator | 00:01:22.509 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.509339 | orchestrator | 00:01:22.509 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.509355 | orchestrator | 00:01:22.509 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.509368 | orchestrator | 00:01:22.509 STDOUT terraform:       + image_id = (known after apply)
2025-04-17 00:01:22.509383 | orchestrator | 00:01:22.509 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.509398 | orchestrator | 00:01:22.509 STDOUT terraform:       + name = "testbed-volume-1-node-base"
2025-04-17 00:01:22.509452 | orchestrator | 00:01:22.509 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.509483 | orchestrator | 00:01:22.509 STDOUT terraform:       + size = 80
2025-04-17 00:01:22.509499 | orchestrator | 00:01:22.509 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.509513 | orchestrator | 00:01:22.509 STDOUT terraform:     }
2025-04-17 00:01:22.509529 | orchestrator | 00:01:22.509 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-04-17 00:01:22.509574 | orchestrator | 00:01:22.509 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-17 00:01:22.509591 | orchestrator | 00:01:22.509 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.509607 | orchestrator | 00:01:22.509 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.509623 | orchestrator | 00:01:22.509 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.509674 | orchestrator | 00:01:22.509 STDOUT terraform:       + image_id = (known after apply)
2025-04-17 00:01:22.509691 | orchestrator | 00:01:22.509 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.509732 | orchestrator | 00:01:22.509 STDOUT terraform:       + name = "testbed-volume-2-node-base"
2025-04-17 00:01:22.509748 | orchestrator | 00:01:22.509 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.509763 | orchestrator | 00:01:22.509 STDOUT terraform:       + size = 80
2025-04-17 00:01:22.509779 | orchestrator | 00:01:22.509 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.509794 | orchestrator | 00:01:22.509 STDOUT terraform:     }
2025-04-17 00:01:22.509851 | orchestrator | 00:01:22.509 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-04-17 00:01:22.509897 | orchestrator | 00:01:22.509 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-17 00:01:22.509914 | orchestrator | 00:01:22.509 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.509929 | orchestrator | 00:01:22.509 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.509945 | orchestrator | 00:01:22.509 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.509995 | orchestrator | 00:01:22.509 STDOUT terraform:       + image_id = (known after apply)
2025-04-17 00:01:22.510036 | orchestrator | 00:01:22.509 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.510056 | orchestrator | 00:01:22.509 STDOUT terraform:       + name = "testbed-volume-3-node-base"
2025-04-17 00:01:22.510099 | orchestrator | 00:01:22.510 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.510116 | orchestrator | 00:01:22.510 STDOUT terraform:       + size = 80
2025-04-17 00:01:22.510131 | orchestrator | 00:01:22.510 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.510147 | orchestrator | 00:01:22.510 STDOUT terraform:     }
2025-04-17 00:01:22.510199 | orchestrator | 00:01:22.510 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-04-17 00:01:22.510241 | orchestrator | 00:01:22.510 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-17 00:01:22.510268 | orchestrator | 00:01:22.510 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.510292 | orchestrator | 00:01:22.510 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.510308 | orchestrator | 00:01:22.510 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.510346 | orchestrator | 00:01:22.510 STDOUT terraform:       + image_id = (known after apply)
2025-04-17 00:01:22.510373 | orchestrator | 00:01:22.510 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.510414 | orchestrator | 00:01:22.510 STDOUT terraform:       + name = "testbed-volume-4-node-base"
2025-04-17 00:01:22.510448 | orchestrator | 00:01:22.510 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.510465 | orchestrator | 00:01:22.510 STDOUT terraform:       + size = 80
2025-04-17 00:01:22.510481 | orchestrator | 00:01:22.510 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.510531 | orchestrator | 00:01:22.510 STDOUT terraform:     }
2025-04-17 00:01:22.510548 | orchestrator | 00:01:22.510 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-04-17 00:01:22.510582 | orchestrator | 00:01:22.510 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-04-17 00:01:22.510599 | orchestrator | 00:01:22.510 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.510615 | orchestrator | 00:01:22.510 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.510656 | orchestrator | 00:01:22.510 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.510690 | orchestrator | 00:01:22.510 STDOUT terraform:       + image_id = (known after apply)
2025-04-17 00:01:22.510706 | orchestrator | 00:01:22.510 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.510752 | orchestrator | 00:01:22.510 STDOUT terraform:       + name = "testbed-volume-5-node-base"
2025-04-17 00:01:22.510779 | orchestrator | 00:01:22.510 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.510795 | orchestrator | 00:01:22.510 STDOUT terraform:       + size = 80
2025-04-17 00:01:22.510811 | orchestrator | 00:01:22.510 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.510827 | orchestrator | 00:01:22.510 STDOUT terraform:     }
2025-04-17 00:01:22.510869 | orchestrator | 00:01:22.510 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-04-17 00:01:22.510913 | orchestrator | 00:01:22.510 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-17 00:01:22.510940 | orchestrator | 00:01:22.510 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.510956 | orchestrator | 00:01:22.510 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.510993 | orchestrator | 00:01:22.510 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.511010 | orchestrator | 00:01:22.510 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.511055 | orchestrator | 00:01:22.511 STDOUT terraform:       + name = "testbed-volume-0-node-0"
2025-04-17 00:01:22.511087 | orchestrator | 00:01:22.511 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.511104 | orchestrator | 00:01:22.511 STDOUT terraform:       + size = 20
2025-04-17 00:01:22.511127 | orchestrator | 00:01:22.511 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.511171 | orchestrator | 00:01:22.511 STDOUT terraform:     }
2025-04-17 00:01:22.511189 | orchestrator | 00:01:22.511 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-04-17 00:01:22.511205 | orchestrator | 00:01:22.511 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-17 00:01:22.511245 | orchestrator | 00:01:22.511 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.511262 | orchestrator | 00:01:22.511 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.511277 | orchestrator | 00:01:22.511 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.511316 | orchestrator | 00:01:22.511 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.511352 | orchestrator | 00:01:22.511 STDOUT terraform:       + name = "testbed-volume-1-node-1"
2025-04-17 00:01:22.511387 | orchestrator | 00:01:22.511 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.511403 | orchestrator | 00:01:22.511 STDOUT terraform:       + size = 20
2025-04-17 00:01:22.511419 | orchestrator | 00:01:22.511 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.511469 | orchestrator | 00:01:22.511 STDOUT terraform:     }
2025-04-17 00:01:22.511486 | orchestrator | 00:01:22.511 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2025-04-17 00:01:22.511502 | orchestrator | 00:01:22.511 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-17 00:01:22.511540 | orchestrator | 00:01:22.511 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.511556 | orchestrator | 00:01:22.511 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.511590 | orchestrator | 00:01:22.511 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.511609 | orchestrator | 00:01:22.511 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.511652 | orchestrator | 00:01:22.511 STDOUT terraform:       + name = "testbed-volume-2-node-2"
2025-04-17 00:01:22.511670 | orchestrator | 00:01:22.511 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.511704 | orchestrator | 00:01:22.511 STDOUT terraform:       + size = 20
2025-04-17 00:01:22.511720 | orchestrator | 00:01:22.511 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.511771 | orchestrator | 00:01:22.511 STDOUT terraform:     }
2025-04-17 00:01:22.511802 | orchestrator | 00:01:22.511 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2025-04-17 00:01:22.511824 | orchestrator | 00:01:22.511 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-17 00:01:22.511850 | orchestrator | 00:01:22.511 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.511871 | orchestrator | 00:01:22.511 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.511896 | orchestrator | 00:01:22.511 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.511947 | orchestrator | 00:01:22.511 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.511991 | orchestrator | 00:01:22.511 STDOUT terraform:       + name = "testbed-volume-3-node-3"
2025-04-17 00:01:22.512013 | orchestrator | 00:01:22.511 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.512034 | orchestrator | 00:01:22.511 STDOUT terraform:       + size = 20
2025-04-17 00:01:22.512056 | orchestrator | 00:01:22.511 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.512103 | orchestrator | 00:01:22.511 STDOUT terraform:     }
2025-04-17 00:01:22.512132 | orchestrator | 00:01:22.511 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2025-04-17 00:01:22.512153 | orchestrator | 00:01:22.512 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-17 00:01:22.512179 | orchestrator | 00:01:22.512 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.512230 | orchestrator | 00:01:22.512 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.512257 | orchestrator | 00:01:22.512 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.512280 | orchestrator | 00:01:22.512 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.512309 | orchestrator | 00:01:22.512 STDOUT terraform:       + name = "testbed-volume-4-node-4"
2025-04-17 00:01:22.512333 | orchestrator | 00:01:22.512 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.512356 | orchestrator | 00:01:22.512 STDOUT terraform:       + size = 20
2025-04-17 00:01:22.512379 | orchestrator | 00:01:22.512 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.512401 | orchestrator | 00:01:22.512 STDOUT terraform:     }
2025-04-17 00:01:22.512423 | orchestrator | 00:01:22.512 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2025-04-17 00:01:22.512437 | orchestrator | 00:01:22.512 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-17 00:01:22.512449 | orchestrator | 00:01:22.512 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.512463 | orchestrator | 00:01:22.512 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.512479 | orchestrator | 00:01:22.512 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.512496 | orchestrator | 00:01:22.512 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.512512 | orchestrator | 00:01:22.512 STDOUT terraform:       + name = "testbed-volume-5-node-5"
2025-04-17 00:01:22.512538 | orchestrator | 00:01:22.512 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.512567 | orchestrator | 00:01:22.512 STDOUT terraform:       + size = 20
2025-04-17 00:01:22.512598 | orchestrator | 00:01:22.512 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.512657 | orchestrator | 00:01:22.512 STDOUT terraform:     }
2025-04-17 00:01:22.512683 | orchestrator | 00:01:22.512 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2025-04-17 00:01:22.512711 | orchestrator | 00:01:22.512 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-17 00:01:22.512744 | orchestrator | 00:01:22.512 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.512769 | orchestrator | 00:01:22.512 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.512782 | orchestrator | 00:01:22.512 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.512801 | orchestrator | 00:01:22.512 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.512824 | orchestrator | 00:01:22.512 STDOUT terraform:       + name = "testbed-volume-6-node-0"
2025-04-17 00:01:22.512846 | orchestrator | 00:01:22.512 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.512872 | orchestrator | 00:01:22.512 STDOUT terraform:       + size = 20
2025-04-17 00:01:22.512895 | orchestrator | 00:01:22.512 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.512917 | orchestrator | 00:01:22.512 STDOUT terraform:     }
2025-04-17 00:01:22.512948 | orchestrator | 00:01:22.512 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2025-04-17 00:01:22.512970 | orchestrator | 00:01:22.512 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-17 00:01:22.512989 | orchestrator | 00:01:22.512 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.513005 | orchestrator | 00:01:22.512 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.513019 | orchestrator | 00:01:22.512 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.513034 | orchestrator | 00:01:22.512 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.513050 | orchestrator | 00:01:22.513 STDOUT terraform:       + name = "testbed-volume-7-node-1"
2025-04-17 00:01:22.513109 | orchestrator | 00:01:22.513 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.513136 | orchestrator | 00:01:22.513 STDOUT terraform:       + size = 20
2025-04-17 00:01:22.513161 | orchestrator | 00:01:22.513 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.513190 | orchestrator | 00:01:22.513 STDOUT terraform:     }
2025-04-17 00:01:22.513216 | orchestrator | 00:01:22.513 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[8] will be created
2025-04-17 00:01:22.513246 | orchestrator | 00:01:22.513 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-17 00:01:22.513271 | orchestrator | 00:01:22.513 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.513310 | orchestrator | 00:01:22.513 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.513332 | orchestrator | 00:01:22.513 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.513359 | orchestrator | 00:01:22.513 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.513384 | orchestrator | 00:01:22.513 STDOUT terraform:       + name = "testbed-volume-8-node-2"
2025-04-17 00:01:22.513414 | orchestrator | 00:01:22.513 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.513438 | orchestrator | 00:01:22.513 STDOUT terraform:       + size = 20
2025-04-17 00:01:22.513461 | orchestrator | 00:01:22.513 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.513490 | orchestrator | 00:01:22.513 STDOUT terraform:     }
2025-04-17 00:01:22.513530 | orchestrator | 00:01:22.513 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[9] will be created
2025-04-17 00:01:22.513559 | orchestrator | 00:01:22.513 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-17 00:01:22.513584 | orchestrator | 00:01:22.513 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.513609 | orchestrator | 00:01:22.513 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.513640 | orchestrator | 00:01:22.513 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.513666 | orchestrator | 00:01:22.513 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.513691 | orchestrator | 00:01:22.513 STDOUT terraform:       + name = "testbed-volume-9-node-3"
2025-04-17 00:01:22.513722 | orchestrator | 00:01:22.513 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.513747 | orchestrator | 00:01:22.513 STDOUT terraform:       + size = 20
2025-04-17 00:01:22.513770 | orchestrator | 00:01:22.513 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.513791 | orchestrator | 00:01:22.513 STDOUT terraform:     }
2025-04-17 00:01:22.513821 | orchestrator | 00:01:22.513 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[10] will be created
2025-04-17 00:01:22.513846 | orchestrator | 00:01:22.513 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-17 00:01:22.513869 | orchestrator | 00:01:22.513 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.513894 | orchestrator | 00:01:22.513 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.513923 | orchestrator | 00:01:22.513 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.513953 | orchestrator | 00:01:22.513 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.513978 | orchestrator | 00:01:22.513 STDOUT terraform:       + name = "testbed-volume-10-node-4"
2025-04-17 00:01:22.514004 | orchestrator | 00:01:22.513 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.514134 | orchestrator | 00:01:22.513 STDOUT terraform:       + size = 20
2025-04-17 00:01:22.514159 | orchestrator | 00:01:22.513 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.514173 | orchestrator | 00:01:22.513 STDOUT terraform:     }
2025-04-17 00:01:22.514192 | orchestrator | 00:01:22.513 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[11] will be created
2025-04-17 00:01:22.514205 | orchestrator | 00:01:22.514 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-17 00:01:22.514218 | orchestrator | 00:01:22.514 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.514231 | orchestrator | 00:01:22.514 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.514243 | orchestrator | 00:01:22.514 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.514256 | orchestrator | 00:01:22.514 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.514272 | orchestrator | 00:01:22.514 STDOUT terraform:       + name = "testbed-volume-11-node-5"
2025-04-17 00:01:22.514336 | orchestrator | 00:01:22.514 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.514350 | orchestrator | 00:01:22.514 STDOUT terraform:       + size = 20
2025-04-17 00:01:22.514363 | orchestrator | 00:01:22.514 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.514377 | orchestrator | 00:01:22.514 STDOUT terraform:     }
2025-04-17 00:01:22.514396 | orchestrator | 00:01:22.514 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[12] will be created
2025-04-17 00:01:22.514434 | orchestrator | 00:01:22.514 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-17 00:01:22.514457 | orchestrator | 00:01:22.514 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.514478 | orchestrator | 00:01:22.514 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.514504 | orchestrator | 00:01:22.514 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.514527 | orchestrator | 00:01:22.514 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.514548 | orchestrator | 00:01:22.514 STDOUT terraform:       + name = "testbed-volume-12-node-0"
2025-04-17 00:01:22.514574 | orchestrator | 00:01:22.514 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.514595 | orchestrator | 00:01:22.514 STDOUT terraform:       + size = 20
2025-04-17 00:01:22.514617 | orchestrator | 00:01:22.514 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.514639 | orchestrator | 00:01:22.514 STDOUT terraform:     }
2025-04-17 00:01:22.514665 | orchestrator | 00:01:22.514 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[13] will be created
2025-04-17 00:01:22.514687 | orchestrator | 00:01:22.514 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-17 00:01:22.514709 | orchestrator | 00:01:22.514 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.514737 | orchestrator | 00:01:22.514 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.514760 | orchestrator | 00:01:22.514 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.514783 | orchestrator | 00:01:22.514 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.514810 | orchestrator | 00:01:22.514 STDOUT terraform:       + name = "testbed-volume-13-node-1"
2025-04-17 00:01:22.514834 | orchestrator | 00:01:22.514 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.514858 | orchestrator | 00:01:22.514 STDOUT terraform:       + size = 20
2025-04-17 00:01:22.514885 | orchestrator | 00:01:22.514 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.514927 | orchestrator | 00:01:22.514 STDOUT terraform:     }
2025-04-17 00:01:22.514956 | orchestrator | 00:01:22.514 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[14] will be created
2025-04-17 00:01:22.514979 | orchestrator | 00:01:22.514 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-17 00:01:22.515003 | orchestrator | 00:01:22.514 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.515024 | orchestrator | 00:01:22.514 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.515062 | orchestrator | 00:01:22.514 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.515109 | orchestrator | 00:01:22.515 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.515136 | orchestrator | 00:01:22.515 STDOUT terraform:       + name = "testbed-volume-14-node-2"
2025-04-17 00:01:22.515157 | orchestrator | 00:01:22.515 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.515181 | orchestrator | 00:01:22.515 STDOUT terraform:       + size = 20
2025-04-17 00:01:22.515218 | orchestrator | 00:01:22.515 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.515240 | orchestrator | 00:01:22.515 STDOUT terraform:     }
2025-04-17 00:01:22.515266 | orchestrator | 00:01:22.515 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[15] will be created
2025-04-17 00:01:22.515289 | orchestrator | 00:01:22.515 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-17 00:01:22.515315 | orchestrator | 00:01:22.515 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.515336 | orchestrator | 00:01:22.515 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.515364 | orchestrator | 00:01:22.515 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.515393 | orchestrator | 00:01:22.515 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.515421 | orchestrator | 00:01:22.515 STDOUT terraform:       + name = "testbed-volume-15-node-3"
2025-04-17 00:01:22.515445 | orchestrator | 00:01:22.515 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.515468 | orchestrator | 00:01:22.515 STDOUT terraform:       + size = 20
2025-04-17 00:01:22.515496 | orchestrator | 00:01:22.515 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.515540 | orchestrator | 00:01:22.515 STDOUT terraform:     }
2025-04-17 00:01:22.515565 | orchestrator | 00:01:22.515 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[16] will be created
2025-04-17 00:01:22.515594 | orchestrator | 00:01:22.515 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-17 00:01:22.515617 | orchestrator | 00:01:22.515 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.515640 | orchestrator | 00:01:22.515 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.515669 | orchestrator | 00:01:22.515 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.515692 | orchestrator | 00:01:22.515 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.515716 | orchestrator | 00:01:22.515 STDOUT terraform:       + name = "testbed-volume-16-node-4"
2025-04-17 00:01:22.515742 | orchestrator | 00:01:22.515 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.515812 | orchestrator | 00:01:22.515 STDOUT terraform:       + size = 20
2025-04-17 00:01:22.515839 | orchestrator | 00:01:22.515 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.515861 | orchestrator | 00:01:22.515 STDOUT terraform:     }
2025-04-17 00:01:22.515889 | orchestrator | 00:01:22.515 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[17] will be created
2025-04-17 00:01:22.515926 | orchestrator | 00:01:22.515 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-04-17 00:01:22.515947 | orchestrator | 00:01:22.515 STDOUT terraform:       + attachment = (known after apply)
2025-04-17 00:01:22.515970 | orchestrator | 00:01:22.515 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.515992 | orchestrator | 00:01:22.515 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.516019 | orchestrator | 00:01:22.515 STDOUT terraform:       + metadata = (known after apply)
2025-04-17 00:01:22.516098 | orchestrator | 00:01:22.515 STDOUT terraform:       + name = "testbed-volume-17-node-5"
2025-04-17 00:01:22.516125 | orchestrator | 00:01:22.515 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.516147 | orchestrator | 00:01:22.515 STDOUT terraform:       + size = 20
2025-04-17 00:01:22.516168 | orchestrator | 00:01:22.515 STDOUT terraform:       + volume_type = "ssd"
2025-04-17 00:01:22.516190 | orchestrator | 00:01:22.516 STDOUT terraform:     }
2025-04-17 00:01:22.516222 | orchestrator | 00:01:22.516 STDOUT terraform:   # openstack_compute_instance_v2.manager_server will be created
2025-04-17 00:01:22.516244 | orchestrator | 00:01:22.516 STDOUT terraform:   + resource "openstack_compute_instance_v2" "manager_server" {
2025-04-17 00:01:22.516266 | orchestrator | 00:01:22.516 STDOUT terraform:       + access_ip_v4 = (known after apply)
2025-04-17 00:01:22.516288 | orchestrator | 00:01:22.516 STDOUT terraform:       + access_ip_v6 = (known after apply)
2025-04-17 00:01:22.516308 | orchestrator | 00:01:22.516 STDOUT terraform:       + all_metadata = (known after apply)
2025-04-17 00:01:22.516330 | orchestrator | 00:01:22.516 STDOUT terraform:       + all_tags = (known after apply)
2025-04-17 00:01:22.516355 | orchestrator | 00:01:22.516 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.516377 | orchestrator | 00:01:22.516 STDOUT terraform:       + config_drive = true
2025-04-17 00:01:22.516397 | orchestrator | 00:01:22.516 STDOUT terraform:       + created = (known after apply)
2025-04-17 00:01:22.516418 | orchestrator | 00:01:22.516 STDOUT terraform:       + flavor_id = (known after apply)
2025-04-17 00:01:22.516439 | orchestrator | 00:01:22.516 STDOUT terraform:       + flavor_name = "OSISM-4V-16"
2025-04-17 00:01:22.516465 | orchestrator | 00:01:22.516 STDOUT terraform:       + force_delete = false
2025-04-17 00:01:22.516486 | orchestrator | 00:01:22.516 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.516508 | orchestrator | 00:01:22.516 STDOUT terraform:       + image_id = (known after apply)
2025-04-17 00:01:22.516529 | orchestrator | 00:01:22.516 STDOUT terraform:       + image_name = (known after apply)
2025-04-17 00:01:22.516557 | orchestrator | 00:01:22.516 STDOUT terraform:       + key_pair = "testbed"
2025-04-17 00:01:22.516579 | orchestrator | 00:01:22.516 STDOUT terraform:       + name = "testbed-manager"
2025-04-17 00:01:22.516600 | orchestrator | 00:01:22.516 STDOUT terraform:       + power_state = "active"
2025-04-17 00:01:22.516622 | orchestrator | 00:01:22.516 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.516649 | orchestrator | 00:01:22.516 STDOUT terraform:       + security_groups = (known after apply)
2025-04-17 00:01:22.516687 | orchestrator | 00:01:22.516 STDOUT terraform:       + stop_before_destroy = false
2025-04-17 00:01:22.516709 | orchestrator | 00:01:22.516 STDOUT terraform:       + updated = (known after apply)
2025-04-17 00:01:22.516742 | orchestrator | 00:01:22.516 STDOUT terraform:       + user_data = (known after apply)
2025-04-17 00:01:22.516765 | orchestrator | 00:01:22.516 STDOUT terraform:       + block_device {
2025-04-17 00:01:22.516790 | orchestrator | 00:01:22.516 STDOUT terraform:           + boot_index = 0
2025-04-17 00:01:22.516812 | orchestrator | 00:01:22.516 STDOUT terraform:           + delete_on_termination = false
2025-04-17 00:01:22.516838 | orchestrator | 00:01:22.516 STDOUT terraform:           + destination_type = "volume"
2025-04-17 00:01:22.516859 | orchestrator | 00:01:22.516 STDOUT terraform:           + multiattach = false
2025-04-17 00:01:22.516880 | orchestrator | 00:01:22.516 STDOUT terraform:           + source_type = "volume"
2025-04-17 00:01:22.516902 | orchestrator | 00:01:22.516 STDOUT terraform:           + uuid = (known after apply)
2025-04-17 00:01:22.516933 | orchestrator | 00:01:22.516 STDOUT terraform:         }
2025-04-17 00:01:22.516962 | orchestrator | 00:01:22.516 STDOUT terraform:       + network {
2025-04-17 00:01:22.516984 | orchestrator | 00:01:22.516 STDOUT terraform:           + access_network = false
2025-04-17 00:01:22.517005 | orchestrator | 00:01:22.516 STDOUT terraform:           + fixed_ip_v4 = (known after apply)
2025-04-17 00:01:22.517026 | orchestrator | 00:01:22.516 STDOUT terraform:           + fixed_ip_v6 = (known after apply)
2025-04-17 00:01:22.517048 | orchestrator | 00:01:22.516 STDOUT terraform:           + mac = (known after apply)
2025-04-17 00:01:22.517135 | orchestrator | 00:01:22.516 STDOUT terraform:           + name = (known after apply)
2025-04-17 00:01:22.517183 | orchestrator | 00:01:22.516 STDOUT terraform:           + port = (known after apply)
2025-04-17 00:01:22.517207 | orchestrator | 00:01:22.516 STDOUT terraform:           + uuid = (known after apply)
2025-04-17 00:01:22.517228 | orchestrator | 00:01:22.517 STDOUT terraform:         }
2025-04-17 00:01:22.517250 | orchestrator | 00:01:22.517 STDOUT terraform:     }
2025-04-17 00:01:22.517271 | orchestrator | 00:01:22.517 STDOUT terraform:   # openstack_compute_instance_v2.node_server[0] will be created
2025-04-17 00:01:22.517293 | orchestrator | 00:01:22.517 STDOUT terraform:   + resource "openstack_compute_instance_v2" "node_server" {
2025-04-17 00:01:22.517320 | orchestrator | 00:01:22.517 STDOUT terraform:       + access_ip_v4 = (known after apply)
2025-04-17 00:01:22.517342 | orchestrator | 00:01:22.517 STDOUT terraform:       + access_ip_v6 = (known after apply)
2025-04-17 00:01:22.517364 | orchestrator | 00:01:22.517 STDOUT terraform:       + all_metadata = (known after apply)
2025-04-17 00:01:22.517385 | orchestrator | 00:01:22.517 STDOUT terraform:       + all_tags = (known after apply)
2025-04-17 00:01:22.517409 | orchestrator | 00:01:22.517 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.517432 | orchestrator | 00:01:22.517 STDOUT terraform:       + config_drive = true
2025-04-17 00:01:22.517460 | orchestrator | 00:01:22.517 STDOUT terraform:       + created = (known after apply)
2025-04-17 00:01:22.517502 | orchestrator | 00:01:22.517 STDOUT terraform:       + flavor_id = (known after apply)
2025-04-17 00:01:22.517525 | orchestrator | 00:01:22.517 STDOUT terraform:       + flavor_name = "OSISM-8V-32"
2025-04-17 00:01:22.517545 | orchestrator | 00:01:22.517 STDOUT terraform:       + force_delete = false
2025-04-17 00:01:22.517564 | orchestrator | 00:01:22.517 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.517581 | orchestrator | 00:01:22.517 STDOUT terraform:       + image_id = (known after apply)
2025-04-17 00:01:22.517602 | orchestrator | 00:01:22.517 STDOUT terraform:       + image_name = (known after apply)
2025-04-17 00:01:22.517621 | orchestrator | 00:01:22.517 STDOUT terraform:       + key_pair = "testbed"
2025-04-17 00:01:22.517639 | orchestrator | 00:01:22.517 STDOUT terraform:       + name = "testbed-node-0"
2025-04-17 00:01:22.517657 | orchestrator | 00:01:22.517 STDOUT terraform:       + power_state = "active"
2025-04-17 00:01:22.517674 | orchestrator | 00:01:22.517 STDOUT terraform:       + region = (known after apply)
2025-04-17 00:01:22.517697 | orchestrator | 00:01:22.517 STDOUT terraform:       + security_groups = (known after apply)
2025-04-17 00:01:22.517718 | orchestrator | 00:01:22.517 STDOUT terraform:       + stop_before_destroy = false
2025-04-17 00:01:22.517737 | orchestrator | 00:01:22.517 STDOUT terraform:       + updated = (known after apply)
2025-04-17 00:01:22.517763 | orchestrator | 00:01:22.517 STDOUT terraform:       + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-04-17 00:01:22.517784 | orchestrator | 00:01:22.517 STDOUT terraform:       + block_device {
2025-04-17 00:01:22.517804 | orchestrator | 00:01:22.517 STDOUT terraform:           + boot_index = 0
2025-04-17 00:01:22.517838 | orchestrator | 00:01:22.517 STDOUT terraform:           + delete_on_termination = false
2025-04-17 00:01:22.517858 | orchestrator | 00:01:22.517 STDOUT terraform:           + destination_type = "volume"
2025-04-17 00:01:22.517880 | orchestrator | 00:01:22.517 STDOUT terraform:           + multiattach = false
2025-04-17 00:01:22.517906 | orchestrator | 00:01:22.517 STDOUT terraform:           + source_type = "volume"
2025-04-17 00:01:22.517926 | orchestrator | 00:01:22.517 STDOUT terraform:           + uuid = (known after apply)
2025-04-17 00:01:22.517946 | orchestrator | 00:01:22.517 STDOUT terraform:         }
2025-04-17 00:01:22.517966 | orchestrator | 00:01:22.517 STDOUT terraform:       + network {
2025-04-17 00:01:22.517991 | orchestrator | 00:01:22.517 STDOUT terraform:           + access_network = false
2025-04-17 00:01:22.518010 | orchestrator | 00:01:22.517 STDOUT terraform:           + fixed_ip_v4 = (known after apply)
2025-04-17 00:01:22.518065 | orchestrator | 00:01:22.517 STDOUT terraform:           + fixed_ip_v6 = (known after apply)
2025-04-17 00:01:22.518114 | orchestrator | 00:01:22.517 STDOUT terraform:           + mac = (known after apply)
2025-04-17 00:01:22.518135 | orchestrator | 00:01:22.517 STDOUT terraform:           + name = (known after apply)
2025-04-17 00:01:22.518162 | orchestrator | 00:01:22.518 STDOUT terraform:           + port = (known after apply)
2025-04-17 00:01:22.518182 | orchestrator | 00:01:22.518 STDOUT terraform:           + uuid = (known after apply)
2025-04-17 00:01:22.518218 | orchestrator | 00:01:22.518 STDOUT terraform:         }
2025-04-17 00:01:22.518239 | orchestrator | 00:01:22.518 STDOUT terraform:     }
2025-04-17 00:01:22.518259 | orchestrator | 00:01:22.518 STDOUT terraform:   # openstack_compute_instance_v2.node_server[1] will be created
2025-04-17 00:01:22.518276 | orchestrator | 00:01:22.518 STDOUT terraform:   + resource "openstack_compute_instance_v2" "node_server" {
2025-04-17 00:01:22.518300 | orchestrator | 00:01:22.518 STDOUT terraform:       + access_ip_v4 = (known after apply)
2025-04-17 00:01:22.518359 | orchestrator | 00:01:22.518 STDOUT terraform:       + access_ip_v6 = (known after apply)
2025-04-17 00:01:22.518380 | orchestrator | 00:01:22.518 STDOUT terraform:       + all_metadata = (known after apply)
2025-04-17 00:01:22.518402 | orchestrator | 00:01:22.518 STDOUT terraform:       + all_tags = (known after apply)
2025-04-17 00:01:22.518450 | orchestrator | 00:01:22.518 STDOUT terraform:       + availability_zone = "nova"
2025-04-17 00:01:22.518470 | orchestrator | 00:01:22.518 STDOUT terraform:       + config_drive = true
2025-04-17 00:01:22.518487 | orchestrator | 00:01:22.518 STDOUT terraform:       + created = (known after apply)
2025-04-17 00:01:22.518507 | orchestrator | 00:01:22.518 STDOUT terraform:       + flavor_id = (known after apply)
2025-04-17 00:01:22.518560 | orchestrator | 00:01:22.518 STDOUT terraform:       + flavor_name = "OSISM-8V-32"
2025-04-17 00:01:22.518580 | orchestrator | 00:01:22.518 STDOUT terraform:       + force_delete = false
2025-04-17 00:01:22.518597 | orchestrator | 00:01:22.518 STDOUT terraform:       + id = (known after apply)
2025-04-17 00:01:22.518620 | orchestrator | 00:01:22.518 STDOUT terraform:       + image_id = (known after apply)
2025-04-17 00:01:22.518637 | orchestrator | 00:01:22.518 STDOUT terraform:       + image_name = (known after apply)
2025-04-17 00:01:22.518655 | orchestrator | 00:01:22.518 STDOUT terraform:       + key_pair = "testbed"
2025-04-17 00:01:22.518674 | orchestrator | 00:01:22.518 STDOUT terraform:       + name = "testbed-node-1"
2025-04-17 00:01:22.518696 | orchestrator | 00:01:22.518 STDOUT terraform:       + power_state = "active"
2025-04-17 00:01:22.518714
| orchestrator | 00:01:22.518 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.518732 | orchestrator | 00:01:22.518 STDOUT terraform:  + security_groups = (known after apply) 2025-04-17 00:01:22.518754 | orchestrator | 00:01:22.518 STDOUT terraform:  + stop_before_destroy = false 2025-04-17 00:01:22.518827 | orchestrator | 00:01:22.518 STDOUT terraform:  + updated = (known after apply) 2025-04-17 00:01:22.518851 | orchestrator | 00:01:22.518 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-17 00:01:22.518870 | orchestrator | 00:01:22.518 STDOUT terraform:  + block_device { 2025-04-17 00:01:22.518890 | orchestrator | 00:01:22.518 STDOUT terraform:  + boot_index = 0 2025-04-17 00:01:22.518914 | orchestrator | 00:01:22.518 STDOUT terraform:  + delete_on_termination = false 2025-04-17 00:01:22.518933 | orchestrator | 00:01:22.518 STDOUT terraform:  + destination_type = "volume" 2025-04-17 00:01:22.518951 | orchestrator | 00:01:22.518 STDOUT terraform:  + multiattach = false 2025-04-17 00:01:22.518986 | orchestrator | 00:01:22.518 STDOUT terraform:  + source_type = "volume" 2025-04-17 00:01:22.519008 | orchestrator | 00:01:22.518 STDOUT terraform:  + uuid = (known after apply) 2025-04-17 00:01:22.519026 | orchestrator | 00:01:22.518 STDOUT terraform:  } 2025-04-17 00:01:22.519044 | orchestrator | 00:01:22.518 STDOUT terraform:  + network { 2025-04-17 00:01:22.519087 | orchestrator | 00:01:22.518 STDOUT terraform:  + access_network = false 2025-04-17 00:01:22.519107 | orchestrator | 00:01:22.518 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-17 00:01:22.519125 | orchestrator | 00:01:22.519 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-17 00:01:22.519146 | orchestrator | 00:01:22.519 STDOUT terraform:  + mac = (known after apply) 2025-04-17 00:01:22.519181 | orchestrator | 00:01:22.519 STDOUT terraform:  + name = (known after apply) 2025-04-17 00:01:22.519202 | orchestrator | 00:01:22.519 STDOUT terraform:  + port = (known after apply) 2025-04-17 00:01:22.519224 | orchestrator | 00:01:22.519 STDOUT terraform:  + uuid = (known after apply) 2025-04-17 00:01:22.519242 | orchestrator | 00:01:22.519 STDOUT terraform:  } 2025-04-17 00:01:22.519259 | orchestrator | 00:01:22.519 STDOUT terraform:  } 2025-04-17 00:01:22.519281 | orchestrator | 00:01:22.519 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-04-17 00:01:22.519299 | orchestrator | 00:01:22.519 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-17 00:01:22.519318 | orchestrator | 00:01:22.519 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-17 00:01:22.519341 | orchestrator | 00:01:22.519 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-17 00:01:22.519370 | orchestrator | 00:01:22.519 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-17 00:01:22.519392 | orchestrator | 00:01:22.519 STDOUT terraform:  + all_tags = (known after apply) 2025-04-17 00:01:22.519414 | orchestrator | 00:01:22.519 STDOUT terraform:  + availability_zone = "nova" 2025-04-17 00:01:22.519435 | orchestrator | 00:01:22.519 STDOUT terraform:  + config_drive = true 2025-04-17 00:01:22.519458 | orchestrator | 00:01:22.519 STDOUT terraform:  + created = (known after apply) 2025-04-17 00:01:22.519508 | orchestrator | 00:01:22.519 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-17 00:01:22.519530 | orchestrator | 00:01:22.519 STDOUT terraform:  + flavor_name = 
"OSISM-8V-32" 2025-04-17 00:01:22.519579 | orchestrator | 00:01:22.519 STDOUT terraform:  + force_delete = false 2025-04-17 00:01:22.519605 | orchestrator | 00:01:22.519 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.519625 | orchestrator | 00:01:22.519 STDOUT terraform:  + image_id = (known after apply) 2025-04-17 00:01:22.519647 | orchestrator | 00:01:22.519 STDOUT terraform:  + image_name = (known after apply) 2025-04-17 00:01:22.519710 | orchestrator | 00:01:22.519 STDOUT terraform:  + key_pair = "testbed" 2025-04-17 00:01:22.519737 | orchestrator | 00:01:22.519 STDOUT terraform:  + name = "testbed-node-2" 2025-04-17 00:01:22.519769 | orchestrator | 00:01:22.519 STDOUT terraform:  + power_state = "active" 2025-04-17 00:01:22.519787 | orchestrator | 00:01:22.519 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.519810 | orchestrator | 00:01:22.519 STDOUT terraform:  + security_groups = (known after apply) 2025-04-17 00:01:22.519878 | orchestrator | 00:01:22.519 STDOUT terraform:  + stop_before_destroy = false 2025-04-17 00:01:22.519899 | orchestrator | 00:01:22.519 STDOUT terraform:  + updated = (known after apply) 2025-04-17 00:01:22.519921 | orchestrator | 00:01:22.519 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-17 00:01:22.519941 | orchestrator | 00:01:22.519 STDOUT terraform:  + block_device { 2025-04-17 00:01:22.519961 | orchestrator | 00:01:22.519 STDOUT terraform:  + boot_index = 0 2025-04-17 00:01:22.519981 | orchestrator | 00:01:22.519 STDOUT terraform:  + delete_on_termination = false 2025-04-17 00:01:22.520004 | orchestrator | 00:01:22.519 STDOUT terraform:  + destination_type = "volume" 2025-04-17 00:01:22.520024 | orchestrator | 00:01:22.519 STDOUT terraform:  + multiattach = false 2025-04-17 00:01:22.520041 | orchestrator | 00:01:22.519 STDOUT terraform:  + source_type = "volume" 2025-04-17 00:01:22.520064 | orchestrator | 00:01:22.519 STDOUT terraform:  + uuid = (known after apply) 2025-04-17 00:01:22.520136 | orchestrator | 00:01:22.520 STDOUT terraform:  } 2025-04-17 00:01:22.520154 | orchestrator | 00:01:22.520 STDOUT terraform:  + network { 2025-04-17 00:01:22.520171 | orchestrator | 00:01:22.520 STDOUT terraform:  + access_network = false 2025-04-17 00:01:22.520193 | orchestrator | 00:01:22.520 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-17 00:01:22.520212 | orchestrator | 00:01:22.520 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-17 00:01:22.520230 | orchestrator | 00:01:22.520 STDOUT terraform:  + mac = (known after apply) 2025-04-17 00:01:22.520247 | orchestrator | 00:01:22.520 STDOUT terraform:  + name = (known after apply) 2025-04-17 00:01:22.520267 | orchestrator | 00:01:22.520 STDOUT terraform:  + port = (known after apply) 2025-04-17 00:01:22.520336 | orchestrator | 00:01:22.520 STDOUT terraform:  + uuid = (known after apply) 2025-04-17 00:01:22.520361 | orchestrator | 00:01:22.520 STDOUT terraform:  } 2025-04-17 00:01:22.520381 | orchestrator | 00:01:22.520 STDOUT terraform:  } 2025-04-17 00:01:22.520406 | orchestrator | 00:01:22.520 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-04-17 00:01:22.520458 | orchestrator | 00:01:22.520 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-17 00:01:22.520478 | orchestrator | 00:01:22.520 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-17 00:01:22.520496 | orchestrator | 00:01:22.520 STDOUT terraform:  + 
access_ip_v6 = (known after apply) 2025-04-17 00:01:22.520518 | orchestrator | 00:01:22.520 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-17 00:01:22.520535 | orchestrator | 00:01:22.520 STDOUT terraform:  + all_tags = (known after apply) 2025-04-17 00:01:22.520573 | orchestrator | 00:01:22.520 STDOUT terraform:  + availability_zone = "nova" 2025-04-17 00:01:22.520591 | orchestrator | 00:01:22.520 STDOUT terraform:  + config_drive = true 2025-04-17 00:01:22.520614 | orchestrator | 00:01:22.520 STDOUT terraform:  + created = (known after apply) 2025-04-17 00:01:22.520662 | orchestrator | 00:01:22.520 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-17 00:01:22.520684 | orchestrator | 00:01:22.520 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-17 00:01:22.520700 | orchestrator | 00:01:22.520 STDOUT terraform:  + force_delete = false 2025-04-17 00:01:22.520719 | orchestrator | 00:01:22.520 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.520769 | orchestrator | 00:01:22.520 STDOUT terraform:  + image_id = (known after apply) 2025-04-17 00:01:22.520786 | orchestrator | 00:01:22.520 STDOUT terraform:  + image_name = (known after apply) 2025-04-17 00:01:22.520801 | orchestrator | 00:01:22.520 STDOUT terraform:  + key_pair = "testbed" 2025-04-17 00:01:22.520820 | orchestrator | 00:01:22.520 STDOUT terraform:  + name = "testbed-node-3" 2025-04-17 00:01:22.520834 | orchestrator | 00:01:22.520 STDOUT terraform:  + power_state = "active" 2025-04-17 00:01:22.520849 | orchestrator | 00:01:22.520 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.520866 | orchestrator | 00:01:22.520 STDOUT terraform:  + security_groups = (known after apply) 2025-04-17 00:01:22.520881 | orchestrator | 00:01:22.520 STDOUT terraform:  + stop_before_destroy = false 2025-04-17 00:01:22.520899 | orchestrator | 00:01:22.520 STDOUT terraform:  + updated = (known after apply) 2025-04-17 00:01:22.520945 | orchestrator | 00:01:22.520 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-17 00:01:22.520961 | orchestrator | 00:01:22.520 STDOUT terraform:  + block_device { 2025-04-17 00:01:22.520979 | orchestrator | 00:01:22.520 STDOUT terraform:  + boot_index = 0 2025-04-17 00:01:22.521020 | orchestrator | 00:01:22.520 STDOUT terraform:  + delete_on_termination = false 2025-04-17 00:01:22.521039 | orchestrator | 00:01:22.520 STDOUT terraform:  + destination_type = "volume" 2025-04-17 00:01:22.521053 | orchestrator | 00:01:22.521 STDOUT terraform:  + multiattach = false 2025-04-17 00:01:22.521090 | orchestrator | 00:01:22.521 STDOUT terraform:  + source_type = "volume" 2025-04-17 00:01:22.521109 | orchestrator | 00:01:22.521 STDOUT terraform:  + uuid = (known after apply) 2025-04-17 00:01:22.521125 | orchestrator | 00:01:22.521 STDOUT terraform:  } 2025-04-17 00:01:22.521144 | orchestrator | 00:01:22.521 STDOUT terraform:  + network { 2025-04-17 00:01:22.521159 | orchestrator | 00:01:22.521 STDOUT terraform:  + access_network = false 2025-04-17 00:01:22.521177 | orchestrator | 00:01:22.521 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-17 00:01:22.521195 | orchestrator | 00:01:22.521 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-17 00:01:22.521213 | orchestrator | 00:01:22.521 STDOUT terraform:  + mac = (known after apply) 2025-04-17 00:01:22.521241 | orchestrator | 00:01:22.521 STDOUT terraform:  + name = (known after apply) 2025-04-17 00:01:22.521292 | orchestrator | 00:01:22.521 STDOUT 
terraform:  + port = (known after apply) 2025-04-17 00:01:22.521309 | orchestrator | 00:01:22.521 STDOUT terraform:  + uuid = (known after apply) 2025-04-17 00:01:22.521326 | orchestrator | 00:01:22.521 STDOUT terraform:  } 2025-04-17 00:01:22.521368 | orchestrator | 00:01:22.521 STDOUT terraform:  } 2025-04-17 00:01:22.521388 | orchestrator | 00:01:22.521 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-04-17 00:01:22.521441 | orchestrator | 00:01:22.521 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-17 00:01:22.521461 | orchestrator | 00:01:22.521 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-17 00:01:22.521503 | orchestrator | 00:01:22.521 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-17 00:01:22.521523 | orchestrator | 00:01:22.521 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-17 00:01:22.521538 | orchestrator | 00:01:22.521 STDOUT terraform:  + all_tags = (known after apply) 2025-04-17 00:01:22.521555 | orchestrator | 00:01:22.521 STDOUT terraform:  + availability_zone = "nova" 2025-04-17 00:01:22.521595 | orchestrator | 00:01:22.521 STDOUT terraform:  + config_drive = true 2025-04-17 00:01:22.521616 | orchestrator | 00:01:22.521 STDOUT terraform:  + created = (known after apply) 2025-04-17 00:01:22.521645 | orchestrator | 00:01:22.521 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-17 00:01:22.521664 | orchestrator | 00:01:22.521 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-17 00:01:22.521705 | orchestrator | 00:01:22.521 STDOUT terraform:  + force_delete = false 2025-04-17 00:01:22.521726 | orchestrator | 00:01:22.521 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.521768 | orchestrator | 00:01:22.521 STDOUT terraform:  + image_id = (known after apply) 2025-04-17 00:01:22.521790 | orchestrator | 00:01:22.521 STDOUT terraform:  + image_name = (known after apply) 2025-04-17 00:01:22.521805 | orchestrator | 00:01:22.521 STDOUT terraform:  + key_pair = "testbed" 2025-04-17 00:01:22.521825 | orchestrator | 00:01:22.521 STDOUT terraform:  + name = "testbed-node-4" 2025-04-17 00:01:22.521869 | orchestrator | 00:01:22.521 STDOUT terraform:  + power_state = "active" 2025-04-17 00:01:22.521892 | orchestrator | 00:01:22.521 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.521910 | orchestrator | 00:01:22.521 STDOUT terraform:  + security_groups = (known after apply) 2025-04-17 00:01:22.521929 | orchestrator | 00:01:22.521 STDOUT terraform:  + stop_before_destroy = false 2025-04-17 00:01:22.521951 | orchestrator | 00:01:22.521 STDOUT terraform:  + updated = (known after apply) 2025-04-17 00:01:22.521995 | orchestrator | 00:01:22.521 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-17 00:01:22.522034 | orchestrator | 00:01:22.521 STDOUT terraform:  + block_device { 2025-04-17 00:01:22.522059 | orchestrator | 00:01:22.521 STDOUT terraform:  + boot_index = 0 2025-04-17 00:01:22.522105 | orchestrator | 00:01:22.522 STDOUT terraform:  + delete_on_termination = false 2025-04-17 00:01:22.522127 | orchestrator | 00:01:22.522 STDOUT terraform:  + destination_type = "volume" 2025-04-17 00:01:22.522144 | orchestrator | 00:01:22.522 STDOUT terraform:  + multiattach = false 2025-04-17 00:01:22.522164 | orchestrator | 00:01:22.522 STDOUT terraform:  + source_type = "volume" 2025-04-17 00:01:22.522185 | orchestrator | 00:01:22.522 STDOUT terraform:  + uuid = (known after apply) 
2025-04-17 00:01:22.522202 | orchestrator | 00:01:22.522 STDOUT terraform:  } 2025-04-17 00:01:22.522222 | orchestrator | 00:01:22.522 STDOUT terraform:  + network { 2025-04-17 00:01:22.522238 | orchestrator | 00:01:22.522 STDOUT terraform:  + access_network = false 2025-04-17 00:01:22.522266 | orchestrator | 00:01:22.522 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-17 00:01:22.522285 | orchestrator | 00:01:22.522 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-17 00:01:22.522305 | orchestrator | 00:01:22.522 STDOUT terraform:  + mac = (known after apply) 2025-04-17 00:01:22.522325 | orchestrator | 00:01:22.522 STDOUT terraform:  + name = (known after apply) 2025-04-17 00:01:22.522345 | orchestrator | 00:01:22.522 STDOUT terraform:  + port = (known after apply) 2025-04-17 00:01:22.522397 | orchestrator | 00:01:22.522 STDOUT terraform:  + uuid = (known after apply) 2025-04-17 00:01:22.522450 | orchestrator | 00:01:22.522 STDOUT terraform:  } 2025-04-17 00:01:22.522466 | orchestrator | 00:01:22.522 STDOUT terraform:  } 2025-04-17 00:01:22.522485 | orchestrator | 00:01:22.522 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-04-17 00:01:22.522501 | orchestrator | 00:01:22.522 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-17 00:01:22.522521 | orchestrator | 00:01:22.522 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-17 00:01:22.522540 | orchestrator | 00:01:22.522 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-17 00:01:22.522594 | orchestrator | 00:01:22.522 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-17 00:01:22.522644 | orchestrator | 00:01:22.522 STDOUT terraform:  + all_tags = (known after apply) 2025-04-17 00:01:22.522663 | orchestrator | 00:01:22.522 STDOUT terraform:  + availability_zone = "nova" 2025-04-17 00:01:22.522719 | orchestrator | 00:01:22.522 STDOUT terraform:  + config_drive = true 2025-04-17 00:01:22.522736 | orchestrator | 00:01:22.522 STDOUT terraform:  + created = (known after apply) 2025-04-17 00:01:22.522756 | orchestrator | 00:01:22.522 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-17 00:01:22.522793 | orchestrator | 00:01:22.522 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-17 00:01:22.522811 | orchestrator | 00:01:22.522 STDOUT terraform:  + force_delete = false 2025-04-17 00:01:22.522830 | orchestrator | 00:01:22.522 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.522847 | orchestrator | 00:01:22.522 STDOUT terraform:  + image_id = (known after apply) 2025-04-17 00:01:22.522876 | orchestrator | 00:01:22.522 STDOUT terraform:  + image_name = (known after apply) 2025-04-17 00:01:22.522893 | orchestrator | 00:01:22.522 STDOUT terraform:  + key_pair = "testbed" 2025-04-17 00:01:22.522913 | orchestrator | 00:01:22.522 STDOUT terraform:  + name = "testbed-node-5" 2025-04-17 00:01:22.522959 | orchestrator | 00:01:22.522 STDOUT terraform:  + power_state = "active" 2025-04-17 00:01:22.522978 | orchestrator | 00:01:22.522 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.522994 | orchestrator | 00:01:22.522 STDOUT terraform:  + security_groups = (known after apply) 2025-04-17 00:01:22.523012 | orchestrator | 00:01:22.522 STDOUT terraform:  + stop_before_destroy = false 2025-04-17 00:01:22.523031 | orchestrator | 00:01:22.522 STDOUT terraform:  + updated = (known after apply) 2025-04-17 00:01:22.523141 | orchestrator | 00:01:22.523 STDOUT terraform:  + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-17 00:01:22.523163 | orchestrator | 00:01:22.523 STDOUT terraform:  + block_device { 2025-04-17 00:01:22.523180 | orchestrator | 00:01:22.523 STDOUT terraform:  + boot_index = 0 2025-04-17 00:01:22.523198 | orchestrator | 00:01:22.523 STDOUT terraform:  + delete_on_termination = false 2025-04-17 00:01:22.523216 | orchestrator | 00:01:22.523 STDOUT terraform:  + destination_type = "volume" 2025-04-17 00:01:22.523256 | orchestrator | 00:01:22.523 STDOUT terraform:  + multiattach = false 2025-04-17 00:01:22.523271 | orchestrator | 00:01:22.523 STDOUT terraform:  + source_type = "volume" 2025-04-17 00:01:22.523289 | orchestrator | 00:01:22.523 STDOUT terraform:  + uuid = (known after apply) 2025-04-17 00:01:22.523304 | orchestrator | 00:01:22.523 STDOUT terraform:  } 2025-04-17 00:01:22.523318 | orchestrator | 00:01:22.523 STDOUT terraform:  + network { 2025-04-17 00:01:22.523333 | orchestrator | 00:01:22.523 STDOUT terraform:  + access_network = false 2025-04-17 00:01:22.523351 | orchestrator | 00:01:22.523 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-17 00:01:22.523399 | orchestrator | 00:01:22.523 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-17 00:01:22.523416 | orchestrator | 00:01:22.523 STDOUT terraform:  + mac = (known after apply) 2025-04-17 00:01:22.523436 | orchestrator | 00:01:22.523 STDOUT terraform:  + name = (known after apply) 2025-04-17 00:01:22.523452 | orchestrator | 00:01:22.523 STDOUT terraform:  + port = (known after apply) 2025-04-17 00:01:22.523468 | orchestrator | 00:01:22.523 STDOUT terraform:  + uuid = (known after apply) 2025-04-17 00:01:22.523487 | orchestrator | 00:01:22.523 STDOUT terraform:  } 2025-04-17 00:01:22.523538 | orchestrator | 00:01:22.523 STDOUT terraform:  } 2025-04-17 00:01:22.523560 | orchestrator | 00:01:22.523 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-04-17 00:01:22.523579 | orchestrator | 00:01:22.523 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-04-17 00:01:22.523595 | orchestrator | 00:01:22.523 STDOUT terraform:  + fingerprint = (known after apply) 2025-04-17 00:01:22.523610 | orchestrator | 00:01:22.523 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.523637 | orchestrator | 00:01:22.523 STDOUT terraform:  + name = "testbed" 2025-04-17 00:01:22.523653 | orchestrator | 00:01:22.523 STDOUT terraform:  + private_key = (sensitive value) 2025-04-17 00:01:22.523669 | orchestrator | 00:01:22.523 STDOUT terraform:  + public_key = (known after apply) 2025-04-17 00:01:22.523688 | orchestrator | 00:01:22.523 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.523752 | orchestrator | 00:01:22.523 STDOUT terraform:  + user_id = (known after apply) 2025-04-17 00:01:22.523768 | orchestrator | 00:01:22.523 STDOUT terraform:  } 2025-04-17 00:01:22.523787 | orchestrator | 00:01:22.523 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-04-17 00:01:22.523803 | orchestrator | 00:01:22.523 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-17 00:01:22.523821 | orchestrator | 00:01:22.523 STDOUT terraform:  + device = (known after apply) 2025-04-17 00:01:22.523839 | orchestrator | 00:01:22.523 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.523855 | orchestrator | 00:01:22.523 STDOUT terraform:  + instance_id = (known after apply) 2025-04-17 
00:01:22.523899 | orchestrator | 00:01:22.523 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.523916 | orchestrator | 00:01:22.523 STDOUT terraform:  + volume_id = (known after apply) 2025-04-17 00:01:22.523934 | orchestrator | 00:01:22.523 STDOUT terraform:  } 2025-04-17 00:01:22.523987 | orchestrator | 00:01:22.523 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-04-17 00:01:22.524008 | orchestrator | 00:01:22.523 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-17 00:01:22.524028 | orchestrator | 00:01:22.523 STDOUT terraform:  + device = (known after apply) 2025-04-17 00:01:22.524085 | orchestrator | 00:01:22.524 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.524112 | orchestrator | 00:01:22.524 STDOUT terraform:  + instance_id = (known after apply) 2025-04-17 00:01:22.524131 | orchestrator | 00:01:22.524 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.524151 | orchestrator | 00:01:22.524 STDOUT terraform:  + volume_id = (known after apply) 2025-04-17 00:01:22.524170 | orchestrator | 00:01:22.524 STDOUT terraform:  } 2025-04-17 00:01:22.524189 | orchestrator | 00:01:22.524 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-04-17 00:01:22.524254 | orchestrator | 00:01:22.524 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-17 00:01:22.524273 | orchestrator | 00:01:22.524 STDOUT terraform:  + device = (known after apply) 2025-04-17 00:01:22.524291 | orchestrator | 00:01:22.524 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.524308 | orchestrator | 00:01:22.524 STDOUT terraform:  + instance_id = (known after apply) 2025-04-17 00:01:22.524352 | orchestrator | 00:01:22.524 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.524368 | orchestrator | 00:01:22.524 STDOUT terraform:  + volume_id = (known after apply) 2025-04-17 00:01:22.524394 | orchestrator | 00:01:22.524 STDOUT terraform:  } 2025-04-17 00:01:22.524412 | orchestrator | 00:01:22.524 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-04-17 00:01:22.524479 | orchestrator | 00:01:22.524 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-17 00:01:22.524500 | orchestrator | 00:01:22.524 STDOUT terraform:  + device = (known after apply) 2025-04-17 00:01:22.524521 | orchestrator | 00:01:22.524 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.524541 | orchestrator | 00:01:22.524 STDOUT terraform:  + instance_id = (known after apply) 2025-04-17 00:01:22.524561 | orchestrator | 00:01:22.524 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.524592 | orchestrator | 00:01:22.524 STDOUT terraform:  + volume_id = (known after apply) 2025-04-17 00:01:22.524613 | orchestrator | 00:01:22.524 STDOUT terraform:  } 2025-04-17 00:01:22.524657 | orchestrator | 00:01:22.524 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-04-17 00:01:22.524709 | orchestrator | 00:01:22.524 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-17 00:01:22.524731 | orchestrator | 00:01:22.524 STDOUT terraform:  + device = (known after apply) 2025-04-17 00:01:22.524751 | orchestrator | 00:01:22.524 STDOUT terraform:  + id = 
(known after apply) 2025-04-17 00:01:22.524771 | orchestrator | 00:01:22.524 STDOUT terraform:  + instance_id = (known after apply) 2025-04-17 00:01:22.524791 | orchestrator | 00:01:22.524 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.524811 | orchestrator | 00:01:22.524 STDOUT terraform:  + volume_id = (known after apply) 2025-04-17 00:01:22.524831 | orchestrator | 00:01:22.524 STDOUT terraform:  } 2025-04-17 00:01:22.524888 | orchestrator | 00:01:22.524 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-04-17 00:01:22.524920 | orchestrator | 00:01:22.524 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-17 00:01:22.524939 | orchestrator | 00:01:22.524 STDOUT terraform:  + device = (known after apply) 2025-04-17 00:01:22.524982 | orchestrator | 00:01:22.524 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.525001 | orchestrator | 00:01:22.524 STDOUT terraform:  + instance_id = (known after apply) 2025-04-17 00:01:22.525019 | orchestrator | 00:01:22.524 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.525037 | orchestrator | 00:01:22.525 STDOUT terraform:  + volume_id = (known after apply) 2025-04-17 00:01:22.525055 | orchestrator | 00:01:22.525 STDOUT terraform:  } 2025-04-17 00:01:22.525116 | orchestrator | 00:01:22.525 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-04-17 00:01:22.525178 | orchestrator | 00:01:22.525 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-17 00:01:22.525196 | orchestrator | 00:01:22.525 STDOUT terraform:  + device = (known after apply) 2025-04-17 00:01:22.525215 | orchestrator | 00:01:22.525 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.525234 | orchestrator | 00:01:22.525 STDOUT terraform:  + instance_id = (known after apply) 2025-04-17 00:01:22.525260 | orchestrator | 00:01:22.525 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.525281 | orchestrator | 00:01:22.525 STDOUT terraform:  + volume_id = (known after apply) 2025-04-17 00:01:22.525344 | orchestrator | 00:01:22.525 STDOUT terraform:  } 2025-04-17 00:01:22.525365 | orchestrator | 00:01:22.525 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-04-17 00:01:22.525385 | orchestrator | 00:01:22.525 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-17 00:01:22.525405 | orchestrator | 00:01:22.525 STDOUT terraform:  + device = (known after apply) 2025-04-17 00:01:22.525424 | orchestrator | 00:01:22.525 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.525454 | orchestrator | 00:01:22.525 STDOUT terraform:  + instance_id = (known after apply) 2025-04-17 00:01:22.525477 | orchestrator | 00:01:22.525 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.525497 | orchestrator | 00:01:22.525 STDOUT terraform:  + volume_id = (known after apply) 2025-04-17 00:01:22.525517 | orchestrator | 00:01:22.525 STDOUT terraform:  } 2025-04-17 00:01:22.525571 | orchestrator | 00:01:22.525 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-04-17 00:01:22.525615 | orchestrator | 00:01:22.525 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-17 00:01:22.525634 | orchestrator | 
00:01:22.525 STDOUT terraform:  + device = (known after apply) 2025-04-17 00:01:22.525652 | orchestrator | 00:01:22.525 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.525670 | orchestrator | 00:01:22.525 STDOUT terraform:  + instance_id = (known after apply) 2025-04-17 00:01:22.525712 | orchestrator | 00:01:22.525 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.525731 | orchestrator | 00:01:22.525 STDOUT terraform:  + volume_id = (known after apply) 2025-04-17 00:01:22.525791 | orchestrator | 00:01:22.525 STDOUT terraform:  } 2025-04-17 00:01:22.525811 | orchestrator | 00:01:22.525 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created 2025-04-17 00:01:22.525831 | orchestrator | 00:01:22.525 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-17 00:01:22.525849 | orchestrator | 00:01:22.525 STDOUT terraform:  + device = (known after apply) 2025-04-17 00:01:22.525869 | orchestrator | 00:01:22.525 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.525912 | orchestrator | 00:01:22.525 STDOUT terraform:  + instance_id = (known after apply) 2025-04-17 00:01:22.525953 | orchestrator | 00:01:22.525 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.525972 | orchestrator | 00:01:22.525 STDOUT terraform:  + volume_id = (known after apply) 2025-04-17 00:01:22.526032 | orchestrator | 00:01:22.525 STDOUT terraform:  } 2025-04-17 00:01:22.526055 | orchestrator | 00:01:22.525 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created 2025-04-17 00:01:22.526086 | orchestrator | 00:01:22.525 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-17 00:01:22.526113 | orchestrator | 00:01:22.526 STDOUT terraform:  + device = (known after apply) 2025-04-17 00:01:22.526129 | orchestrator | 00:01:22.526 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.526146 | orchestrator | 00:01:22.526 STDOUT terraform:  + instance_id = (known after apply) 2025-04-17 00:01:22.526162 | orchestrator | 00:01:22.526 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.526190 | orchestrator | 00:01:22.526 STDOUT terraform:  + volume_id = (known after apply) 2025-04-17 00:01:22.526207 | orchestrator | 00:01:22.526 STDOUT terraform:  } 2025-04-17 00:01:22.526266 | orchestrator | 00:01:22.526 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[11] will be created 2025-04-17 00:01:22.526322 | orchestrator | 00:01:22.526 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-17 00:01:22.526352 | orchestrator | 00:01:22.526 STDOUT terraform:  + device = (known after apply) 2025-04-17 00:01:22.526372 | orchestrator | 00:01:22.526 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.526414 | orchestrator | 00:01:22.526 STDOUT terraform:  + instance_id = (known after apply) 2025-04-17 00:01:22.526436 | orchestrator | 00:01:22.526 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.526495 | orchestrator | 00:01:22.526 STDOUT terraform:  + volume_id = (known after apply) 2025-04-17 00:01:22.526515 | orchestrator | 00:01:22.526 STDOUT terraform:  } 2025-04-17 00:01:22.526536 | orchestrator | 00:01:22.526 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[12] will be created 2025-04-17 00:01:22.526553 | orchestrator | 
00:01:22.526 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-17 00:01:22.526572 | orchestrator | 00:01:22.526 STDOUT terraform:  + device = (known after apply) 2025-04-17 00:01:22.526614 | orchestrator | 00:01:22.526 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.526633 | orchestrator | 00:01:22.526 STDOUT terraform:  + instance_id = (known after apply) 2025-04-17 00:01:22.526647 | orchestrator | 00:01:22.526 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.526666 | orchestrator | 00:01:22.526 STDOUT terraform:  + volume_id = (known after apply) 2025-04-17 00:01:22.526718 | orchestrator | 00:01:22.526 STDOUT terraform:  } 2025-04-17 00:01:22.526739 | orchestrator | 00:01:22.526 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[13] will be created 2025-04-17 00:01:22.526758 | orchestrator | 00:01:22.526 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-17 00:01:22.526778 | orchestrator | 00:01:22.526 STDOUT terraform:  + device = (known after apply) 2025-04-17 00:01:22.529629 | orchestrator | 00:01:22.526 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.529658 | orchestrator | 00:01:22.526 STDOUT terraform:  + instance_id = (known after apply) 2025-04-17 00:01:22.529664 | orchestrator | 00:01:22.526 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.529670 | orchestrator | 00:01:22.526 STDOUT terraform:  + volume_id = (known after apply) 2025-04-17 00:01:22.529683 | orchestrator | 00:01:22.526 STDOUT terraform:  } 2025-04-17 00:01:22.529689 | orchestrator | 00:01:22.526 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[14] will be created 2025-04-17 00:01:22.529695 | orchestrator | 00:01:22.526 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-17 00:01:22.529701 | orchestrator | 00:01:22.526 STDOUT terraform:  + device = (known after apply) 2025-04-17 00:01:22.529706 | orchestrator | 00:01:22.526 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.529712 | orchestrator | 00:01:22.527 STDOUT terraform:  + instance_id = (known after apply) 2025-04-17 00:01:22.529717 | orchestrator | 00:01:22.527 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.529723 | orchestrator | 00:01:22.527 STDOUT terraform:  + volume_id = (known after apply) 2025-04-17 00:01:22.529731 | orchestrator | 00:01:22.527 STDOUT terraform:  } 2025-04-17 00:01:22.529736 | orchestrator | 00:01:22.527 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[15] will be created 2025-04-17 00:01:22.529742 | orchestrator | 00:01:22.527 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-17 00:01:22.529747 | orchestrator | 00:01:22.527 STDOUT terraform:  + device = (known after apply) 2025-04-17 00:01:22.529753 | orchestrator | 00:01:22.527 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.529758 | orchestrator | 00:01:22.527 STDOUT terraform:  + instance_id = (known after apply) 2025-04-17 00:01:22.529764 | orchestrator | 00:01:22.527 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.529769 | orchestrator | 00:01:22.527 STDOUT terraform:  + volume_id = (known after apply) 2025-04-17 00:01:22.529774 | orchestrator | 00:01:22.527 STDOUT terraform:  } 2025-04-17 00:01:22.529780 | orchestrator | 00:01:22.527 
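The servers, data volumes, and attachments planned above are index-linked: volume 15 carries the suffix node-3 because 15 mod 6 = 3, and each attachment pairs volume i with server i mod 6. The remaining two attachments and the networking resources follow below. The testbed's actual Terraform sources are not part of this log, so the following is only a sketch of a configuration that would plan this pattern; the variable names, the index arithmetic, the root-volume resource, and the user-data file name are illustrative assumptions.

    # Sketch only: not the actual OSISM testbed sources. Values mirror the
    # plan above; variables, index arithmetic, and the root-volume and
    # user-data details are assumptions.

    variable "node_count" {
      type    = number
      default = 6
    }

    variable "volumes_per_node" {
      type    = number
      default = 3
    }

    # Six compute nodes that boot from pre-created root volumes
    # (source_type/destination_type "volume", boot_index 0) and attach the
    # management ports planned further below. Terraform prints user_data
    # as a content hash ("ae09e46b...") in the plan output.
    resource "openstack_compute_instance_v2" "node_server" {
      count             = var.node_count
      name              = "testbed-node-${count.index}"
      availability_zone = "nova"
      flavor_name       = "OSISM-8V-32"
      key_pair          = openstack_compute_keypair_v2.key.name
      config_drive      = true
      power_state       = "active"
      user_data         = file("${path.module}/node.yml") # assumed file name

      block_device {
        uuid                  = openstack_blockstorage_volume_v3.node_root_volume[count.index].id # assumed resource
        source_type           = "volume"
        destination_type      = "volume"
        boot_index            = 0
        delete_on_termination = false
      }

      network {
        port = openstack_networking_port_v2.node_port_management[count.index].id
      }
    }

    # Eighteen extra data volumes, named so that volume 15 belongs to
    # node 3 (15 % 6), matching "testbed-volume-15-node-3" in the plan.
    resource "openstack_blockstorage_volume_v3" "node_volume" {
      count             = var.node_count * var.volumes_per_node
      name              = "testbed-volume-${count.index}-node-${count.index % var.node_count}"
      availability_zone = "nova"
      size              = 20
      volume_type       = "ssd"
    }

    # One attachment per data volume; instance_id and volume_id resolve
    # only at apply time, hence "(known after apply)" throughout the plan.
    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count       = var.node_count * var.volumes_per_node
      instance_id = openstack_compute_instance_v2.node_server[count.index % var.node_count].id
      volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
    }

Because every cross-resource reference here is an ID that exists only after creation, the plan can show nothing more specific than (known after apply) for the attachment fields.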
  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }
openstack_networking_port_v2.node_port_management[3] will be created 2025-04-17 00:01:22.533444 | orchestrator | 00:01:22.533 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-17 00:01:22.533479 | orchestrator | 00:01:22.533 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-17 00:01:22.533514 | orchestrator | 00:01:22.533 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-17 00:01:22.533549 | orchestrator | 00:01:22.533 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-17 00:01:22.533585 | orchestrator | 00:01:22.533 STDOUT terraform:  + all_tags = (known after apply) 2025-04-17 00:01:22.533619 | orchestrator | 00:01:22.533 STDOUT terraform:  + device_id = (known after apply) 2025-04-17 00:01:22.533654 | orchestrator | 00:01:22.533 STDOUT terraform:  + device_owner = (known after apply) 2025-04-17 00:01:22.533689 | orchestrator | 00:01:22.533 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-17 00:01:22.533724 | orchestrator | 00:01:22.533 STDOUT terraform:  + dns_name = (known after apply) 2025-04-17 00:01:22.533760 | orchestrator | 00:01:22.533 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.533795 | orchestrator | 00:01:22.533 STDOUT terraform:  + mac_address = (known after apply) 2025-04-17 00:01:22.533831 | orchestrator | 00:01:22.533 STDOUT terraform:  + network_id = (known after apply) 2025-04-17 00:01:22.533865 | orchestrator | 00:01:22.533 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-17 00:01:22.533901 | orchestrator | 00:01:22.533 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-17 00:01:22.533936 | orchestrator | 00:01:22.533 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.533970 | orchestrator | 00:01:22.533 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-17 00:01:22.534005 | orchestrator | 00:01:22.533 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-17 00:01:22.534038 | orchestrator | 00:01:22.533 STDOUT terraform:  + allowed_address_pairs { 2025-04-17 00:01:22.534074 | orchestrator | 00:01:22.534 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-17 00:01:22.534081 | orchestrator | 00:01:22.534 STDOUT terraform:  } 2025-04-17 00:01:22.534102 | orchestrator | 00:01:22.534 STDOUT terraform:  + allowed_address_pairs { 2025-04-17 00:01:22.534131 | orchestrator | 00:01:22.534 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-17 00:01:22.534146 | orchestrator | 00:01:22.534 STDOUT terraform:  } 2025-04-17 00:01:22.534172 | orchestrator | 00:01:22.534 STDOUT terraform:  + allowed_address_pairs { 2025-04-17 00:01:22.534188 | orchestrator | 00:01:22.534 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-17 00:01:22.534195 | orchestrator | 00:01:22.534 STDOUT terraform:  } 2025-04-17 00:01:22.534217 | orchestrator | 00:01:22.534 STDOUT terraform:  + allowed_address_pairs { 2025-04-17 00:01:22.534244 | orchestrator | 00:01:22.534 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-17 00:01:22.534258 | orchestrator | 00:01:22.534 STDOUT terraform:  } 2025-04-17 00:01:22.534281 | orchestrator | 00:01:22.534 STDOUT terraform:  + binding (known after apply) 2025-04-17 00:01:22.534295 | orchestrator | 00:01:22.534 STDOUT terraform:  + fixed_ip { 2025-04-17 00:01:22.534320 | orchestrator | 00:01:22.534 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-04-17 00:01:22.534348 | orchestrator | 00:01:22.534 STDOUT terraform:  
+ subnet_id = (known after apply) 2025-04-17 00:01:22.534355 | orchestrator | 00:01:22.534 STDOUT terraform:  } 2025-04-17 00:01:22.534370 | orchestrator | 00:01:22.534 STDOUT terraform:  } 2025-04-17 00:01:22.534416 | orchestrator | 00:01:22.534 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-04-17 00:01:22.534460 | orchestrator | 00:01:22.534 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-17 00:01:22.534495 | orchestrator | 00:01:22.534 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-17 00:01:22.534530 | orchestrator | 00:01:22.534 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-17 00:01:22.534563 | orchestrator | 00:01:22.534 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-17 00:01:22.534600 | orchestrator | 00:01:22.534 STDOUT terraform:  + all_tags = (known after apply) 2025-04-17 00:01:22.534965 | orchestrator | 00:01:22.534 STDOUT terraform:  + device_id = (known after apply) 2025-04-17 00:01:22.535166 | orchestrator | 00:01:22.534 STDOUT terraform:  + device_owner = (known after apply) 2025-04-17 00:01:22.535182 | orchestrator | 00:01:22.534 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-17 00:01:22.535192 | orchestrator | 00:01:22.534 STDOUT terraform:  + dns_name = (known after apply) 2025-04-17 00:01:22.535197 | orchestrator | 00:01:22.534 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.535202 | orchestrator | 00:01:22.534 STDOUT terraform:  + mac_address = (known after apply) 2025-04-17 00:01:22.535207 | orchestrator | 00:01:22.534 STDOUT terraform:  + network_id = (known after apply) 2025-04-17 00:01:22.535212 | orchestrator | 00:01:22.534 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-17 00:01:22.535217 | orchestrator | 00:01:22.534 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-17 00:01:22.535222 | orchestrator | 00:01:22.534 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.535231 | orchestrator | 00:01:22.534 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-17 00:01:22.535262 | orchestrator | 00:01:22.535 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-17 00:01:22.535269 | orchestrator | 00:01:22.535 STDOUT terraform:  + allowed_address_pairs { 2025-04-17 00:01:22.535274 | orchestrator | 00:01:22.535 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-17 00:01:22.535282 | orchestrator | 00:01:22.535 STDOUT terraform:  } 2025-04-17 00:01:22.535298 | orchestrator | 00:01:22.535 STDOUT terraform:  + allowed_address_pairs { 2025-04-17 00:01:22.535304 | orchestrator | 00:01:22.535 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-17 00:01:22.535309 | orchestrator | 00:01:22.535 STDOUT terraform:  } 2025-04-17 00:01:22.535314 | orchestrator | 00:01:22.535 STDOUT terraform:  + allowed_address_pairs { 2025-04-17 00:01:22.535320 | orchestrator | 00:01:22.535 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-17 00:01:22.535343 | orchestrator | 00:01:22.535 STDOUT terraform:  } 2025-04-17 00:01:22.535349 | orchestrator | 00:01:22.535 STDOUT terraform:  + allowed_address_pairs { 2025-04-17 00:01:22.535354 | orchestrator | 00:01:22.535 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-17 00:01:22.535359 | orchestrator | 00:01:22.535 STDOUT terraform:  } 2025-04-17 00:01:22.535365 | orchestrator | 00:01:22.535 STDOUT terraform:  + binding (known after apply) 
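The repeated node_port_management entries in this plan correspond to a single counted port resource. A plausible HCL reconstruction follows; the resource type, name, and all literal values are taken from the plan output above, while the count expression and cross-resource references are illustrative assumptions, not the testbed's actual source.

resource "openstack_networking_port_v2" "node_port_management" {
  # Six ports are planned (node_port_management[0..5]).
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  # The plan attaches the same four allowed address pairs to every port.
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/20"
  }

  fixed_ip {
    subnet_id = openstack_networking_subnet_v2.subnet_management.id
    # Indices 1..5 receive 192.168.16.11 .. 192.168.16.15 in this excerpt.
    ip_address = "192.168.16.${10 + count.index}"
  }
}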
2025-04-17 00:01:22.535370 | orchestrator | 00:01:22.535 STDOUT terraform:  + fixed_ip { 2025-04-17 00:01:22.535377 | orchestrator | 00:01:22.535 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-04-17 00:01:22.535403 | orchestrator | 00:01:22.535 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-17 00:01:22.535418 | orchestrator | 00:01:22.535 STDOUT terraform:  } 2025-04-17 00:01:22.535424 | orchestrator | 00:01:22.535 STDOUT terraform:  } 2025-04-17 00:01:22.535471 | orchestrator | 00:01:22.535 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-04-17 00:01:22.535515 | orchestrator | 00:01:22.535 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-17 00:01:22.535550 | orchestrator | 00:01:22.535 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-17 00:01:22.535585 | orchestrator | 00:01:22.535 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-17 00:01:22.535619 | orchestrator | 00:01:22.535 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-17 00:01:22.535656 | orchestrator | 00:01:22.535 STDOUT terraform:  + all_tags = (known after apply) 2025-04-17 00:01:22.535691 | orchestrator | 00:01:22.535 STDOUT terraform:  + device_id = (known after apply) 2025-04-17 00:01:22.535726 | orchestrator | 00:01:22.535 STDOUT terraform:  + device_owner = (known after apply) 2025-04-17 00:01:22.535740 | orchestrator | 00:01:22.535 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-17 00:01:22.535829 | orchestrator | 00:01:22.535 STDOUT terraform:  + dns_name = (known after apply) 2025-04-17 00:01:22.535865 | orchestrator | 00:01:22.535 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.535901 | orchestrator | 00:01:22.535 STDOUT terraform:  + mac_address = (known after apply) 2025-04-17 00:01:22.535936 | orchestrator | 00:01:22.535 STDOUT terraform:  + network_id = (known after apply) 2025-04-17 00:01:22.535971 | orchestrator | 00:01:22.535 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-17 00:01:22.536007 | orchestrator | 00:01:22.535 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-17 00:01:22.536042 | orchestrator | 00:01:22.536 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.536084 | orchestrator | 00:01:22.536 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-17 00:01:22.536119 | orchestrator | 00:01:22.536 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-17 00:01:22.536142 | orchestrator | 00:01:22.536 STDOUT terraform:  + allowed_address_pairs { 2025-04-17 00:01:22.536169 | orchestrator | 00:01:22.536 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-17 00:01:22.536181 | orchestrator | 00:01:22.536 STDOUT terraform:  } 2025-04-17 00:01:22.536190 | orchestrator | 00:01:22.536 STDOUT terraform:  + allowed_address_pairs { 2025-04-17 00:01:22.536219 | orchestrator | 00:01:22.536 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-17 00:01:22.536233 | orchestrator | 00:01:22.536 STDOUT terraform:  } 2025-04-17 00:01:22.536252 | orchestrator | 00:01:22.536 STDOUT terraform:  + allowed_address_pairs { 2025-04-17 00:01:22.536281 | orchestrator | 00:01:22.536 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-17 00:01:22.536296 | orchestrator | 00:01:22.536 STDOUT terraform:  } 2025-04-17 00:01:22.536314 | orchestrator | 
00:01:22.536 STDOUT terraform:  + allowed_address_pairs { 2025-04-17 00:01:22.536341 | orchestrator | 00:01:22.536 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-17 00:01:22.536348 | orchestrator | 00:01:22.536 STDOUT terraform:  } 2025-04-17 00:01:22.536381 | orchestrator | 00:01:22.536 STDOUT terraform:  + binding (known after apply) 2025-04-17 00:01:22.536396 | orchestrator | 00:01:22.536 STDOUT terraform:  + fixed_ip { 2025-04-17 00:01:22.536422 | orchestrator | 00:01:22.536 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-04-17 00:01:22.536448 | orchestrator | 00:01:22.536 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-17 00:01:22.536455 | orchestrator | 00:01:22.536 STDOUT terraform:  } 2025-04-17 00:01:22.536470 | orchestrator | 00:01:22.536 STDOUT terraform:  } 2025-04-17 00:01:22.536517 | orchestrator | 00:01:22.536 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-04-17 00:01:22.536566 | orchestrator | 00:01:22.536 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-04-17 00:01:22.536583 | orchestrator | 00:01:22.536 STDOUT terraform:  + force_destroy = false 2025-04-17 00:01:22.536613 | orchestrator | 00:01:22.536 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.536640 | orchestrator | 00:01:22.536 STDOUT terraform:  + port_id = (known after apply) 2025-04-17 00:01:22.536669 | orchestrator | 00:01:22.536 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.536696 | orchestrator | 00:01:22.536 STDOUT terraform:  + router_id = (known after apply) 2025-04-17 00:01:22.536724 | orchestrator | 00:01:22.536 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-17 00:01:22.536739 | orchestrator | 00:01:22.536 STDOUT terraform:  } 2025-04-17 00:01:22.536774 | orchestrator | 00:01:22.536 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-04-17 00:01:22.536811 | orchestrator | 00:01:22.536 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-04-17 00:01:22.536844 | orchestrator | 00:01:22.536 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-17 00:01:22.536880 | orchestrator | 00:01:22.536 STDOUT terraform:  + all_tags = (known after apply) 2025-04-17 00:01:22.536904 | orchestrator | 00:01:22.536 STDOUT terraform:  + availability_zone_hints = [ 2025-04-17 00:01:22.536918 | orchestrator | 00:01:22.536 STDOUT terraform:  + "nova", 2025-04-17 00:01:22.536932 | orchestrator | 00:01:22.536 STDOUT terraform:  ] 2025-04-17 00:01:22.536967 | orchestrator | 00:01:22.536 STDOUT terraform:  + distributed = (known after apply) 2025-04-17 00:01:22.537004 | orchestrator | 00:01:22.536 STDOUT terraform:  + enable_snat = (known after apply) 2025-04-17 00:01:22.537053 | orchestrator | 00:01:22.537 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-04-17 00:01:22.537097 | orchestrator | 00:01:22.537 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.537126 | orchestrator | 00:01:22.537 STDOUT terraform:  + name = "testbed" 2025-04-17 00:01:22.537162 | orchestrator | 00:01:22.537 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.537197 | orchestrator | 00:01:22.537 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-17 00:01:22.537225 | orchestrator | 00:01:22.537 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-04-17 00:01:22.537232 | orchestrator | 00:01:22.537 STDOUT 
terraform:  } 2025-04-17 00:01:22.537318 | orchestrator | 00:01:22.537 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-04-17 00:01:22.537356 | orchestrator | 00:01:22.537 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-04-17 00:01:22.537375 | orchestrator | 00:01:22.537 STDOUT terraform:  + description = "ssh" 2025-04-17 00:01:22.537399 | orchestrator | 00:01:22.537 STDOUT terraform:  + direction = "ingress" 2025-04-17 00:01:22.537420 | orchestrator | 00:01:22.537 STDOUT terraform:  + ethertype = "IPv4" 2025-04-17 00:01:22.537451 | orchestrator | 00:01:22.537 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.537470 | orchestrator | 00:01:22.537 STDOUT terraform:  + port_range_max = 22 2025-04-17 00:01:22.537490 | orchestrator | 00:01:22.537 STDOUT terraform:  + port_range_min = 22 2025-04-17 00:01:22.537511 | orchestrator | 00:01:22.537 STDOUT terraform:  + protocol = "tcp" 2025-04-17 00:01:22.537540 | orchestrator | 00:01:22.537 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.537570 | orchestrator | 00:01:22.537 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-17 00:01:22.537594 | orchestrator | 00:01:22.537 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-17 00:01:22.537624 | orchestrator | 00:01:22.537 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-17 00:01:22.537653 | orchestrator | 00:01:22.537 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-17 00:01:22.537660 | orchestrator | 00:01:22.537 STDOUT terraform:  } 2025-04-17 00:01:22.537714 | orchestrator | 00:01:22.537 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-04-17 00:01:22.537765 | orchestrator | 00:01:22.537 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-04-17 00:01:22.537789 | orchestrator | 00:01:22.537 STDOUT terraform:  + description = "wireguard" 2025-04-17 00:01:22.537812 | orchestrator | 00:01:22.537 STDOUT terraform:  + direction = "ingress" 2025-04-17 00:01:22.537832 | orchestrator | 00:01:22.537 STDOUT terraform:  + ethertype = "IPv4" 2025-04-17 00:01:22.537865 | orchestrator | 00:01:22.537 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.537886 | orchestrator | 00:01:22.537 STDOUT terraform:  + port_range_max = 51820 2025-04-17 00:01:22.537905 | orchestrator | 00:01:22.537 STDOUT terraform:  + port_range_min = 51820 2025-04-17 00:01:22.537926 | orchestrator | 00:01:22.537 STDOUT terraform:  + protocol = "udp" 2025-04-17 00:01:22.537956 | orchestrator | 00:01:22.537 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.537985 | orchestrator | 00:01:22.537 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-17 00:01:22.538008 | orchestrator | 00:01:22.537 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-17 00:01:22.538053 | orchestrator | 00:01:22.538 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-17 00:01:22.538087 | orchestrator | 00:01:22.538 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-17 00:01:22.538095 | orchestrator | 00:01:22.538 STDOUT terraform:  } 2025-04-17 00:01:22.538149 | orchestrator | 00:01:22.538 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-04-17 00:01:22.538201 | orchestrator | 00:01:22.538 
STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-04-17 00:01:22.538238 | orchestrator | 00:01:22.538 STDOUT terraform:  + direction = "ingress" 2025-04-17 00:01:22.538260 | orchestrator | 00:01:22.538 STDOUT terraform:  + ethertype = "IPv4" 2025-04-17 00:01:22.538290 | orchestrator | 00:01:22.538 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.538310 | orchestrator | 00:01:22.538 STDOUT terraform:  + protocol = "tcp" 2025-04-17 00:01:22.538340 | orchestrator | 00:01:22.538 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.538370 | orchestrator | 00:01:22.538 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-17 00:01:22.538399 | orchestrator | 00:01:22.538 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-04-17 00:01:22.538428 | orchestrator | 00:01:22.538 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-17 00:01:22.538459 | orchestrator | 00:01:22.538 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-17 00:01:22.538465 | orchestrator | 00:01:22.538 STDOUT terraform:  } 2025-04-17 00:01:22.538520 | orchestrator | 00:01:22.538 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-04-17 00:01:22.538573 | orchestrator | 00:01:22.538 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-04-17 00:01:22.538598 | orchestrator | 00:01:22.538 STDOUT terraform:  + direction = "ingress" 2025-04-17 00:01:22.538618 | orchestrator | 00:01:22.538 STDOUT terraform:  + ethertype = "IPv4" 2025-04-17 00:01:22.538649 | orchestrator | 00:01:22.538 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.538671 | orchestrator | 00:01:22.538 STDOUT terraform:  + protocol = "udp" 2025-04-17 00:01:22.538699 | orchestrator | 00:01:22.538 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.538731 | orchestrator | 00:01:22.538 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-17 00:01:22.538758 | orchestrator | 00:01:22.538 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-04-17 00:01:22.538787 | orchestrator | 00:01:22.538 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-17 00:01:22.538816 | orchestrator | 00:01:22.538 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-17 00:01:22.538823 | orchestrator | 00:01:22.538 STDOUT terraform:  } 2025-04-17 00:01:22.538878 | orchestrator | 00:01:22.538 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-04-17 00:01:22.538930 | orchestrator | 00:01:22.538 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-04-17 00:01:22.538953 | orchestrator | 00:01:22.538 STDOUT terraform:  + direction = "ingress" 2025-04-17 00:01:22.538973 | orchestrator | 00:01:22.538 STDOUT terraform:  + ethertype = "IPv4" 2025-04-17 00:01:22.539004 | orchestrator | 00:01:22.538 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.539024 | orchestrator | 00:01:22.539 STDOUT terraform:  + protocol = "icmp" 2025-04-17 00:01:22.539054 | orchestrator | 00:01:22.539 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.539099 | orchestrator | 00:01:22.539 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-17 00:01:22.539123 | orchestrator | 00:01:22.539 STDOUT terraform:  + remote_ip_prefix = 
"0.0.0.0/0" 2025-04-17 00:01:22.539152 | orchestrator | 00:01:22.539 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-17 00:01:22.539182 | orchestrator | 00:01:22.539 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-17 00:01:22.539189 | orchestrator | 00:01:22.539 STDOUT terraform:  } 2025-04-17 00:01:22.539241 | orchestrator | 00:01:22.539 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-04-17 00:01:22.539292 | orchestrator | 00:01:22.539 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-04-17 00:01:22.539316 | orchestrator | 00:01:22.539 STDOUT terraform:  + direction = "ingress" 2025-04-17 00:01:22.539336 | orchestrator | 00:01:22.539 STDOUT terraform:  + ethertype = "IPv4" 2025-04-17 00:01:22.539367 | orchestrator | 00:01:22.539 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.539387 | orchestrator | 00:01:22.539 STDOUT terraform:  + protocol = "tcp" 2025-04-17 00:01:22.539417 | orchestrator | 00:01:22.539 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.539446 | orchestrator | 00:01:22.539 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-17 00:01:22.539469 | orchestrator | 00:01:22.539 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-17 00:01:22.539499 | orchestrator | 00:01:22.539 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-17 00:01:22.539529 | orchestrator | 00:01:22.539 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-17 00:01:22.539543 | orchestrator | 00:01:22.539 STDOUT terraform:  } 2025-04-17 00:01:22.539593 | orchestrator | 00:01:22.539 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-04-17 00:01:22.539643 | orchestrator | 00:01:22.539 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-04-17 00:01:22.539667 | orchestrator | 00:01:22.539 STDOUT terraform:  + direction = "ingress" 2025-04-17 00:01:22.539687 | orchestrator | 00:01:22.539 STDOUT terraform:  + ethertype = "IPv4" 2025-04-17 00:01:22.539716 | orchestrator | 00:01:22.539 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.539736 | orchestrator | 00:01:22.539 STDOUT terraform:  + protocol = "udp" 2025-04-17 00:01:22.539766 | orchestrator | 00:01:22.539 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.539795 | orchestrator | 00:01:22.539 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-17 00:01:22.539819 | orchestrator | 00:01:22.539 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-17 00:01:22.539848 | orchestrator | 00:01:22.539 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-17 00:01:22.539878 | orchestrator | 00:01:22.539 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-17 00:01:22.539885 | orchestrator | 00:01:22.539 STDOUT terraform:  } 2025-04-17 00:01:22.539937 | orchestrator | 00:01:22.539 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-04-17 00:01:22.539986 | orchestrator | 00:01:22.539 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-04-17 00:01:22.540013 | orchestrator | 00:01:22.539 STDOUT terraform:  + direction = "ingress" 2025-04-17 00:01:22.540033 | orchestrator | 00:01:22.540 STDOUT terraform:  + ethertype = "IPv4" 2025-04-17 00:01:22.540063 | 
orchestrator | 00:01:22.540 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.540142 | orchestrator | 00:01:22.540 STDOUT terraform:  + protocol = "icmp" 2025-04-17 00:01:22.540162 | orchestrator | 00:01:22.540 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.540169 | orchestrator | 00:01:22.540 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-17 00:01:22.540218 | orchestrator | 00:01:22.540 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-17 00:01:22.540235 | orchestrator | 00:01:22.540 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-17 00:01:22.540278 | orchestrator | 00:01:22.540 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-17 00:01:22.540284 | orchestrator | 00:01:22.540 STDOUT terraform:  } 2025-04-17 00:01:22.540291 | orchestrator | 00:01:22.540 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-04-17 00:01:22.540330 | orchestrator | 00:01:22.540 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-04-17 00:01:22.540350 | orchestrator | 00:01:22.540 STDOUT terraform:  + description = "vrrp" 2025-04-17 00:01:22.540373 | orchestrator | 00:01:22.540 STDOUT terraform:  + direction = "ingress" 2025-04-17 00:01:22.540393 | orchestrator | 00:01:22.540 STDOUT terraform:  + ethertype = "IPv4" 2025-04-17 00:01:22.540424 | orchestrator | 00:01:22.540 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.540467 | orchestrator | 00:01:22.540 STDOUT terraform:  + protocol = "112" 2025-04-17 00:01:22.540497 | orchestrator | 00:01:22.540 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.540526 | orchestrator | 00:01:22.540 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-17 00:01:22.540550 | orchestrator | 00:01:22.540 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-17 00:01:22.540581 | orchestrator | 00:01:22.540 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-17 00:01:22.540609 | orchestrator | 00:01:22.540 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-17 00:01:22.540616 | orchestrator | 00:01:22.540 STDOUT terraform:  } 2025-04-17 00:01:22.540667 | orchestrator | 00:01:22.540 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-04-17 00:01:22.540714 | orchestrator | 00:01:22.540 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-04-17 00:01:22.540742 | orchestrator | 00:01:22.540 STDOUT terraform:  + all_tags = (known after apply) 2025-04-17 00:01:22.540775 | orchestrator | 00:01:22.540 STDOUT terraform:  + description = "management security group" 2025-04-17 00:01:22.540803 | orchestrator | 00:01:22.540 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.540831 | orchestrator | 00:01:22.540 STDOUT terraform:  + name = "testbed-management" 2025-04-17 00:01:22.540859 | orchestrator | 00:01:22.540 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.540887 | orchestrator | 00:01:22.540 STDOUT terraform:  + stateful = (known after apply) 2025-04-17 00:01:22.540918 | orchestrator | 00:01:22.540 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-17 00:01:22.540925 | orchestrator | 00:01:22.540 STDOUT terraform:  } 2025-04-17 00:01:22.540969 | orchestrator | 00:01:22.540 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 
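Each secgroup_rule_v2 entry in the plan maps one-to-one onto a rule resource. A sketch of the pattern, with all literals taken from the plan output; which security group the VRRP rule attaches to is an assumption.

resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

# SSH from anywhere (rule1); wireguard on UDP 51820 (rule2) has the same shape.
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

# Rules 3-5 open all TCP and UDP from 192.168.16.0/20 plus ICMP from anywhere.
# VRRP is addressed by its raw IP protocol number, exactly as printed in the plan.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id # assumption
}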
2025-04-17 00:01:22.541014 | orchestrator | 00:01:22.540 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-04-17 00:01:22.541041 | orchestrator | 00:01:22.541 STDOUT terraform:  + all_tags = (known after apply) 2025-04-17 00:01:22.541078 | orchestrator | 00:01:22.541 STDOUT terraform:  + description = "node security group" 2025-04-17 00:01:22.541112 | orchestrator | 00:01:22.541 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.541135 | orchestrator | 00:01:22.541 STDOUT terraform:  + name = "testbed-node" 2025-04-17 00:01:22.541163 | orchestrator | 00:01:22.541 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.541191 | orchestrator | 00:01:22.541 STDOUT terraform:  + stateful = (known after apply) 2025-04-17 00:01:22.541219 | orchestrator | 00:01:22.541 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-17 00:01:22.541225 | orchestrator | 00:01:22.541 STDOUT terraform:  } 2025-04-17 00:01:22.541272 | orchestrator | 00:01:22.541 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-04-17 00:01:22.541315 | orchestrator | 00:01:22.541 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-04-17 00:01:22.541345 | orchestrator | 00:01:22.541 STDOUT terraform:  + all_tags = (known after apply) 2025-04-17 00:01:22.541374 | orchestrator | 00:01:22.541 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-04-17 00:01:22.541393 | orchestrator | 00:01:22.541 STDOUT terraform:  + dns_nameservers = [ 2025-04-17 00:01:22.541409 | orchestrator | 00:01:22.541 STDOUT terraform:  + "8.8.8.8", 2025-04-17 00:01:22.541424 | orchestrator | 00:01:22.541 STDOUT terraform:  + "9.9.9.9", 2025-04-17 00:01:22.541439 | orchestrator | 00:01:22.541 STDOUT terraform:  ] 2025-04-17 00:01:22.541458 | orchestrator | 00:01:22.541 STDOUT terraform:  + enable_dhcp = true 2025-04-17 00:01:22.541488 | orchestrator | 00:01:22.541 STDOUT terraform:  + gateway_ip = (known after apply) 2025-04-17 00:01:22.541519 | orchestrator | 00:01:22.541 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.541538 | orchestrator | 00:01:22.541 STDOUT terraform:  + ip_version = 4 2025-04-17 00:01:22.541567 | orchestrator | 00:01:22.541 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-04-17 00:01:22.541596 | orchestrator | 00:01:22.541 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-04-17 00:01:22.541633 | orchestrator | 00:01:22.541 STDOUT terraform:  + name = "subnet-testbed-management" 2025-04-17 00:01:22.541662 | orchestrator | 00:01:22.541 STDOUT terraform:  + network_id = (known after apply) 2025-04-17 00:01:22.541681 | orchestrator | 00:01:22.541 STDOUT terraform:  + no_gateway = false 2025-04-17 00:01:22.541711 | orchestrator | 00:01:22.541 STDOUT terraform:  + region = (known after apply) 2025-04-17 00:01:22.541740 | orchestrator | 00:01:22.541 STDOUT terraform:  + service_types = (known after apply) 2025-04-17 00:01:22.541769 | orchestrator | 00:01:22.541 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-17 00:01:22.541788 | orchestrator | 00:01:22.541 STDOUT terraform:  + allocation_pool { 2025-04-17 00:01:22.541813 | orchestrator | 00:01:22.541 STDOUT terraform:  + end = "192.168.31.250" 2025-04-17 00:01:22.541837 | orchestrator | 00:01:22.541 STDOUT terraform:  + start = "192.168.31.200" 2025-04-17 00:01:22.541851 | orchestrator | 00:01:22.541 STDOUT terraform:  } 2025-04-17 00:01:22.541858 | orchestrator | 00:01:22.541 
STDOUT terraform:  } 2025-04-17 00:01:22.541883 | orchestrator | 00:01:22.541 STDOUT terraform:  # terraform_data.image will be created 2025-04-17 00:01:22.541906 | orchestrator | 00:01:22.541 STDOUT terraform:  + resource "terraform_data" "image" { 2025-04-17 00:01:22.541929 | orchestrator | 00:01:22.541 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.541949 | orchestrator | 00:01:22.541 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-04-17 00:01:22.541972 | orchestrator | 00:01:22.541 STDOUT terraform:  + output = (known after apply) 2025-04-17 00:01:22.541986 | orchestrator | 00:01:22.541 STDOUT terraform:  } 2025-04-17 00:01:22.542026 | orchestrator | 00:01:22.541 STDOUT terraform:  # terraform_data.image_node will be created 2025-04-17 00:01:22.542052 | orchestrator | 00:01:22.542 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-04-17 00:01:22.542084 | orchestrator | 00:01:22.542 STDOUT terraform:  + id = (known after apply) 2025-04-17 00:01:22.542102 | orchestrator | 00:01:22.542 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-04-17 00:01:22.542126 | orchestrator | 00:01:22.542 STDOUT terraform:  + output = (known after apply) 2025-04-17 00:01:22.542133 | orchestrator | 00:01:22.542 STDOUT terraform:  } 2025-04-17 00:01:22.542164 | orchestrator | 00:01:22.542 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-04-17 00:01:22.542177 | orchestrator | 00:01:22.542 STDOUT terraform: Changes to Outputs: 2025-04-17 00:01:22.542202 | orchestrator | 00:01:22.542 STDOUT terraform:  + manager_address = (sensitive value) 2025-04-17 00:01:22.542225 | orchestrator | 00:01:22.542 STDOUT terraform:  + private_key = (sensitive value) 2025-04-17 00:01:22.648232 | orchestrator | 00:01:22.648 STDOUT terraform: terraform_data.image_node: Creating... 2025-04-17 00:01:22.719884 | orchestrator | 00:01:22.719 STDOUT terraform: terraform_data.image: Creating... 2025-04-17 00:01:22.719979 | orchestrator | 00:01:22.719 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=cd66606f-e598-26ac-8b8b-9de983535c45] 2025-04-17 00:01:22.720004 | orchestrator | 00:01:22.719 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=153dc066-1759-cc5f-af8e-17e9381a8199] 2025-04-17 00:01:22.732389 | orchestrator | 00:01:22.732 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-04-17 00:01:22.733120 | orchestrator | 00:01:22.733 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-04-17 00:01:22.736364 | orchestrator | 00:01:22.736 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-04-17 00:01:22.742603 | orchestrator | 00:01:22.742 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating... 2025-04-17 00:01:22.744493 | orchestrator | 00:01:22.744 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating... 2025-04-17 00:01:22.744536 | orchestrator | 00:01:22.744 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 2025-04-17 00:01:22.744565 | orchestrator | 00:01:22.744 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-04-17 00:01:22.744698 | orchestrator | 00:01:22.744 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-04-17 00:01:22.744893 | orchestrator | 00:01:22.744 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 
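terraform_data.image and terraform_data.image_node are plain passthroughs: input is the literal image name and output only becomes known at apply, so changing the name replaces the handle and forces dependents to refresh. A minimal sketch of that pattern plus the two sensitive outputs listed above; the data-source wiring and both output values are assumptions, not visible in this excerpt.

resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}

# Presumably the image lookup consumes the passthrough value.
data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output
  most_recent = true # assumption
}

output "manager_address" {
  sensitive = true
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address # assumption
}

output "private_key" {
  sensitive = true
  value     = tls_private_key.ssh.private_key_pem # hypothetical key resource, not shown in this log
}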
2025-04-17 00:01:22.745958 | orchestrator | 00:01:22.745 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-04-17 00:01:23.176980 | orchestrator | 00:01:23.176 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-04-17 00:01:23.182339 | orchestrator | 00:01:23.182 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 2025-04-17 00:01:23.455479 | orchestrator | 00:01:23.455 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-04-17 00:01:23.461842 | orchestrator | 00:01:23.461 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-04-17 00:01:23.505001 | orchestrator | 00:01:23.504 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-04-17 00:01:23.510993 | orchestrator | 00:01:23.510 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-04-17 00:01:28.553934 | orchestrator | 00:01:28.553 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=10a45a6c-bc62-4365-8b1b-03e9e7fe2227] 2025-04-17 00:01:28.561035 | orchestrator | 00:01:28.560 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-04-17 00:01:32.744510 | orchestrator | 00:01:32.744 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed] 2025-04-17 00:01:32.744670 | orchestrator | 00:01:32.744 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed] 2025-04-17 00:01:32.745516 | orchestrator | 00:01:32.745 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-04-17 00:01:32.745661 | orchestrator | 00:01:32.745 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed] 2025-04-17 00:01:32.746815 | orchestrator | 00:01:32.746 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-04-17 00:01:32.746924 | orchestrator | 00:01:32.746 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-04-17 00:01:33.184179 | orchestrator | 00:01:33.183 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed] 2025-04-17 00:01:33.306620 | orchestrator | 00:01:33.306 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 10s [id=e9224846-b1ba-4847-a73a-6715887089fb] 2025-04-17 00:01:33.313355 | orchestrator | 00:01:33.313 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 2025-04-17 00:01:33.328216 | orchestrator | 00:01:33.327 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=19544825-ba43-4bb1-8c25-64db59cc98e2] 2025-04-17 00:01:33.340488 | orchestrator | 00:01:33.340 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-04-17 00:01:33.344450 | orchestrator | 00:01:33.344 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 10s [id=d0102145-e326-42a8-9189-9b289697f2f1] 2025-04-17 00:01:33.348401 | orchestrator | 00:01:33.348 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating... 
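Eighteen node_volume, six node_base_volume, and one manager_base_volume instances are created in this stretch, i.e. counted openstack_blockstorage_volume_v3 resources. A sketch under stated assumptions: names and sizes are never printed in this excerpt, and the image backing of the base volumes is inferred from the image data sources read just above.

resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 18 # node_volume[0..17] in this run
  name  = "testbed-node-volume-${count.index}" # assumed naming
  size  = 20                                   # assumed size in GB
}

resource "openstack_blockstorage_volume_v3" "node_base_volume" {
  count    = 6
  name     = "testbed-node-base-${count.index}" # assumed naming
  size     = 50                                 # assumed size in GB
  image_id = data.openstack_images_image_v2.image_node.id # assumption: image-backed boot volume
}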
2025-04-17 00:01:33.353243 | orchestrator | 00:01:33.352 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 10s [id=6e4fe9eb-5e43-4aa2-9b37-d2398fe01f7b] 2025-04-17 00:01:33.358450 | orchestrator | 00:01:33.358 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-04-17 00:01:33.366578 | orchestrator | 00:01:33.366 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=2cc5c4f7-7927-43eb-bfd2-3f01b9eb04d9] 2025-04-17 00:01:33.371156 | orchestrator | 00:01:33.370 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-04-17 00:01:33.388799 | orchestrator | 00:01:33.388 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=0367f9b0-3a71-47a7-a8bd-9e2816c4d242] 2025-04-17 00:01:33.394521 | orchestrator | 00:01:33.394 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 2025-04-17 00:01:33.423921 | orchestrator | 00:01:33.423 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 10s [id=c4c813ed-e09b-49ac-b96f-625695efceb2] 2025-04-17 00:01:33.430832 | orchestrator | 00:01:33.430 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 2025-04-17 00:01:33.463205 | orchestrator | 00:01:33.462 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-04-17 00:01:33.511764 | orchestrator | 00:01:33.511 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-04-17 00:01:33.633707 | orchestrator | 00:01:33.633 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=25683219-5c0a-4b96-92c9-99d674025eb1] 2025-04-17 00:01:33.643533 | orchestrator | 00:01:33.643 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-04-17 00:01:34.323538 | orchestrator | 00:01:33.700 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=7d7eac16-cb9a-452c-8088-f21cbc7102b1] 2025-04-17 00:01:38.563431 | orchestrator | 00:01:33.713 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-04-17 00:01:38.563577 | orchestrator | 00:01:38.563 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed] 2025-04-17 00:01:38.717882 | orchestrator | 00:01:38.717 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 10s [id=d2310849-ed6c-49c2-b9e0-f9c06c6339c9] 2025-04-17 00:01:38.733133 | orchestrator | 00:01:38.732 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-04-17 00:01:38.737868 | orchestrator | 00:01:38.737 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=4566b7f4e436444936027fc1add2b25900b21586] 2025-04-17 00:01:38.744342 | orchestrator | 00:01:38.744 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-04-17 00:01:43.314271 | orchestrator | 00:01:43.313 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed] 2025-04-17 00:01:43.341992 | orchestrator | 00:01:43.341 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-04-17 00:01:43.349333 | orchestrator | 00:01:43.348 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... 
[10s elapsed] 2025-04-17 00:01:43.359614 | orchestrator | 00:01:43.359 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-04-17 00:01:43.372386 | orchestrator | 00:01:43.372 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-04-17 00:01:43.395676 | orchestrator | 00:01:43.395 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-04-17 00:01:43.431166 | orchestrator | 00:01:43.430 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed] 2025-04-17 00:01:43.507672 | orchestrator | 00:01:43.507 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 11s [id=8bcc068e-17b6-4e9f-accd-8ac12579d6f0] 2025-04-17 00:01:43.524815 | orchestrator | 00:01:43.524 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-04-17 00:01:43.533556 | orchestrator | 00:01:43.533 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=b3f80e2ae794ae0c83aa5b7fe8d0405613338148] 2025-04-17 00:01:43.537933 | orchestrator | 00:01:43.537 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=6309ce49-a4ed-4da7-82b1-29aa79f26650] 2025-04-17 00:01:43.546614 | orchestrator | 00:01:43.546 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-04-17 00:01:43.546976 | orchestrator | 00:01:43.546 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-04-17 00:01:43.570708 | orchestrator | 00:01:43.570 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 11s [id=95e37f14-95e8-4165-b353-fd53fdf52cdb] 2025-04-17 00:01:43.578587 | orchestrator | 00:01:43.578 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-04-17 00:01:43.591529 | orchestrator | 00:01:43.591 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=70c2d06b-89ef-4a1b-882c-e0d752f0d1e2] 2025-04-17 00:01:43.594834 | orchestrator | 00:01:43.594 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=29eb77c3-a4eb-47de-bcfc-90cea0292ee8] 2025-04-17 00:01:43.596500 | orchestrator | 00:01:43.596 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-04-17 00:01:43.600080 | orchestrator | 00:01:43.599 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-04-17 00:01:43.609078 | orchestrator | 00:01:43.608 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 11s [id=bef8d693-736b-4549-b698-ce9e87082908] 2025-04-17 00:01:43.615527 | orchestrator | 00:01:43.615 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-04-17 00:01:43.644803 | orchestrator | 00:01:43.644 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-04-17 00:01:43.651678 | orchestrator | 00:01:43.651 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 11s [id=42d2e0a2-f124-4e98-b4f2-6b7948e65700] 2025-04-17 00:01:43.714200 | orchestrator | 00:01:43.713 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... 
[10s elapsed] 2025-04-17 00:01:43.877517 | orchestrator | 00:01:43.877 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=c189cae0-1e0d-4eb8-9970-e970e21b9a89] 2025-04-17 00:01:44.066055 | orchestrator | 00:01:44.065 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6] 2025-04-17 00:01:48.745370 | orchestrator | 00:01:48.745 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-04-17 00:01:49.056494 | orchestrator | 00:01:49.056 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=711aec66-68ef-4548-aa3b-8fad97a96fa9] 2025-04-17 00:01:49.301988 | orchestrator | 00:01:49.301 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 5s [id=cdf66c2a-fec6-40f8-9764-1041b3e5032a] 2025-04-17 00:01:49.314234 | orchestrator | 00:01:49.314 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-04-17 00:01:53.547323 | orchestrator | 00:01:53.546 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-04-17 00:01:53.547663 | orchestrator | 00:01:53.547 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-04-17 00:01:53.580148 | orchestrator | 00:01:53.579 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-04-17 00:01:53.597606 | orchestrator | 00:01:53.597 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-04-17 00:01:53.600989 | orchestrator | 00:01:53.600 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-04-17 00:01:53.928315 | orchestrator | 00:01:53.927 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca] 2025-04-17 00:01:53.929341 | orchestrator | 00:01:53.929 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a] 2025-04-17 00:01:53.931317 | orchestrator | 00:01:53.931 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=5bc17d26-642e-48ef-be74-33669ffc4589] 2025-04-17 00:01:53.974723 | orchestrator | 00:01:53.974 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=50613f91-bf61-4c9a-a390-8817e7641e38] 2025-04-17 00:01:53.999758 | orchestrator | 00:01:53.999 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96] 2025-04-17 00:01:56.134890 | orchestrator | 00:01:56.134 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=a173f13b-c435-417c-9964-76f71c3a70a3] 2025-04-17 00:01:56.141494 | orchestrator | 00:01:56.141 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-04-17 00:01:56.143472 | orchestrator | 00:01:56.143 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-04-17 00:01:56.145237 | orchestrator | 00:01:56.144 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 
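The management subnet, router, and router interface that just finished creating correspond to the plan entries printed earlier; reconstructed in HCL from those literals, with the network reference as the one obvious assumption.

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP hands out addresses well away from the statically assigned ports.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}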
2025-04-17 00:01:56.287542 | orchestrator | 00:01:56.287 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=19e3dc75-05c5-43f3-9bb1-b8e11fd891b3] 2025-04-17 00:01:56.294837 | orchestrator | 00:01:56.294 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-04-17 00:01:56.295523 | orchestrator | 00:01:56.295 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-04-17 00:01:56.298358 | orchestrator | 00:01:56.298 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-04-17 00:01:56.302126 | orchestrator | 00:01:56.301 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-04-17 00:01:56.304485 | orchestrator | 00:01:56.304 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-04-17 00:01:56.305643 | orchestrator | 00:01:56.305 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-04-17 00:01:56.310619 | orchestrator | 00:01:56.310 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=8418afb8-253c-425b-831e-22a74c1bc47f] 2025-04-17 00:01:56.319367 | orchestrator | 00:01:56.319 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-04-17 00:01:56.319418 | orchestrator | 00:01:56.319 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-04-17 00:01:56.324989 | orchestrator | 00:01:56.324 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-04-17 00:01:56.441439 | orchestrator | 00:01:56.440 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=2608f06f-17e4-44a8-8021-87a3c3a790dc] 2025-04-17 00:01:56.448089 | orchestrator | 00:01:56.447 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-04-17 00:01:56.534693 | orchestrator | 00:01:56.534 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=7a755da0-577b-430a-a5a8-cab27ceb2642] 2025-04-17 00:01:56.538630 | orchestrator | 00:01:56.538 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=df7061fa-8895-4435-bdd2-756eb7fe8072] 2025-04-17 00:01:56.545725 | orchestrator | 00:01:56.545 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-04-17 00:01:56.550203 | orchestrator | 00:01:56.550 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-04-17 00:01:56.726326 | orchestrator | 00:01:56.725 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=a2a761d1-43ac-42b6-a28a-839308d0c26d] 2025-04-17 00:01:56.740632 | orchestrator | 00:01:56.740 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-04-17 00:01:56.809777 | orchestrator | 00:01:56.809 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=d0672808-018f-480b-9bbb-5813a1e73b5c] 2025-04-17 00:01:56.825003 | orchestrator | 00:01:56.824 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 
2025-04-17 00:01:56.964410 | orchestrator | 00:01:56.963 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=0e0ebe08-c282-463a-97ac-006170b2d1fe] 2025-04-17 00:01:56.975923 | orchestrator | 00:01:56.975 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-04-17 00:01:57.119232 | orchestrator | 00:01:57.118 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=55dcfdae-a8f2-4245-93fa-fb2171694f4e] 2025-04-17 00:01:57.125765 | orchestrator | 00:01:57.125 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-04-17 00:01:57.232225 | orchestrator | 00:01:57.231 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=3b9f370e-a406-4656-821c-6c14ff5f1b6d] 2025-04-17 00:01:57.431019 | orchestrator | 00:01:57.430 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=f9f330a6-a791-433b-b1cd-a95feb7cd13a] 2025-04-17 00:02:02.022629 | orchestrator | 00:02:02.022 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=58982994-dd60-4369-88da-6f5764f54406] 2025-04-17 00:02:02.182153 | orchestrator | 00:02:02.181 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 5s [id=16ac224b-16d5-4e5e-8697-b25e28f8dd8d] 2025-04-17 00:02:02.205998 | orchestrator | 00:02:02.205 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=f7e45ef1-3323-4bfd-aa05-46c6aac5db31] 2025-04-17 00:02:02.514395 | orchestrator | 00:02:02.513 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=2211eaf3-57ce-43a6-9ee3-7f3964e36264] 2025-04-17 00:02:02.613731 | orchestrator | 00:02:02.613 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=35ec5557-34ff-4794-92ee-3ff62fcaed3b] 2025-04-17 00:02:02.622138 | orchestrator | 00:02:02.621 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=ee9fb66b-aa86-4a92-bd68-645dbc833233] 2025-04-17 00:02:02.788682 | orchestrator | 00:02:02.788 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=1b68d971-8f31-4c89-83fc-78c6fc768b1d] 2025-04-17 00:02:02.794644 | orchestrator | 00:02:02.794 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-04-17 00:02:02.844427 | orchestrator | 00:02:02.844 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=aca7850d-0fe3-4981-87de-312b710b5df6] 2025-04-17 00:02:02.867310 | orchestrator | 00:02:02.867 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-04-17 00:02:02.879183 | orchestrator | 00:02:02.878 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-04-17 00:02:02.880767 | orchestrator | 00:02:02.880 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-04-17 00:02:02.896721 | orchestrator | 00:02:02.880 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-04-17 00:02:02.896806 | orchestrator | 00:02:02.896 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 
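The manager's floating IP is allocated, bound to the management port, and the address is written to a local file for later job stages. A sketch; the pool name and file path are assumptions, everything else follows the resource names in the log.

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "external" # assumption: the pool name is not visible in this excerpt
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}

resource "local_file" "MANAGER_ADDRESS" {
  filename = "${path.module}/.MANAGER_ADDRESS" # assumed path
  content  = openstack_networking_floatingip_v2.manager_floating_ip.address
}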
2025-04-17 00:02:02.897817 | orchestrator | 00:02:02.897 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-04-17 00:02:09.226300 | orchestrator | 00:02:09.225 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=6c3e4af0-d40b-402b-8804-76f04e30066f] 2025-04-17 00:02:09.257141 | orchestrator | 00:02:09.256 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-04-17 00:02:09.261152 | orchestrator | 00:02:09.261 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-04-17 00:02:09.261494 | orchestrator | 00:02:09.261 STDOUT terraform: local_file.inventory: Creating... 2025-04-17 00:02:09.265449 | orchestrator | 00:02:09.265 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=5fc7fbdfb247075c43fbfe4598518feb83099664] 2025-04-17 00:02:09.266895 | orchestrator | 00:02:09.266 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=724e58d95c1aa0b006e7fba4acc9386271c6af5f] 2025-04-17 00:02:09.770316 | orchestrator | 00:02:09.769 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=6c3e4af0-d40b-402b-8804-76f04e30066f] 2025-04-17 00:02:12.871148 | orchestrator | 00:02:12.870 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-04-17 00:02:12.880463 | orchestrator | 00:02:12.880 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-04-17 00:02:12.881514 | orchestrator | 00:02:12.881 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-04-17 00:02:12.881646 | orchestrator | 00:02:12.881 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-04-17 00:02:12.899997 | orchestrator | 00:02:12.899 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-04-17 00:02:12.900116 | orchestrator | 00:02:12.899 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-04-17 00:02:17.105598 | orchestrator | 00:02:17.105 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 14s [id=d938a456-9578-4edb-88ea-d5be09e9d2d6] 2025-04-17 00:02:22.871445 | orchestrator | 00:02:22.871 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-04-17 00:02:22.880642 | orchestrator | 00:02:22.880 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-04-17 00:02:22.881845 | orchestrator | 00:02:22.881 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-04-17 00:02:22.900773 | orchestrator | 00:02:22.881 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-04-17 00:02:22.900789 | orchestrator | 00:02:22.900 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... 
[20s elapsed] 2025-04-17 00:02:23.340328 | orchestrator | 00:02:23.339 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=d9e7d0ce-f40d-4666-a5b6-e333b8fd0094] 2025-04-17 00:02:23.363267 | orchestrator | 00:02:23.362 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=fc08a0b7-c916-4b9c-901b-59355247b98f] 2025-04-17 00:02:23.382244 | orchestrator | 00:02:23.381 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=93165c4b-33c0-45d8-9a90-adc755f486b8] 2025-04-17 00:02:23.394984 | orchestrator | 00:02:23.394 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=1cbbbf99-0407-4b12-b03c-b61451b6e5ab] 2025-04-17 00:02:23.512178 | orchestrator | 00:02:23.511 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=bc0c06d2-212b-4d39-b527-cb545e389ba8] 2025-04-17 00:02:23.533359 | orchestrator | 00:02:23.532 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-04-17 00:02:23.534644 | orchestrator | 00:02:23.534 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=3892307147685975434] 2025-04-17 00:02:23.542423 | orchestrator | 00:02:23.542 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating... 2025-04-17 00:02:23.543773 | orchestrator | 00:02:23.543 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-04-17 00:02:23.559598 | orchestrator | 00:02:23.559 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-04-17 00:02:23.571990 | orchestrator | 00:02:23.571 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating... 2025-04-17 00:02:23.575396 | orchestrator | 00:02:23.575 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-04-17 00:02:23.580678 | orchestrator | 00:02:23.580 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating... 2025-04-17 00:02:23.586746 | orchestrator | 00:02:23.582 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-04-17 00:02:23.591440 | orchestrator | 00:02:23.591 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-04-17 00:02:23.600583 | orchestrator | 00:02:23.600 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-04-17 00:02:23.605049 | orchestrator | 00:02:23.604 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating... 2025-04-17 00:02:28.884749 | orchestrator | 00:02:28.884 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=fc08a0b7-c916-4b9c-901b-59355247b98f/0367f9b0-3a71-47a7-a8bd-9e2816c4d242] 2025-04-17 00:02:28.895034 | orchestrator | 00:02:28.894 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-04-17 00:02:28.903698 | orchestrator | 00:02:28.903 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 5s [id=93165c4b-33c0-45d8-9a90-adc755f486b8/42d2e0a2-f124-4e98-b4f2-6b7948e65700] 2025-04-17 00:02:28.916743 | orchestrator | 00:02:28.916 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating... 
2025-04-17 00:02:28.916920 | orchestrator | 00:02:28.916 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=d938a456-9578-4edb-88ea-d5be09e9d2d6/25683219-5c0a-4b96-92c9-99d674025eb1] 2025-04-17 00:02:28.924837 | orchestrator | 00:02:28.924 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-04-17 00:02:28.951157 | orchestrator | 00:02:28.950 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 5s [id=fc08a0b7-c916-4b9c-901b-59355247b98f/e9224846-b1ba-4847-a73a-6715887089fb] 2025-04-17 00:02:28.952588 | orchestrator | 00:02:28.952 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 5s [id=1cbbbf99-0407-4b12-b03c-b61451b6e5ab/95e37f14-95e8-4165-b353-fd53fdf52cdb] 2025-04-17 00:02:28.967953 | orchestrator | 00:02:28.967 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating... 2025-04-17 00:02:28.969231 | orchestrator | 00:02:28.969 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating... 2025-04-17 00:02:28.976985 | orchestrator | 00:02:28.976 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=d9e7d0ce-f40d-4666-a5b6-e333b8fd0094/7d7eac16-cb9a-452c-8088-f21cbc7102b1] 2025-04-17 00:02:28.978008 | orchestrator | 00:02:28.977 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=93165c4b-33c0-45d8-9a90-adc755f486b8/6309ce49-a4ed-4da7-82b1-29aa79f26650] 2025-04-17 00:02:28.980055 | orchestrator | 00:02:28.979 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=bc0c06d2-212b-4d39-b527-cb545e389ba8/2cc5c4f7-7927-43eb-bfd2-3f01b9eb04d9] 2025-04-17 00:02:28.991341 | orchestrator | 00:02:28.990 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=d9e7d0ce-f40d-4666-a5b6-e333b8fd0094/29eb77c3-a4eb-47de-bcfc-90cea0292ee8] 2025-04-17 00:02:28.994982 | orchestrator | 00:02:28.994 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 5s [id=fc08a0b7-c916-4b9c-901b-59355247b98f/8bcc068e-17b6-4e9f-accd-8ac12579d6f0] 2025-04-17 00:02:28.995812 | orchestrator | 00:02:28.995 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating... 2025-04-17 00:02:29.002122 | orchestrator | 00:02:29.001 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating... 2025-04-17 00:02:29.011453 | orchestrator | 00:02:29.001 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-04-17 00:02:29.011521 | orchestrator | 00:02:29.011 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
2025-04-17 00:02:34.209670 | orchestrator | 00:02:34.209 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=1cbbbf99-0407-4b12-b03c-b61451b6e5ab/c189cae0-1e0d-4eb8-9970-e970e21b9a89] 2025-04-17 00:02:34.239803 | orchestrator | 00:02:34.239 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 5s [id=bc0c06d2-212b-4d39-b527-cb545e389ba8/d2310849-ed6c-49c2-b9e0-f9c06c6339c9] 2025-04-17 00:02:34.247578 | orchestrator | 00:02:34.247 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=d938a456-9578-4edb-88ea-d5be09e9d2d6/70c2d06b-89ef-4a1b-882c-e0d752f0d1e2] 2025-04-17 00:02:34.268217 | orchestrator | 00:02:34.267 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 5s [id=d9e7d0ce-f40d-4666-a5b6-e333b8fd0094/6e4fe9eb-5e43-4aa2-9b37-d2398fe01f7b] 2025-04-17 00:02:34.299567 | orchestrator | 00:02:34.298 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 5s [id=d938a456-9578-4edb-88ea-d5be09e9d2d6/d0102145-e326-42a8-9189-9b289697f2f1] 2025-04-17 00:02:34.317534 | orchestrator | 00:02:34.317 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 5s [id=1cbbbf99-0407-4b12-b03c-b61451b6e5ab/bef8d693-736b-4549-b698-ce9e87082908] 2025-04-17 00:02:34.329117 | orchestrator | 00:02:34.328 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 5s [id=93165c4b-33c0-45d8-9a90-adc755f486b8/c4c813ed-e09b-49ac-b96f-625695efceb2] 2025-04-17 00:02:34.351884 | orchestrator | 00:02:34.351 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=bc0c06d2-212b-4d39-b527-cb545e389ba8/19544825-ba43-4bb1-8c25-64db59cc98e2] 2025-04-17 00:02:39.014436 | orchestrator | 00:02:39.013 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-04-17 00:02:49.016992 | orchestrator | 00:02:49.016 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-04-17 00:02:49.631266 | orchestrator | 00:02:49.630 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=724c11e0-4773-4175-b2d2-3f9994bbafee] 2025-04-17 00:02:49.653890 | orchestrator | 00:02:49.653 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed. 
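The apply above finishes with all 82 resources created; the Terraform outputs and the task's single aggregated "changed" result follow below. The whole Terraform run is driven from an Ansible task on the orchestrator. One way such a wrapper can look is sketched here with the community.general.terraform module — the project path and registered variable names are assumptions for illustration, not the testbed's actual playbook:

    # Hypothetical sketch: driving a Terraform apply from Ansible, as this job does.
    # project_path is an assumed location, not the testbed's real layout.
    - name: Apply testbed infrastructure with Terraform
      community.general.terraform:
        project_path: /opt/terraform   # directory containing the .tf files
        state: present
        force_init: true               # run `terraform init` before plan/apply
      register: tf_result

    - name: Set manager_host address from the Terraform outputs
      ansible.builtin.set_fact:
        manager_host: "{{ tf_result.outputs.manager_address.value }}"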
2025-04-17 00:02:49.653954 | orchestrator | 00:02:49.653 STDOUT terraform: Outputs: 2025-04-17 00:02:49.653971 | orchestrator | 00:02:49.653 STDOUT terraform: manager_address = 2025-04-17 00:02:49.661966 | orchestrator | 00:02:49.653 STDOUT terraform: private_key = 2025-04-17 00:02:59.759335 | orchestrator | changed 2025-04-17 00:02:59.797814 | 2025-04-17 00:02:59.797943 | TASK [Fetch manager address] 2025-04-17 00:03:00.281846 | orchestrator | ok 2025-04-17 00:03:00.293939 | 2025-04-17 00:03:00.294112 | TASK [Set manager_host address] 2025-04-17 00:03:00.405624 | orchestrator | ok 2025-04-17 00:03:00.416774 | 2025-04-17 00:03:00.416908 | LOOP [Update ansible collections] 2025-04-17 00:03:01.273073 | orchestrator | changed 2025-04-17 00:03:02.014478 | orchestrator | changed 2025-04-17 00:03:02.040772 | 2025-04-17 00:03:02.041053 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-04-17 00:03:12.909275 | orchestrator | ok 2025-04-17 00:03:12.921470 | 2025-04-17 00:03:12.921575 | TASK [Wait a little longer for the manager so that everything is ready] 2025-04-17 00:04:12.979763 | orchestrator | ok 2025-04-17 00:04:12.996262 | 2025-04-17 00:04:12.996447 | TASK [Fetch manager ssh hostkey] 2025-04-17 00:04:14.094897 | orchestrator | Output suppressed because no_log was given 2025-04-17 00:04:14.110445 | 2025-04-17 00:04:14.110592 | TASK [Get ssh keypair from terraform environment] 2025-04-17 00:04:14.658008 | orchestrator | changed 2025-04-17 00:04:14.675345 | 2025-04-17 00:04:14.675495 | TASK [Point out that the following task takes some time and does not give any output] 2025-04-17 00:04:14.725500 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
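The two wait tasks a few entries above follow the stock Ansible pattern for blocking until a freshly created host really accepts SSH: wait_for with a search_regex, so a half-open port does not count as ready, followed by a fixed pause. A minimal sketch matching the task names in this log — the manager_host variable is assumed, and the 60-second pause is inferred from the one-minute gap in the timestamps above:

    - name: Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"
      ansible.builtin.wait_for:
        host: "{{ manager_host }}"   # assumed variable holding the manager's address
        port: 22
        search_regex: OpenSSH        # require the SSH banner, not just a TCP connect
        timeout: 300

    - name: Wait a little longer for the manager so that everything is ready
      ansible.builtin.pause:
        seconds: 60                  # matches the one-minute gap in this log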
2025-04-17 00:04:14.736210 | 2025-04-17 00:04:14.736332 | TASK [Run manager part 0] 2025-04-17 00:04:15.553893 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-04-17 00:04:15.592253 | orchestrator | 2025-04-17 00:04:17.264481 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-04-17 00:04:17.264543 | orchestrator | 2025-04-17 00:04:17.264565 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-04-17 00:04:17.264582 | orchestrator | ok: [testbed-manager] 2025-04-17 00:04:19.072930 | orchestrator | 2025-04-17 00:04:19.072982 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-04-17 00:04:19.072993 | orchestrator | 2025-04-17 00:04:19.072999 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-17 00:04:19.073010 | orchestrator | ok: [testbed-manager] 2025-04-17 00:04:19.657876 | orchestrator | 2025-04-17 00:04:19.657919 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-04-17 00:04:19.657933 | orchestrator | ok: [testbed-manager] 2025-04-17 00:04:19.717179 | orchestrator | 2025-04-17 00:04:19.717234 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-04-17 00:04:19.717253 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:04:19.749073 | orchestrator | 2025-04-17 00:04:19.749129 | orchestrator | TASK [Update package cache] **************************************************** 2025-04-17 00:04:19.749144 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:04:19.772015 | orchestrator | 2025-04-17 00:04:19.772098 | orchestrator | TASK [Install required packages] *********************************************** 2025-04-17 00:04:19.772112 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:04:19.793509 | orchestrator | 2025-04-17 00:04:19.793555 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-04-17 00:04:19.793569 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:04:19.816028 | orchestrator | 2025-04-17 00:04:19.816071 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-04-17 00:04:19.816086 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:04:19.839498 | orchestrator | 2025-04-17 00:04:19.839543 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-04-17 00:04:19.839556 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:04:19.862703 | orchestrator | 2025-04-17 00:04:19.862747 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-04-17 00:04:19.862761 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:04:20.683971 | orchestrator | 2025-04-17 00:04:20.684055 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-04-17 00:04:20.684077 | orchestrator | changed: [testbed-manager] 2025-04-17 00:07:41.087015 | orchestrator | 2025-04-17 00:07:41.088807 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-04-17 00:07:41.090403 | orchestrator | changed: [testbed-manager] 2025-04-17 00:09:00.086562 | orchestrator | 2025-04-17 00:09:00.086697 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-04-17 00:09:00.086732 | orchestrator | changed: [testbed-manager] 2025-04-17 00:09:25.435225 | orchestrator | 2025-04-17 00:09:25.435376 | orchestrator | TASK [Install required packages] *********************************************** 2025-04-17 00:09:25.435418 | orchestrator | changed: [testbed-manager] 2025-04-17 00:09:36.481462 | orchestrator | 2025-04-17 00:09:36.481640 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-04-17 00:09:36.481691 | orchestrator | changed: [testbed-manager] 2025-04-17 00:09:36.532811 | orchestrator | 2025-04-17 00:09:36.532925 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-04-17 00:09:36.532971 | orchestrator | ok: [testbed-manager] 2025-04-17 00:09:37.349841 | orchestrator | 2025-04-17 00:09:37.349909 | orchestrator | TASK [Get current user] ******************************************************** 2025-04-17 00:09:37.349928 | orchestrator | ok: [testbed-manager] 2025-04-17 00:09:39.867006 | orchestrator | 2025-04-17 00:09:39.867137 | orchestrator | TASK [Create venv directory] *************************************************** 2025-04-17 00:09:39.868676 | orchestrator | changed: [testbed-manager] 2025-04-17 00:09:46.738451 | orchestrator | 2025-04-17 00:09:46.738554 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-04-17 00:09:46.738582 | orchestrator | changed: [testbed-manager] 2025-04-17 00:09:54.692241 | orchestrator | 2025-04-17 00:09:54.692343 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-04-17 00:09:54.692411 | orchestrator | changed: [testbed-manager] 2025-04-17 00:09:57.369135 | orchestrator | 2025-04-17 00:09:57.369204 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-04-17 00:09:57.369225 | orchestrator | changed: [testbed-manager] 2025-04-17 00:09:59.234259 | orchestrator | 2025-04-17 00:09:59.234329 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-04-17 00:09:59.234354 | orchestrator | changed: [testbed-manager] 2025-04-17 00:10:00.463367 | orchestrator | 2025-04-17 00:10:00.463527 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-04-17 00:10:00.463569 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-04-17 00:10:00.507121 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-04-17 00:10:00.507240 | orchestrator | 2025-04-17 00:10:00.507263 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-04-17 00:10:00.507297 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-04-17 00:10:03.989576 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-04-17 00:10:03.989726 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-04-17 00:10:03.989746 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-04-17 00:10:03.989780 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-04-17 00:10:04.580564 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-04-17 00:10:04.580701 | orchestrator | 2025-04-17 00:10:04.580715 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-04-17 00:10:04.580745 | orchestrator | changed: [testbed-manager] 2025-04-17 00:10:27.635529 | orchestrator | 2025-04-17 00:10:27.635715 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-04-17 00:10:27.635757 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-04-17 00:10:30.076675 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-04-17 00:10:30.076796 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-04-17 00:10:30.076818 | orchestrator | 2025-04-17 00:10:30.076838 | orchestrator | TASK [Install local collections] *********************************************** 2025-04-17 00:10:30.076872 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-04-17 00:10:31.539616 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-04-17 00:10:31.539724 | orchestrator | 2025-04-17 00:10:31.539739 | orchestrator | PLAY [Create operator user] **************************************************** 2025-04-17 00:10:31.539752 | orchestrator | 2025-04-17 00:10:31.539764 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-17 00:10:31.539790 | orchestrator | ok: [testbed-manager] 2025-04-17 00:10:31.582091 | orchestrator | 2025-04-17 00:10:31.582209 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-04-17 00:10:31.582244 | orchestrator | ok: [testbed-manager] 2025-04-17 00:10:31.644333 | orchestrator | 2025-04-17 00:10:31.644506 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-04-17 00:10:31.644543 | orchestrator | ok: [testbed-manager] 2025-04-17 00:10:32.468162 | orchestrator | 2025-04-17 00:10:32.468855 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-04-17 00:10:32.468907 | orchestrator | changed: [testbed-manager] 2025-04-17 00:10:33.205717 | orchestrator | 2025-04-17 00:10:33.205839 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-04-17 00:10:33.205877 | orchestrator | changed: [testbed-manager] 2025-04-17 00:10:34.698181 | orchestrator | 2025-04-17 00:10:34.698283 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-04-17 00:10:34.698318 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-04-17 00:10:36.159847 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-04-17 00:10:36.159977 | orchestrator | 2025-04-17 00:10:36.159998 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-04-17 00:10:36.160031 | orchestrator | changed: [testbed-manager] 2025-04-17 00:10:37.962152 | orchestrator | 2025-04-17 00:10:37.962323 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-04-17 00:10:37.962364 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-04-17 
00:10:38.627186 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-04-17 00:10:38.627309 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-04-17 00:10:38.627332 | orchestrator | 2025-04-17 00:10:38.627348 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-04-17 00:10:38.627402 | orchestrator | changed: [testbed-manager] 2025-04-17 00:10:38.700459 | orchestrator | 2025-04-17 00:10:38.700588 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-04-17 00:10:38.700625 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:10:39.612316 | orchestrator | 2025-04-17 00:10:39.612450 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-04-17 00:10:39.612484 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-17 00:10:39.650567 | orchestrator | changed: [testbed-manager] 2025-04-17 00:10:39.650625 | orchestrator | 2025-04-17 00:10:39.650641 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-04-17 00:10:39.650666 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:10:39.680124 | orchestrator | 2025-04-17 00:10:39.680198 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-04-17 00:10:39.680224 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:10:39.709444 | orchestrator | 2025-04-17 00:10:39.709543 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-04-17 00:10:39.709573 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:10:39.762706 | orchestrator | 2025-04-17 00:10:39.762790 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-04-17 00:10:39.762819 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:10:40.554567 | orchestrator | 2025-04-17 00:10:40.554709 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-04-17 00:10:40.554755 | orchestrator | ok: [testbed-manager] 2025-04-17 00:10:42.073585 | orchestrator | 2025-04-17 00:10:42.073708 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-04-17 00:10:42.073730 | orchestrator | 2025-04-17 00:10:42.073745 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-17 00:10:42.073777 | orchestrator | ok: [testbed-manager] 2025-04-17 00:10:43.077020 | orchestrator | 2025-04-17 00:10:43.077143 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-04-17 00:10:43.077191 | orchestrator | changed: [testbed-manager] 2025-04-17 00:10:43.226320 | orchestrator | 2025-04-17 00:10:43.226931 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 00:10:43.226962 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-04-17 00:10:43.226983 | orchestrator | 2025-04-17 00:10:43.568123 | orchestrator | changed 2025-04-17 00:10:43.589674 | 2025-04-17 00:10:43.589819 | TASK [Point out that logging in to the manager is now possible] 2025-04-17 00:10:43.640470 | orchestrator | ok: It is now possible to log in to the manager with 'make login'.
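The osism.commons.operator role that just ran creates the operator account the rest of the deployment uses (the /home/dragon paths later in this log show that user is "dragon"). A rough equivalent of the logged steps in plain builtin and ansible.posix modules — a sketch only, not the role's actual code; the key variable name is invented:

    # Rough equivalent of the operator-role steps logged above, not the role itself.
    - name: Create operator group
      ansible.builtin.group:
        name: dragon
    - name: Create user
      ansible.builtin.user:
        name: dragon
        group: dragon
        groups: [adm, sudo]        # the "additional groups" items above
        append: true
        shell: /bin/bash
        password_lock: true        # roughly the "Unset & lock password" step
    - name: Set ssh authorized keys
      ansible.posix.authorized_key:
        user: dragon
        key: "{{ operator_public_key }}"   # hypothetical variable name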
2025-04-17 00:10:43.651767 | 2025-04-17 00:10:43.651876 | TASK [Point out that the following task takes some time and does not give any output] 2025-04-17 00:10:43.702093 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-04-17 00:10:43.712338 | 2025-04-17 00:10:43.712473 | TASK [Run manager part 1 + 2] 2025-04-17 00:10:44.585444 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-04-17 00:10:44.644359 | orchestrator | 2025-04-17 00:10:47.203137 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-04-17 00:10:47.203282 | orchestrator | 2025-04-17 00:10:47.203305 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-17 00:10:47.203337 | orchestrator | ok: [testbed-manager] 2025-04-17 00:10:47.228246 | orchestrator | 2025-04-17 00:10:47.228349 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-04-17 00:10:47.228395 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:10:47.260691 | orchestrator | 2025-04-17 00:10:47.260818 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-04-17 00:10:47.260860 | orchestrator | ok: [testbed-manager] 2025-04-17 00:10:47.305021 | orchestrator | 2025-04-17 00:10:47.305149 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-04-17 00:10:47.305205 | orchestrator | ok: [testbed-manager] 2025-04-17 00:10:47.371025 | orchestrator | 2025-04-17 00:10:47.371159 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-04-17 00:10:47.371202 | orchestrator | ok: [testbed-manager] 2025-04-17 00:10:47.471188 | orchestrator | 2025-04-17 00:10:47.471331 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-04-17 00:10:47.471398 | orchestrator | ok: [testbed-manager] 2025-04-17 00:10:47.531251 | orchestrator | 2025-04-17 00:10:47.531412 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-04-17 00:10:47.531451 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-04-17 00:10:48.265397 | orchestrator | 2025-04-17 00:10:48.265546 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-04-17 00:10:48.265597 | orchestrator | ok: [testbed-manager] 2025-04-17 00:10:48.309280 | orchestrator | 2025-04-17 00:10:48.309435 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-04-17 00:10:48.309473 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:10:49.755352 | orchestrator | 2025-04-17 00:10:49.755494 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-04-17 00:10:49.755542 | orchestrator | changed: [testbed-manager] 2025-04-17 00:10:50.356509 | orchestrator | 2025-04-17 00:10:50.356583 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-04-17 00:10:50.356601 | orchestrator | ok: [testbed-manager] 2025-04-17 00:10:51.560805 | orchestrator | 2025-04-17 00:10:51.561063 | orchestrator | TASK
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-04-17 00:10:51.561104 | orchestrator | changed: [testbed-manager] 2025-04-17 00:11:04.691900 | orchestrator | 2025-04-17 00:11:04.692002 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-04-17 00:11:04.692036 | orchestrator | changed: [testbed-manager] 2025-04-17 00:11:05.324965 | orchestrator | 2025-04-17 00:11:05.325047 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-04-17 00:11:05.325074 | orchestrator | ok: [testbed-manager] 2025-04-17 00:11:05.379400 | orchestrator | 2025-04-17 00:11:05.379498 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-04-17 00:11:05.379531 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:11:06.261403 | orchestrator | 2025-04-17 00:11:06.261503 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-04-17 00:11:06.261538 | orchestrator | changed: [testbed-manager] 2025-04-17 00:11:07.114992 | orchestrator | 2025-04-17 00:11:07.115040 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-04-17 00:11:07.115055 | orchestrator | changed: [testbed-manager] 2025-04-17 00:11:07.627121 | orchestrator | 2025-04-17 00:11:07.627163 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-04-17 00:11:07.627178 | orchestrator | changed: [testbed-manager] 2025-04-17 00:11:07.665400 | orchestrator | 2025-04-17 00:11:07.665484 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-04-17 00:11:07.665517 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-04-17 00:11:09.935903 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-04-17 00:11:09.935971 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-04-17 00:11:09.935981 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-04-17 00:11:09.935998 | orchestrator | changed: [testbed-manager] 2025-04-17 00:11:19.230165 | orchestrator | 2025-04-17 00:11:19.230298 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-04-17 00:11:19.230364 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-04-17 00:11:20.260688 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-04-17 00:11:20.261531 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-04-17 00:11:20.261557 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-04-17 00:11:20.261576 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-04-17 00:11:20.261592 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-04-17 00:11:20.261608 | orchestrator | 2025-04-17 00:11:20.261625 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-04-17 00:11:20.261672 | orchestrator | changed: [testbed-manager] 2025-04-17 00:11:20.315779 | orchestrator | 2025-04-17 00:11:20.315908 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-04-17 00:11:20.315957 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:11:23.534887 | orchestrator | 2025-04-17 00:11:23.534965 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-04-17 00:11:23.534989 | orchestrator | changed: [testbed-manager] 2025-04-17 00:11:23.576640 | orchestrator | 2025-04-17 00:11:23.576714 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-04-17 00:11:23.576734 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:12:55.997571 | orchestrator | 2025-04-17 00:12:55.997744 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-04-17 00:12:55.997783 | orchestrator | changed: [testbed-manager] 2025-04-17 00:12:57.136411 | orchestrator | 2025-04-17 00:12:57.136532 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-17 00:12:57.136570 | orchestrator | ok: [testbed-manager] 2025-04-17 00:12:57.261464 | orchestrator | 2025-04-17 00:12:57.261594 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 00:12:57.261619 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-04-17 00:12:57.261634 | orchestrator | 2025-04-17 00:12:57.347117 | orchestrator | changed 2025-04-17 00:12:57.364902 | 2025-04-17 00:12:57.365050 | TASK [Reboot manager] 2025-04-17 00:12:58.909485 | orchestrator | changed 2025-04-17 00:12:58.929226 | 2025-04-17 00:12:58.929377 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-04-17 00:13:13.300745 | orchestrator | ok 2025-04-17 00:13:13.310089 | 2025-04-17 00:13:13.310217 | TASK [Wait a little longer for the manager so that everything is ready] 2025-04-17 00:14:13.359651 | orchestrator | ok 2025-04-17 00:14:13.371081 | 2025-04-17 00:14:13.371224 | TASK [Deploy manager + bootstrap nodes] 2025-04-17 00:14:15.930245 | orchestrator | 2025-04-17 00:14:15.933869 | orchestrator | # DEPLOY MANAGER 2025-04-17 00:14:15.933912 | orchestrator | 2025-04-17 00:14:15.933931 | orchestrator | + set -e 2025-04-17 00:14:15.933977 | orchestrator | + echo 2025-04-17 00:14:15.933996 | orchestrator | + echo '# DEPLOY MANAGER' 2025-04-17 00:14:15.934014 | 
orchestrator | + echo 2025-04-17 00:14:15.934087 | orchestrator | + cat /opt/manager-vars.sh 2025-04-17 00:14:15.934155 | orchestrator | export NUMBER_OF_NODES=6 2025-04-17 00:14:15.934354 | orchestrator | 2025-04-17 00:14:15.934377 | orchestrator | export CEPH_VERSION=quincy 2025-04-17 00:14:15.934392 | orchestrator | export CONFIGURATION_VERSION=main 2025-04-17 00:14:15.934406 | orchestrator | export MANAGER_VERSION=8.1.0 2025-04-17 00:14:15.934420 | orchestrator | export OPENSTACK_VERSION=2024.1 2025-04-17 00:14:15.934434 | orchestrator | 2025-04-17 00:14:15.934449 | orchestrator | export ARA=false 2025-04-17 00:14:15.934464 | orchestrator | export TEMPEST=false 2025-04-17 00:14:15.934478 | orchestrator | export IS_ZUUL=true 2025-04-17 00:14:15.934493 | orchestrator | 2025-04-17 00:14:15.934506 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.47 2025-04-17 00:14:15.934521 | orchestrator | export EXTERNAL_API=false 2025-04-17 00:14:15.934535 | orchestrator | 2025-04-17 00:14:15.934549 | orchestrator | export IMAGE_USER=ubuntu 2025-04-17 00:14:15.934563 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-04-17 00:14:15.934577 | orchestrator | 2025-04-17 00:14:15.934591 | orchestrator | export CEPH_STACK=ceph-ansible 2025-04-17 00:14:15.934610 | orchestrator | 2025-04-17 00:14:15.935305 | orchestrator | + echo 2025-04-17 00:14:15.935329 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-17 00:14:15.935352 | orchestrator | ++ export INTERACTIVE=false 2025-04-17 00:14:15.936068 | orchestrator | ++ INTERACTIVE=false 2025-04-17 00:14:15.936190 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-17 00:14:15.936216 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-17 00:14:15.936241 | orchestrator | + source /opt/manager-vars.sh 2025-04-17 00:14:15.936320 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-17 00:14:15.936333 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-17 00:14:15.936343 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-17 00:14:15.936352 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-17 00:14:15.936362 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-17 00:14:15.936372 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-17 00:14:15.936391 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-04-17 00:14:15.936401 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-04-17 00:14:15.936410 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-17 00:14:15.936420 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-17 00:14:15.936429 | orchestrator | ++ export ARA=false 2025-04-17 00:14:15.936439 | orchestrator | ++ ARA=false 2025-04-17 00:14:15.936448 | orchestrator | ++ export TEMPEST=false 2025-04-17 00:14:15.936457 | orchestrator | ++ TEMPEST=false 2025-04-17 00:14:15.936467 | orchestrator | ++ export IS_ZUUL=true 2025-04-17 00:14:15.936476 | orchestrator | ++ IS_ZUUL=true 2025-04-17 00:14:15.936485 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.47 2025-04-17 00:14:15.936495 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.47 2025-04-17 00:14:15.936510 | orchestrator | ++ export EXTERNAL_API=false 2025-04-17 00:14:15.936520 | orchestrator | ++ EXTERNAL_API=false 2025-04-17 00:14:15.936532 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-17 00:14:15.997597 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-17 00:14:15.997710 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-17 00:14:15.997727 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-17 00:14:15.997752 | orchestrator | ++ export 
CEPH_STACK=ceph-ansible 2025-04-17 00:14:15.997767 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-17 00:14:15.997783 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-04-17 00:14:15.997824 | orchestrator | + docker version 2025-04-17 00:14:16.246074 | orchestrator | Client: Docker Engine - Community 2025-04-17 00:14:16.249172 | orchestrator | Version: 26.1.4 2025-04-17 00:14:16.249231 | orchestrator | API version: 1.45 2025-04-17 00:14:16.249241 | orchestrator | Go version: go1.21.11 2025-04-17 00:14:16.249250 | orchestrator | Git commit: 5650f9b 2025-04-17 00:14:16.249259 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-04-17 00:14:16.249269 | orchestrator | OS/Arch: linux/amd64 2025-04-17 00:14:16.249278 | orchestrator | Context: default 2025-04-17 00:14:16.249287 | orchestrator | 2025-04-17 00:14:16.249296 | orchestrator | Server: Docker Engine - Community 2025-04-17 00:14:16.249305 | orchestrator | Engine: 2025-04-17 00:14:16.249314 | orchestrator | Version: 26.1.4 2025-04-17 00:14:16.249322 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-04-17 00:14:16.249331 | orchestrator | Go version: go1.21.11 2025-04-17 00:14:16.249342 | orchestrator | Git commit: de5c9cf 2025-04-17 00:14:16.249378 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-04-17 00:14:16.249387 | orchestrator | OS/Arch: linux/amd64 2025-04-17 00:14:16.249396 | orchestrator | Experimental: false 2025-04-17 00:14:16.249404 | orchestrator | containerd: 2025-04-17 00:14:16.249413 | orchestrator | Version: 1.7.27 2025-04-17 00:14:16.249422 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-04-17 00:14:16.249431 | orchestrator | runc: 2025-04-17 00:14:16.249440 | orchestrator | Version: 1.2.5 2025-04-17 00:14:16.249448 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-04-17 00:14:16.249457 | orchestrator | docker-init: 2025-04-17 00:14:16.249465 | orchestrator | Version: 0.19.0 2025-04-17 00:14:16.249474 | orchestrator | GitCommit: de40ad0 2025-04-17 00:14:16.249492 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-04-17 00:14:16.257300 | orchestrator | + set -e 2025-04-17 00:14:16.257454 | orchestrator | + source /opt/manager-vars.sh 2025-04-17 00:14:16.257495 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-17 00:14:16.257520 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-17 00:14:16.257544 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-17 00:14:16.257566 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-17 00:14:16.257588 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-17 00:14:16.257611 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-17 00:14:16.257634 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-04-17 00:14:16.257657 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-04-17 00:14:16.257681 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-17 00:14:16.257705 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-17 00:14:16.257728 | orchestrator | ++ export ARA=false 2025-04-17 00:14:16.257750 | orchestrator | ++ ARA=false 2025-04-17 00:14:16.257773 | orchestrator | ++ export TEMPEST=false 2025-04-17 00:14:16.257796 | orchestrator | ++ TEMPEST=false 2025-04-17 00:14:16.257819 | orchestrator | ++ export IS_ZUUL=true 2025-04-17 00:14:16.257842 | orchestrator | ++ IS_ZUUL=true 2025-04-17 00:14:16.257866 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.47 2025-04-17 00:14:16.257890 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.47 
2025-04-17 00:14:16.257915 | orchestrator | ++ export EXTERNAL_API=false 2025-04-17 00:14:16.257972 | orchestrator | ++ EXTERNAL_API=false 2025-04-17 00:14:16.257994 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-17 00:14:16.258078 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-17 00:14:16.258109 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-17 00:14:16.258191 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-17 00:14:16.258218 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-17 00:14:16.258242 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-17 00:14:16.258261 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-17 00:14:16.258287 | orchestrator | ++ export INTERACTIVE=false 2025-04-17 00:14:16.258308 | orchestrator | ++ INTERACTIVE=false 2025-04-17 00:14:16.258324 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-17 00:14:16.258339 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-17 00:14:16.258364 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-04-17 00:14:16.264827 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-04-17 00:14:16.264902 | orchestrator | + set -e 2025-04-17 00:14:16.270949 | orchestrator | + VERSION=8.1.0 2025-04-17 00:14:16.271016 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-04-17 00:14:16.271061 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-04-17 00:14:16.273970 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-04-17 00:14:16.274083 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-04-17 00:14:16.279075 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-04-17 00:14:16.286267 | orchestrator | /opt/configuration ~ 2025-04-17 00:14:16.288481 | orchestrator | + set -e 2025-04-17 00:14:16.288507 | orchestrator | + pushd /opt/configuration 2025-04-17 00:14:16.288522 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-04-17 00:14:16.288542 | orchestrator | + source /opt/venv/bin/activate 2025-04-17 00:14:16.289646 | orchestrator | ++ deactivate nondestructive 2025-04-17 00:14:16.289779 | orchestrator | ++ '[' -n '' ']' 2025-04-17 00:14:16.289798 | orchestrator | ++ '[' -n '' ']' 2025-04-17 00:14:16.289822 | orchestrator | ++ hash -r 2025-04-17 00:14:16.289837 | orchestrator | ++ '[' -n '' ']' 2025-04-17 00:14:16.289850 | orchestrator | ++ unset VIRTUAL_ENV 2025-04-17 00:14:16.289865 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-04-17 00:14:16.289879 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-04-17 00:14:16.289926 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-04-17 00:14:16.290095 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-04-17 00:14:16.290144 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-04-17 00:14:16.290163 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-04-17 00:14:16.290179 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-17 00:14:16.290194 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-17 00:14:16.290212 | orchestrator | ++ export PATH 2025-04-17 00:14:17.367426 | orchestrator | ++ '[' -n '' ']' 2025-04-17 00:14:17.367567 | orchestrator | ++ '[' -z '' ']' 2025-04-17 00:14:17.367590 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-04-17 00:14:17.367616 | orchestrator | ++ PS1='(venv) ' 2025-04-17 00:14:17.367641 | orchestrator | ++ export PS1 2025-04-17 00:14:17.367664 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-04-17 00:14:17.367686 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-04-17 00:14:17.367710 | orchestrator | ++ hash -r 2025-04-17 00:14:17.367736 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-04-17 00:14:17.367783 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-04-17 00:14:17.368370 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-04-17 00:14:17.369446 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-04-17 00:14:17.370614 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-04-17 00:14:17.371793 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (24.2) 2025-04-17 00:14:17.381454 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.1.8) 2025-04-17 00:14:17.382835 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-04-17 00:14:17.383869 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-04-17 00:14:17.385382 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-04-17 00:14:17.414661 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.1) 2025-04-17 00:14:17.415819 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-04-17 00:14:17.417458 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0) 2025-04-17 00:14:17.418881 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.1.31) 2025-04-17 00:14:17.422771 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-04-17 00:14:17.630685 | orchestrator | ++ which gilt 2025-04-17 00:14:17.635349 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-04-17 00:14:17.860568 | orchestrator | + /opt/venv/bin/gilt overlay 2025-04-17 00:14:17.860704 | orchestrator | osism.cfg-generics: 2025-04-17 00:14:19.361916 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics 2025-04-17 00:14:19.362169 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-04-17 00:14:19.362253 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-04-17 00:14:19.362291 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-04-17 00:14:20.244560 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-04-17 00:14:20.244705 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-04-17 00:14:20.255800 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-04-17 00:14:20.553061 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-04-17 00:14:20.602525 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-04-17 00:14:20.604354 | orchestrator | + deactivate 2025-04-17 00:14:20.604526 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-04-17 00:14:20.604549 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-17 00:14:20.604564 | orchestrator | + export PATH 2025-04-17 00:14:20.604587 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-04-17 00:14:20.604603 | orchestrator | + '[' -n '' ']' 2025-04-17 00:14:20.604627 | orchestrator | + hash -r 2025-04-17 00:14:20.604643 | orchestrator | + '[' -n '' ']' 2025-04-17 00:14:20.604657 | orchestrator | + unset VIRTUAL_ENV 2025-04-17 00:14:20.604672 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-04-17 00:14:20.604687 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-04-17 00:14:20.604701 | orchestrator | + unset -f deactivate 2025-04-17 00:14:20.604720 | orchestrator | + popd 2025-04-17 00:14:20.604735 | orchestrator | ~ 2025-04-17 00:14:20.604762 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-04-17 00:14:20.604835 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-04-17 00:14:20.604856 | orchestrator | ++ semver 8.1.0 7.0.0 2025-04-17 00:14:20.663697 | orchestrator | + [[ 1 -ge 0 ]] 2025-04-17 00:14:20.696250 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-04-17 00:14:20.696334 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-04-17 00:14:20.696367 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-04-17 00:14:20.696603 | orchestrator | + source /opt/venv/bin/activate 2025-04-17 00:14:20.696631 | orchestrator | ++ deactivate nondestructive 2025-04-17 00:14:20.696646 | orchestrator | ++ '[' -n '' ']' 2025-04-17 00:14:20.696661 | orchestrator | ++ '[' -n '' ']' 2025-04-17 00:14:20.696687 | orchestrator | ++ hash -r 2025-04-17 00:14:20.696708 | orchestrator | ++ '[' -n '' ']' 2025-04-17 00:14:20.696793 | orchestrator | ++ unset VIRTUAL_ENV 2025-04-17 00:14:20.696812 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-04-17 00:14:20.696833 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-04-17 00:14:20.696847 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-04-17 00:14:20.696861 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-04-17 00:14:20.696875 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-04-17 00:14:20.696890 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-04-17 00:14:20.696905 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-17 00:14:20.696920 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-17 00:14:20.696934 | orchestrator | ++ export PATH 2025-04-17 00:14:20.696952 | orchestrator | ++ '[' -n '' ']' 2025-04-17 00:14:20.697053 | orchestrator | ++ '[' -z '' ']' 2025-04-17 00:14:20.697071 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-04-17 00:14:20.697089 | orchestrator | ++ PS1='(venv) ' 2025-04-17 00:14:21.739371 | orchestrator | ++ export PS1 2025-04-17 00:14:21.739499 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-04-17 00:14:21.739517 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-04-17 00:14:21.739534 | orchestrator | ++ hash -r 2025-04-17 00:14:21.739548 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-04-17 00:14:21.739578 | orchestrator | 2025-04-17 00:14:22.317202 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-04-17 00:14:22.317334 | orchestrator | 2025-04-17 00:14:22.317353 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-04-17 00:14:22.317387 | orchestrator | ok: [testbed-manager] 2025-04-17 00:14:23.304979 | orchestrator | 2025-04-17 00:14:23.305178 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-04-17 00:14:23.305222 | orchestrator | changed: [testbed-manager] 2025-04-17 00:14:25.663802 | orchestrator | 2025-04-17 00:14:25.663942 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-04-17 
00:14:25.663963 | orchestrator | 2025-04-17 00:14:25.663978 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-17 00:14:25.664011 | orchestrator | ok: [testbed-manager] 2025-04-17 00:14:30.829482 | orchestrator | 2025-04-17 00:14:30.829640 | orchestrator | TASK [Pull images] ************************************************************* 2025-04-17 00:14:30.829719 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-04-17 00:15:46.933452 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2) 2025-04-17 00:15:46.934938 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0) 2025-04-17 00:15:46.935003 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0) 2025-04-17 00:15:46.935011 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0) 2025-04-17 00:15:46.935018 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.1-alpine) 2025-04-17 00:15:46.935024 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7) 2025-04-17 00:15:46.935030 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0) 2025-04-17 00:15:46.935035 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2) 2025-04-17 00:15:46.935048 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine) 2025-04-17 00:15:46.935054 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.2.1) 2025-04-17 00:15:46.935060 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.2) 2025-04-17 00:15:46.935065 | orchestrator | 2025-04-17 00:15:46.935071 | orchestrator | TASK [Check status] ************************************************************ 2025-04-17 00:15:46.935127 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-04-17 00:15:46.983662 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-04-17 00:15:46.983803 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-04-17 00:15:46.983819 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-04-17 00:15:46.983835 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j34602946482.1588', 'results_file': '/home/dragon/.ansible_async/j34602946482.1588', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-04-17 00:15:46.983908 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j564410708884.1613', 'results_file': '/home/dragon/.ansible_async/j564410708884.1613', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-04-17 00:15:46.983923 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-04-17 00:15:46.983936 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 
2025-04-17 00:15:46.983949 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j415747991672.1638', 'results_file': '/home/dragon/.ansible_async/j415747991672.1638', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-04-17 00:15:46.983969 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j400020385722.1670', 'results_file': '/home/dragon/.ansible_async/j400020385722.1670', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'}) 2025-04-17 00:15:46.983989 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j610020705877.1703', 'results_file': '/home/dragon/.ansible_async/j610020705877.1703', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-04-17 00:15:46.984002 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j707569188562.1742', 'results_file': '/home/dragon/.ansible_async/j707569188562.1742', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'}) 2025-04-17 00:15:46.984014 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-04-17 00:15:46.984027 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j359746848732.1776', 'results_file': '/home/dragon/.ansible_async/j359746848732.1776', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'}) 2025-04-17 00:15:46.984122 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j3991251231.1801', 'results_file': '/home/dragon/.ansible_async/j3991251231.1801', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-04-17 00:15:46.984138 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j462874383076.1833', 'results_file': '/home/dragon/.ansible_async/j462874383076.1833', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'}) 2025-04-17 00:15:46.984151 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j358729431546.1871', 'results_file': '/home/dragon/.ansible_async/j358729431546.1871', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-04-17 00:15:46.984164 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j841995628028.1898', 'results_file': '/home/dragon/.ansible_async/j841995628028.1898', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.2.1', 'ansible_loop_var': 'item'}) 2025-04-17 00:15:46.984177 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j188980223331.1931', 'results_file': '/home/dragon/.ansible_async/j188980223331.1931', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'}) 2025-04-17 00:15:46.984191 | orchestrator | 2025-04-17 00:15:46.984207 | orchestrator | TASK [Get /opt/manager-vars.sh] 
************************************************ 2025-04-17 00:15:46.984240 | orchestrator | ok: [testbed-manager] 2025-04-17 00:15:47.436507 | orchestrator | 2025-04-17 00:15:47.436675 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-04-17 00:15:47.436715 | orchestrator | changed: [testbed-manager] 2025-04-17 00:15:47.756661 | orchestrator | 2025-04-17 00:15:47.756811 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-04-17 00:15:47.756848 | orchestrator | changed: [testbed-manager] 2025-04-17 00:15:48.080854 | orchestrator | 2025-04-17 00:15:48.081005 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-04-17 00:15:48.081044 | orchestrator | changed: [testbed-manager] 2025-04-17 00:15:48.140419 | orchestrator | 2025-04-17 00:15:48.140572 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-04-17 00:15:48.140605 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:15:48.483557 | orchestrator | 2025-04-17 00:15:48.483703 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-04-17 00:15:48.483741 | orchestrator | ok: [testbed-manager] 2025-04-17 00:15:48.597840 | orchestrator | 2025-04-17 00:15:48.597984 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-04-17 00:15:48.598167 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:15:50.379903 | orchestrator | 2025-04-17 00:15:50.380060 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-04-17 00:15:50.380132 | orchestrator | 2025-04-17 00:15:50.380149 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-17 00:15:50.380183 | orchestrator | ok: [testbed-manager] 2025-04-17 00:15:50.470371 | orchestrator | 2025-04-17 00:15:50.470516 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-04-17 00:15:50.470552 | orchestrator | included: osism.services.traefik for testbed-manager 2025-04-17 00:15:50.526817 | orchestrator | 2025-04-17 00:15:50.526929 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-04-17 00:15:50.526965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-04-17 00:15:51.612707 | orchestrator | 2025-04-17 00:15:51.612897 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-04-17 00:15:51.612939 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-04-17 00:15:53.400844 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-04-17 00:15:53.401011 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-04-17 00:15:53.401030 | orchestrator | 2025-04-17 00:15:53.401046 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-04-17 00:15:53.401141 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-04-17 00:15:54.021661 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-04-17 00:15:54.021815 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-04-17 00:15:54.021836 | orchestrator | 2025-04-17 
00:15:54.021852 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-04-17 00:15:54.021887 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-17 00:15:54.663498 | orchestrator | changed: [testbed-manager] 2025-04-17 00:15:54.663659 | orchestrator | 2025-04-17 00:15:54.663707 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-04-17 00:15:54.663759 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-17 00:15:54.734405 | orchestrator | changed: [testbed-manager] 2025-04-17 00:15:54.734550 | orchestrator | 2025-04-17 00:15:54.734570 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-04-17 00:15:54.734605 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:15:55.105175 | orchestrator | 2025-04-17 00:15:55.105330 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-04-17 00:15:55.105369 | orchestrator | ok: [testbed-manager] 2025-04-17 00:15:55.167851 | orchestrator | 2025-04-17 00:15:55.168001 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-04-17 00:15:55.168041 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-04-17 00:15:56.179823 | orchestrator | 2025-04-17 00:15:56.179989 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-04-17 00:15:56.180048 | orchestrator | changed: [testbed-manager] 2025-04-17 00:15:57.038185 | orchestrator | 2025-04-17 00:15:57.038338 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-04-17 00:15:57.038379 | orchestrator | changed: [testbed-manager] 2025-04-17 00:16:00.068531 | orchestrator | 2025-04-17 00:16:00.068698 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-04-17 00:16:00.068739 | orchestrator | changed: [testbed-manager] 2025-04-17 00:16:00.241723 | orchestrator | 2025-04-17 00:16:00.241973 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-04-17 00:16:00.242140 | orchestrator | included: osism.services.netbox for testbed-manager 2025-04-17 00:16:00.314730 | orchestrator | 2025-04-17 00:16:00.314847 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-04-17 00:16:00.314871 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-04-17 00:16:02.773813 | orchestrator | 2025-04-17 00:16:02.773973 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-04-17 00:16:02.774013 | orchestrator | ok: [testbed-manager] 2025-04-17 00:16:02.868944 | orchestrator | 2025-04-17 00:16:02.869141 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-04-17 00:16:02.869183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-04-17 00:16:03.968319 | orchestrator | 2025-04-17 00:16:03.968482 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-04-17 00:16:03.968523 | orchestrator | changed: 
[testbed-manager] => (item=/opt/netbox) 2025-04-17 00:16:04.044725 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-04-17 00:16:04.044873 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-04-17 00:16:04.044890 | orchestrator | 2025-04-17 00:16:04.044906 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-04-17 00:16:04.044975 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-04-17 00:16:04.709928 | orchestrator | 2025-04-17 00:16:04.710186 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-04-17 00:16:04.710228 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-04-17 00:16:05.351787 | orchestrator | 2025-04-17 00:16:05.351970 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-04-17 00:16:05.352028 | orchestrator | changed: [testbed-manager] 2025-04-17 00:16:06.005417 | orchestrator | 2025-04-17 00:16:06.005556 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-04-17 00:16:06.005595 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-17 00:16:06.411320 | orchestrator | changed: [testbed-manager] 2025-04-17 00:16:06.411441 | orchestrator | 2025-04-17 00:16:06.411466 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-04-17 00:16:06.411506 | orchestrator | changed: [testbed-manager] 2025-04-17 00:16:06.751704 | orchestrator | 2025-04-17 00:16:06.751837 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-04-17 00:16:06.751873 | orchestrator | ok: [testbed-manager] 2025-04-17 00:16:06.806329 | orchestrator | 2025-04-17 00:16:06.806439 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-04-17 00:16:06.806473 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:16:07.446941 | orchestrator | 2025-04-17 00:16:07.447144 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-04-17 00:16:07.447188 | orchestrator | changed: [testbed-manager] 2025-04-17 00:16:07.526975 | orchestrator | 2025-04-17 00:16:07.527138 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-04-17 00:16:07.527177 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-04-17 00:16:08.285683 | orchestrator | 2025-04-17 00:16:08.285838 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-04-17 00:16:08.285878 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-04-17 00:16:08.928629 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-04-17 00:16:08.928829 | orchestrator | 2025-04-17 00:16:08.928856 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-04-17 00:16:08.928908 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-04-17 00:16:09.561655 | orchestrator | 2025-04-17 00:16:09.561914 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] 
****************** 2025-04-17 00:16:09.561975 | orchestrator | changed: [testbed-manager] 2025-04-17 00:16:09.615364 | orchestrator | 2025-04-17 00:16:09.615514 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-04-17 00:16:09.615565 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:16:10.255397 | orchestrator | 2025-04-17 00:16:10.255521 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-04-17 00:16:10.255552 | orchestrator | changed: [testbed-manager] 2025-04-17 00:16:12.085027 | orchestrator | 2025-04-17 00:16:12.085188 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-04-17 00:16:12.085224 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-17 00:16:17.906570 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-17 00:16:17.906715 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-17 00:16:17.906736 | orchestrator | changed: [testbed-manager] 2025-04-17 00:16:17.906754 | orchestrator | 2025-04-17 00:16:17.906770 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-04-17 00:16:17.906804 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-04-17 00:16:18.536543 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-04-17 00:16:18.536636 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-04-17 00:16:18.536644 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-04-17 00:16:18.536650 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-04-17 00:16:18.536657 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-04-17 00:16:18.536686 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-04-17 00:16:18.536692 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-04-17 00:16:18.536698 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-04-17 00:16:18.536704 | orchestrator | changed: [testbed-manager] => (item=users) 2025-04-17 00:16:18.536710 | orchestrator | 2025-04-17 00:16:18.536716 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-04-17 00:16:18.536734 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-04-17 00:16:18.616257 | orchestrator | 2025-04-17 00:16:18.616410 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-04-17 00:16:18.616450 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-04-17 00:16:19.336628 | orchestrator | 2025-04-17 00:16:19.336771 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-04-17 00:16:19.336806 | orchestrator | changed: [testbed-manager] 2025-04-17 00:16:19.942958 | orchestrator | 2025-04-17 00:16:19.943152 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-04-17 00:16:19.943195 | orchestrator | ok: [testbed-manager] 2025-04-17 00:16:20.664224 | orchestrator | 2025-04-17 00:16:20.664394 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-04-17 00:16:20.664434 | orchestrator | 
changed: [testbed-manager] 2025-04-17 00:16:26.443977 | orchestrator | 2025-04-17 00:16:26.444171 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-04-17 00:16:26.444205 | orchestrator | changed: [testbed-manager] 2025-04-17 00:16:27.392990 | orchestrator | 2025-04-17 00:16:27.393209 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-04-17 00:16:27.393272 | orchestrator | ok: [testbed-manager] 2025-04-17 00:16:49.547455 | orchestrator | 2025-04-17 00:16:49.547636 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-04-17 00:16:49.547679 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-04-17 00:16:49.604226 | orchestrator | ok: [testbed-manager] 2025-04-17 00:16:49.604372 | orchestrator | 2025-04-17 00:16:49.604392 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-04-17 00:16:49.604430 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:16:49.641189 | orchestrator | 2025-04-17 00:16:49.641310 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-04-17 00:16:49.641328 | orchestrator | 2025-04-17 00:16:49.641343 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-04-17 00:16:49.641374 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:16:49.697375 | orchestrator | 2025-04-17 00:16:49.697485 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-04-17 00:16:49.697519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-04-17 00:16:50.466651 | orchestrator | 2025-04-17 00:16:50.466804 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-04-17 00:16:50.466843 | orchestrator | ok: [testbed-manager] 2025-04-17 00:16:50.539594 | orchestrator | 2025-04-17 00:16:50.539740 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-04-17 00:16:50.539779 | orchestrator | ok: [testbed-manager] 2025-04-17 00:16:50.597546 | orchestrator | 2025-04-17 00:16:50.597681 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-04-17 00:16:50.597714 | orchestrator | ok: [testbed-manager] => { 2025-04-17 00:16:51.256570 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-04-17 00:16:51.256703 | orchestrator | } 2025-04-17 00:16:51.256717 | orchestrator | 2025-04-17 00:16:51.256730 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-04-17 00:16:51.256757 | orchestrator | ok: [testbed-manager] 2025-04-17 00:16:52.131604 | orchestrator | 2025-04-17 00:16:52.131786 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-04-17 00:16:52.131887 | orchestrator | ok: [testbed-manager] 2025-04-17 00:16:52.192200 | orchestrator | 2025-04-17 00:16:52.192348 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-04-17 00:16:52.192386 | orchestrator | ok: [testbed-manager] 2025-04-17 00:16:52.236849 | orchestrator | 2025-04-17 00:16:52.236988 | orchestrator | RUNNING HANDLER 
[osism.services.netbox : Print major version of postgres image] *** 2025-04-17 00:16:52.237040 | orchestrator | ok: [testbed-manager] => { 2025-04-17 00:16:52.294978 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-04-17 00:16:52.295190 | orchestrator | } 2025-04-17 00:16:52.295211 | orchestrator | 2025-04-17 00:16:52.295226 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-04-17 00:16:52.295259 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:16:52.345335 | orchestrator | 2025-04-17 00:16:52.345462 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-04-17 00:16:52.345497 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:16:52.398153 | orchestrator | 2025-04-17 00:16:52.398317 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-04-17 00:16:52.398359 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:16:52.460268 | orchestrator | 2025-04-17 00:16:52.460399 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-04-17 00:16:52.460432 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:16:52.522518 | orchestrator | 2025-04-17 00:16:52.522654 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-04-17 00:16:52.522689 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:16:52.574490 | orchestrator | 2025-04-17 00:16:52.574611 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-04-17 00:16:52.574646 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:16:54.134719 | orchestrator | 2025-04-17 00:16:54.134896 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-04-17 00:16:54.234450 | orchestrator | changed: [testbed-manager] 2025-04-17 00:17:54.260798 | orchestrator | 2025-04-17 00:17:54.260920 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-04-17 00:17:54.260930 | orchestrator | ok: [testbed-manager] 2025-04-17 00:17:54.260937 | orchestrator | 2025-04-17 00:17:54.260943 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-04-17 00:17:54.260961 | orchestrator | Pausing for 60 seconds 2025-04-17 00:17:54.323339 | orchestrator | changed: [testbed-manager] 2025-04-17 00:17:54.323422 | orchestrator | 2025-04-17 00:17:54.323430 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-04-17 00:17:54.323450 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-04-17 00:21:34.049482 | orchestrator | 2025-04-17 00:21:34.049665 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-04-17 00:21:34.049709 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-04-17 00:21:36.157829 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-04-17 00:21:36.157984 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 
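
The FAILED - RETRYING run around this point is not a failure in itself: "Check that all containers are in a good state" is a task with an until/retries loop (60 attempts here), and every probe that still finds a container starting up is logged as a failed retry until the check finally passes. A bash sketch of what such a probe can look like, assuming every service defines a healthcheck (the docker compose ps output further down shows all of them reporting healthy); the task's actual command and delay are not visible in the log:

    #!/usr/bin/env bash
    # Poll until every container of the compose project reports
    # "(healthy)", with a bounded number of attempts.
    project=netbox      # illustrative; compose sets this label itself
    attempts=60
    for ((i = 1; i <= attempts; i++)); do
      bad=$(docker ps --filter "label=com.docker.compose.project=$project" \
                      --format '{{.Names}}: {{.Status}}' | grep -v '(healthy)' || true)
      if [[ -z "$bad" ]]; then
        echo "all containers healthy"
        exit 0
      fi
      sleep 5           # assumed delay; not shown in the log
    done
    printf 'still not healthy after %d attempts:\n%s\n' "$attempts" "$bad" >&2
    exit 1
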
2025-04-17 00:21:36.158004 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-04-17 00:21:36.158078 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-04-17 00:21:36.158098 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-04-17 00:21:36.158113 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-04-17 00:21:36.158127 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-04-17 00:21:36.158142 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-04-17 00:21:36.158196 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-04-17 00:21:36.158259 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-04-17 00:21:36.158276 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-04-17 00:21:36.158291 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-04-17 00:21:36.158306 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-04-17 00:21:36.158320 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-04-17 00:21:36.158334 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-04-17 00:21:36.158350 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-04-17 00:21:36.158365 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-04-17 00:21:36.158381 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-04-17 00:21:36.158411 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-04-17 00:21:36.158428 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 
2025-04-17 00:21:36.158444 | orchestrator | changed: [testbed-manager] 2025-04-17 00:21:36.158461 | orchestrator | 2025-04-17 00:21:36.158478 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-04-17 00:21:36.158494 | orchestrator | 2025-04-17 00:21:36.158510 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-17 00:21:36.158546 | orchestrator | ok: [testbed-manager] 2025-04-17 00:21:36.279446 | orchestrator | 2025-04-17 00:21:36.279589 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-04-17 00:21:36.279627 | orchestrator | included: osism.services.manager for testbed-manager 2025-04-17 00:21:36.337780 | orchestrator | 2025-04-17 00:21:36.337856 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-04-17 00:21:36.337888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-04-17 00:21:38.173185 | orchestrator | 2025-04-17 00:21:38.173350 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-04-17 00:21:38.173392 | orchestrator | ok: [testbed-manager] 2025-04-17 00:21:38.232875 | orchestrator | 2025-04-17 00:21:38.233023 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-04-17 00:21:38.233060 | orchestrator | ok: [testbed-manager] 2025-04-17 00:21:38.332440 | orchestrator | 2025-04-17 00:21:38.332592 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-04-17 00:21:38.332631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-04-17 00:21:41.106759 | orchestrator | 2025-04-17 00:21:41.106939 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-04-17 00:21:41.106997 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-04-17 00:21:41.731778 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-04-17 00:21:41.731907 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-04-17 00:21:41.731918 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-04-17 00:21:41.731926 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-04-17 00:21:41.731934 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-04-17 00:21:41.731942 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-04-17 00:21:41.731950 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-04-17 00:21:41.731957 | orchestrator | 2025-04-17 00:21:41.731965 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-04-17 00:21:41.731987 | orchestrator | changed: [testbed-manager] 2025-04-17 00:21:41.823460 | orchestrator | 2025-04-17 00:21:41.823564 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-04-17 00:21:41.823586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-04-17 00:21:42.999246 | orchestrator | 2025-04-17 00:21:42.999382 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2025-04-17 00:21:42.999422 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-04-17 00:21:43.616375 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-04-17 00:21:43.616505 | orchestrator | 2025-04-17 00:21:43.616525 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-04-17 00:21:43.616559 | orchestrator | changed: [testbed-manager] 2025-04-17 00:21:43.675986 | orchestrator | 2025-04-17 00:21:43.676086 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-04-17 00:21:43.676119 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:21:43.729096 | orchestrator | 2025-04-17 00:21:43.729265 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-04-17 00:21:43.729297 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-04-17 00:21:45.096560 | orchestrator | 2025-04-17 00:21:45.096700 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-04-17 00:21:45.096739 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-17 00:21:45.728807 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-17 00:21:45.728943 | orchestrator | changed: [testbed-manager] 2025-04-17 00:21:45.728965 | orchestrator | 2025-04-17 00:21:45.728982 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-04-17 00:21:45.729015 | orchestrator | changed: [testbed-manager] 2025-04-17 00:21:45.819679 | orchestrator | 2025-04-17 00:21:45.819815 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-04-17 00:21:45.819851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-04-17 00:21:46.448695 | orchestrator | 2025-04-17 00:21:46.448803 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-04-17 00:21:46.448827 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-17 00:21:47.035485 | orchestrator | changed: [testbed-manager] 2025-04-17 00:21:47.035620 | orchestrator | 2025-04-17 00:21:47.035643 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-04-17 00:21:47.035678 | orchestrator | changed: [testbed-manager] 2025-04-17 00:21:47.116991 | orchestrator | 2025-04-17 00:21:47.117105 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-04-17 00:21:47.117140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-04-17 00:21:47.729938 | orchestrator | 2025-04-17 00:21:47.730135 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-04-17 00:21:47.730213 | orchestrator | changed: [testbed-manager] 2025-04-17 00:21:48.118464 | orchestrator | 2025-04-17 00:21:48.118594 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-04-17 00:21:48.118633 | orchestrator | changed: [testbed-manager] 2025-04-17 00:21:49.327229 | orchestrator | 2025-04-17 00:21:49.327370 | 
orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-04-17 00:21:49.327407 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-04-17 00:21:50.048344 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-04-17 00:21:50.048471 | orchestrator | 2025-04-17 00:21:50.048490 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-04-17 00:21:50.048520 | orchestrator | changed: [testbed-manager] 2025-04-17 00:21:50.454426 | orchestrator | 2025-04-17 00:21:50.454579 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-04-17 00:21:50.454637 | orchestrator | ok: [testbed-manager] 2025-04-17 00:21:50.802171 | orchestrator | 2025-04-17 00:21:50.802315 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-04-17 00:21:50.802371 | orchestrator | changed: [testbed-manager] 2025-04-17 00:21:50.841489 | orchestrator | 2025-04-17 00:21:50.841567 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-04-17 00:21:50.841597 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:21:50.941244 | orchestrator | 2025-04-17 00:21:50.941416 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-04-17 00:21:50.941460 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-04-17 00:21:50.984578 | orchestrator | 2025-04-17 00:21:50.984692 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-04-17 00:21:50.984727 | orchestrator | ok: [testbed-manager] 2025-04-17 00:21:53.029375 | orchestrator | 2025-04-17 00:21:53.029544 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-04-17 00:21:53.029586 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-04-17 00:21:53.710997 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-04-17 00:21:53.711137 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-04-17 00:21:53.711153 | orchestrator | 2025-04-17 00:21:53.711167 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-04-17 00:21:53.711252 | orchestrator | changed: [testbed-manager] 2025-04-17 00:21:54.412892 | orchestrator | 2025-04-17 00:21:54.413047 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-04-17 00:21:54.413088 | orchestrator | changed: [testbed-manager] 2025-04-17 00:21:55.143978 | orchestrator | 2025-04-17 00:21:55.144130 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-04-17 00:21:55.144170 | orchestrator | changed: [testbed-manager] 2025-04-17 00:21:55.227417 | orchestrator | 2025-04-17 00:21:55.227561 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-04-17 00:21:55.227600 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-04-17 00:21:55.289897 | orchestrator | 2025-04-17 00:21:55.290076 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-04-17 00:21:55.290116 | orchestrator 
| ok: [testbed-manager] 2025-04-17 00:21:55.997498 | orchestrator | 2025-04-17 00:21:55.997654 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-04-17 00:21:55.997694 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-04-17 00:21:56.076962 | orchestrator | 2025-04-17 00:21:56.077122 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-04-17 00:21:56.077161 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-04-17 00:21:56.791248 | orchestrator | 2025-04-17 00:21:56.791403 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-04-17 00:21:56.791442 | orchestrator | changed: [testbed-manager] 2025-04-17 00:21:57.412834 | orchestrator | 2025-04-17 00:21:57.412991 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-04-17 00:21:57.413033 | orchestrator | ok: [testbed-manager] 2025-04-17 00:21:57.470147 | orchestrator | 2025-04-17 00:21:57.470311 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-04-17 00:21:57.470348 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:21:57.522814 | orchestrator | 2025-04-17 00:21:57.522930 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-04-17 00:21:57.522965 | orchestrator | ok: [testbed-manager] 2025-04-17 00:21:58.336565 | orchestrator | 2025-04-17 00:21:58.336707 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-04-17 00:21:58.336737 | orchestrator | changed: [testbed-manager] 2025-04-17 00:22:38.508659 | orchestrator | 2025-04-17 00:22:38.508836 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-04-17 00:22:38.508878 | orchestrator | changed: [testbed-manager] 2025-04-17 00:22:39.167137 | orchestrator | 2025-04-17 00:22:39.167381 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-04-17 00:22:39.167467 | orchestrator | ok: [testbed-manager] 2025-04-17 00:22:41.789794 | orchestrator | 2025-04-17 00:22:41.789922 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-04-17 00:22:41.789951 | orchestrator | changed: [testbed-manager] 2025-04-17 00:22:41.857233 | orchestrator | 2025-04-17 00:22:41.857388 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-04-17 00:22:41.857416 | orchestrator | ok: [testbed-manager] 2025-04-17 00:22:41.904067 | orchestrator | 2025-04-17 00:22:41.904220 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-04-17 00:22:41.904240 | orchestrator | 2025-04-17 00:22:41.904256 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-04-17 00:22:41.904326 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:23:41.957904 | orchestrator | 2025-04-17 00:23:41.958152 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-04-17 00:23:41.958199 | orchestrator | Pausing for 60 seconds 2025-04-17 00:23:47.377462 | orchestrator | changed: [testbed-manager] 2025-04-17 00:23:47.377638 | orchestrator | 2025-04-17 
00:23:47.377661 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-04-17 00:23:47.377697 | orchestrator | changed: [testbed-manager] 2025-04-17 00:24:28.955885 | orchestrator | 2025-04-17 00:24:28.956032 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-04-17 00:24:28.956072 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-04-17 00:24:34.385576 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-04-17 00:24:34.385722 | orchestrator | changed: [testbed-manager] 2025-04-17 00:24:34.385745 | orchestrator | 2025-04-17 00:24:34.385773 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-04-17 00:24:34.385807 | orchestrator | changed: [testbed-manager] 2025-04-17 00:24:34.499714 | orchestrator | 2025-04-17 00:24:34.499844 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-04-17 00:24:34.499883 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-04-17 00:24:34.566163 | orchestrator | 2025-04-17 00:24:34.566277 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-04-17 00:24:34.566289 | orchestrator | 2025-04-17 00:24:34.566299 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-04-17 00:24:34.566324 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:24:34.688041 | orchestrator | 2025-04-17 00:24:34.688162 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 00:24:34.688179 | orchestrator | testbed-manager : ok=109 changed=58 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-04-17 00:24:34.688193 | orchestrator | 2025-04-17 00:24:34.688222 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-04-17 00:24:34.694359 | orchestrator | + deactivate 2025-04-17 00:24:34.694403 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-04-17 00:24:34.694444 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-17 00:24:34.694460 | orchestrator | + export PATH 2025-04-17 00:24:34.694475 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-04-17 00:24:34.694491 | orchestrator | + '[' -n '' ']' 2025-04-17 00:24:34.694505 | orchestrator | + hash -r 2025-04-17 00:24:34.694519 | orchestrator | + '[' -n '' ']' 2025-04-17 00:24:34.694534 | orchestrator | + unset VIRTUAL_ENV 2025-04-17 00:24:34.694561 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-04-17 00:24:34.694580 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-04-17 00:24:34.694606 | orchestrator | + unset -f deactivate 2025-04-17 00:24:34.694631 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-04-17 00:24:34.694669 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-04-17 00:24:34.695642 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-04-17 00:24:34.695682 | orchestrator | + local max_attempts=60 2025-04-17 00:24:34.695709 | orchestrator | + local name=ceph-ansible 2025-04-17 00:24:34.695735 | orchestrator | + local attempt_num=1 2025-04-17 00:24:34.695769 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-04-17 00:24:34.728973 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-17 00:24:34.729540 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-04-17 00:24:34.729568 | orchestrator | + local max_attempts=60 2025-04-17 00:24:34.729581 | orchestrator | + local name=kolla-ansible 2025-04-17 00:24:34.729594 | orchestrator | + local attempt_num=1 2025-04-17 00:24:34.729612 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-04-17 00:24:34.751839 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-17 00:24:34.752516 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-04-17 00:24:34.752569 | orchestrator | + local max_attempts=60 2025-04-17 00:24:34.752583 | orchestrator | + local name=osism-ansible 2025-04-17 00:24:34.752596 | orchestrator | + local attempt_num=1 2025-04-17 00:24:34.752617 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-04-17 00:24:34.777963 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-17 00:24:35.471912 | orchestrator | + [[ true == \t\r\u\e ]] 2025-04-17 00:24:35.472034 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-04-17 00:24:35.472071 | orchestrator | ++ semver 8.1.0 9.0.0 2025-04-17 00:24:35.525614 | orchestrator | + [[ -1 -ge 0 ]] 2025-04-17 00:24:35.722724 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-04-17 00:24:35.722881 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-04-17 00:24:35.722924 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-04-17 00:24:35.730199 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-04-17 00:24:35.730249 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-04-17 00:24:35.730265 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-04-17 00:24:35.730305 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-04-17 00:24:35.730321 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-04-17 00:24:35.730390 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-04-17 00:24:35.730406 | orchestrator | manager-flower-1 
registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-04-17 00:24:35.730451 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 48 seconds (healthy) 2025-04-17 00:24:35.730467 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy) 2025-04-17 00:24:35.730482 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-04-17 00:24:35.730496 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-04-17 00:24:35.730511 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-04-17 00:24:35.730558 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-04-17 00:24:35.730573 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-04-17 00:24:35.730587 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-04-17 00:24:35.730602 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-04-17 00:24:35.730619 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-04-17 00:24:35.730658 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-04-17 00:24:35.876697 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-04-17 00:24:35.885360 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy) 2025-04-17 00:24:35.885484 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy) 2025-04-17 00:24:35.885504 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 7 minutes (healthy) 5432/tcp 2025-04-17 00:24:35.885520 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 7 minutes (healthy) 6379/tcp 2025-04-17 00:24:35.885549 | orchestrator | ++ semver 8.1.0 7.0.0 2025-04-17 00:24:35.944201 | orchestrator | + [[ 1 -ge 0 ]] 2025-04-17 00:24:35.947690 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-04-17 00:24:35.947741 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-04-17 00:24:37.477932 | orchestrator | 2025-04-17 00:24:37 | INFO  | Task deebde27-26ae-4b02-8af6-28eb765fd6d1 (resolvconf) was prepared for execution. 
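
The wait_for_container_healthy trace further up shows only the happy path: each container answers the first docker inspect probe with healthy. From the traced variables (max_attempts, name, attempt_num) and the probe command, the helper plausibly looks like the sketch below; the retry delay and the failure message are assumptions, since no retry is ever taken in this run:

    wait_for_container_healthy() {
        local max_attempts="$1"
        local name="$2"
        local attempt_num=1

        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
            if (( attempt_num >= max_attempts )); then
                echo "container $name did not become healthy" >&2
                return 1
            fi
            (( attempt_num++ ))
            sleep 5   # assumed; not visible in the trace
        done
    }

    # as in the log:
    wait_for_container_healthy 60 ceph-ansible
    wait_for_container_healthy 60 kolla-ansible
    wait_for_container_healthy 60 osism-ansible

The surrounding semver calls serve the same gating purpose for versions: semver 8.1.0 9.0.0 prints -1 while semver 8.1.0 7.0.0 prints 1, so a step guarded by a check like [[ $(semver "$version" 7.0.0) -ge 0 ]] only runs when the deployed release is at least that version.
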
2025-04-17 00:24:40.418351 | orchestrator | 2025-04-17 00:24:37 | INFO  | It takes a moment until task deebde27-26ae-4b02-8af6-28eb765fd6d1 (resolvconf) has been started and output is visible here. 2025-04-17 00:24:40.418599 | orchestrator | 2025-04-17 00:24:40.418960 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-04-17 00:24:40.419590 | orchestrator | 2025-04-17 00:24:40.420219 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-17 00:24:40.422091 | orchestrator | Thursday 17 April 2025 00:24:40 +0000 (0:00:00.083) 0:00:00.083 ******** 2025-04-17 00:24:44.400592 | orchestrator | ok: [testbed-manager] 2025-04-17 00:24:44.401053 | orchestrator | 2025-04-17 00:24:44.401104 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-04-17 00:24:44.401141 | orchestrator | Thursday 17 April 2025 00:24:44 +0000 (0:00:03.984) 0:00:04.068 ******** 2025-04-17 00:24:44.456496 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:24:44.458076 | orchestrator | 2025-04-17 00:24:44.458262 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-04-17 00:24:44.535472 | orchestrator | Thursday 17 April 2025 00:24:44 +0000 (0:00:00.056) 0:00:04.124 ******** 2025-04-17 00:24:44.535669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-04-17 00:24:44.536186 | orchestrator | 2025-04-17 00:24:44.537331 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-04-17 00:24:44.537916 | orchestrator | Thursday 17 April 2025 00:24:44 +0000 (0:00:00.078) 0:00:04.203 ******** 2025-04-17 00:24:44.612863 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-04-17 00:24:45.717390 | orchestrator | 2025-04-17 00:24:45.717571 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-04-17 00:24:45.717587 | orchestrator | Thursday 17 April 2025 00:24:44 +0000 (0:00:00.076) 0:00:04.280 ******** 2025-04-17 00:24:45.717615 | orchestrator | ok: [testbed-manager] 2025-04-17 00:24:45.718201 | orchestrator | 2025-04-17 00:24:45.718225 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-04-17 00:24:45.718242 | orchestrator | Thursday 17 April 2025 00:24:45 +0000 (0:00:01.102) 0:00:05.382 ******** 2025-04-17 00:24:45.763057 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:24:45.763257 | orchestrator | 2025-04-17 00:24:45.763281 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-04-17 00:24:45.763500 | orchestrator | Thursday 17 April 2025 00:24:45 +0000 (0:00:00.049) 0:00:05.431 ******** 2025-04-17 00:24:46.244508 | orchestrator | ok: [testbed-manager] 2025-04-17 00:24:46.244669 | orchestrator | 2025-04-17 00:24:46.245947 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-04-17 00:24:46.247076 | orchestrator | Thursday 17 April 2025 00:24:46 +0000 (0:00:00.479) 0:00:05.911 ******** 2025-04-17 00:24:46.320821 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:24:46.321767 | orchestrator | 2025-04-17 00:24:46.323149 | orchestrator | TASK 
[osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-04-17 00:24:46.323521 | orchestrator | Thursday 17 April 2025 00:24:46 +0000 (0:00:00.076) 0:00:05.988 ******** 2025-04-17 00:24:46.912247 | orchestrator | changed: [testbed-manager] 2025-04-17 00:24:46.913094 | orchestrator | 2025-04-17 00:24:46.913139 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-04-17 00:24:46.914259 | orchestrator | Thursday 17 April 2025 00:24:46 +0000 (0:00:00.591) 0:00:06.579 ******** 2025-04-17 00:24:47.945642 | orchestrator | changed: [testbed-manager] 2025-04-17 00:24:47.946299 | orchestrator | 2025-04-17 00:24:47.946919 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-04-17 00:24:47.947554 | orchestrator | Thursday 17 April 2025 00:24:47 +0000 (0:00:01.031) 0:00:07.611 ******** 2025-04-17 00:24:48.897390 | orchestrator | ok: [testbed-manager] 2025-04-17 00:24:48.897897 | orchestrator | 2025-04-17 00:24:48.898460 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-04-17 00:24:48.899495 | orchestrator | Thursday 17 April 2025 00:24:48 +0000 (0:00:00.951) 0:00:08.563 ******** 2025-04-17 00:24:48.968337 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-04-17 00:24:48.968520 | orchestrator | 2025-04-17 00:24:48.968542 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-04-17 00:24:48.968565 | orchestrator | Thursday 17 April 2025 00:24:48 +0000 (0:00:00.071) 0:00:08.634 ******** 2025-04-17 00:24:50.125721 | orchestrator | changed: [testbed-manager] 2025-04-17 00:24:50.126009 | orchestrator | 2025-04-17 00:24:50.128132 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 00:24:50.129057 | orchestrator | 2025-04-17 00:24:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-17 00:24:50.130477 | orchestrator | 2025-04-17 00:24:50 | INFO  | Please wait and do not abort execution. 
2025-04-17 00:24:50.130588 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-17 00:24:50.130624 | orchestrator | 2025-04-17 00:24:50.131293 | orchestrator | Thursday 17 April 2025 00:24:50 +0000 (0:00:01.156) 0:00:09.791 ******** 2025-04-17 00:24:50.132194 | orchestrator | =============================================================================== 2025-04-17 00:24:50.133043 | orchestrator | Gathering Facts --------------------------------------------------------- 3.98s 2025-04-17 00:24:50.135298 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.16s 2025-04-17 00:24:50.135901 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.10s 2025-04-17 00:24:50.136452 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.03s 2025-04-17 00:24:50.137112 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.95s 2025-04-17 00:24:50.137803 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.59s 2025-04-17 00:24:50.138240 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s 2025-04-17 00:24:50.138604 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-04-17 00:24:50.139874 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-04-17 00:24:50.140817 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-04-17 00:24:50.140848 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2025-04-17 00:24:50.141238 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-04-17 00:24:50.141591 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2025-04-17 00:24:50.492076 | orchestrator | + osism apply sshconfig 2025-04-17 00:24:51.868254 | orchestrator | 2025-04-17 00:24:51 | INFO  | Task a2856bca-8271-494f-a87e-7fd4bdd05dc8 (sshconfig) was prepared for execution. 2025-04-17 00:24:54.797996 | orchestrator | 2025-04-17 00:24:51 | INFO  | It takes a moment until task a2856bca-8271-494f-a87e-7fd4bdd05dc8 (sshconfig) has been started and output is visible here. 
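The resolvconf play above switches the manager to systemd-resolved: packages that would manage /etc/resolv.conf themselves are removed, /etc/resolv.conf is relinked to the systemd-resolved stub file, and the service is restarted. A minimal way to verify the result on the node, assuming the standard systemd-resolved paths (these commands are not part of the job):

  # /etc/resolv.conf should now be the stub symlink created by the
  # "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf" task
  readlink -f /etc/resolv.conf          # expected: /run/systemd/resolve/stub-resolv.conf
  systemctl is-active systemd-resolved  # expected: active
  resolvectl status                     # lists the name servers actually in use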
2025-04-17 00:24:54.798426 | orchestrator | 2025-04-17 00:24:54.799355 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-04-17 00:24:54.799415 | orchestrator | 2025-04-17 00:24:54.800207 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-04-17 00:24:54.800915 | orchestrator | Thursday 17 April 2025 00:24:54 +0000 (0:00:00.101) 0:00:00.101 ******** 2025-04-17 00:24:55.362613 | orchestrator | ok: [testbed-manager] 2025-04-17 00:24:55.363334 | orchestrator | 2025-04-17 00:24:55.364136 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-04-17 00:24:55.364876 | orchestrator | Thursday 17 April 2025 00:24:55 +0000 (0:00:00.567) 0:00:00.668 ******** 2025-04-17 00:24:55.851756 | orchestrator | changed: [testbed-manager] 2025-04-17 00:24:55.852225 | orchestrator | 2025-04-17 00:24:55.852268 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-04-17 00:24:55.852592 | orchestrator | Thursday 17 April 2025 00:24:55 +0000 (0:00:00.490) 0:00:01.158 ******** 2025-04-17 00:25:01.449671 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-04-17 00:25:01.450295 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-04-17 00:25:01.450343 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-04-17 00:25:01.450370 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-04-17 00:25:01.453721 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-04-17 00:25:01.454188 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-04-17 00:25:01.454243 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-04-17 00:25:01.454515 | orchestrator | 2025-04-17 00:25:01.455593 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-04-17 00:25:01.522492 | orchestrator | Thursday 17 April 2025 00:25:01 +0000 (0:00:05.595) 0:00:06.753 ******** 2025-04-17 00:25:01.522765 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:25:01.522832 | orchestrator | 2025-04-17 00:25:01.522854 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-04-17 00:25:01.523198 | orchestrator | Thursday 17 April 2025 00:25:01 +0000 (0:00:00.075) 0:00:06.829 ******** 2025-04-17 00:25:02.083682 | orchestrator | changed: [testbed-manager] 2025-04-17 00:25:02.084246 | orchestrator | 2025-04-17 00:25:02.084278 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 00:25:02.084842 | orchestrator | 2025-04-17 00:25:02 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-17 00:25:02.085076 | orchestrator | 2025-04-17 00:25:02 | INFO  | Please wait and do not abort execution. 
2025-04-17 00:25:02.085244 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-17 00:25:02.086075 | orchestrator | 2025-04-17 00:25:02.086277 | orchestrator | Thursday 17 April 2025 00:25:02 +0000 (0:00:00.561) 0:00:07.390 ******** 2025-04-17 00:25:02.087154 | orchestrator | =============================================================================== 2025-04-17 00:25:02.087433 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.60s 2025-04-17 00:25:02.088248 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.57s 2025-04-17 00:25:02.088504 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s 2025-04-17 00:25:02.088933 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s 2025-04-17 00:25:02.089271 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2025-04-17 00:25:02.511370 | orchestrator | + osism apply known-hosts 2025-04-17 00:25:03.952514 | orchestrator | 2025-04-17 00:25:03 | INFO  | Task b3fcd7d1-661e-48ec-af63-3378674229fb (known-hosts) was prepared for execution. 2025-04-17 00:25:06.984569 | orchestrator | 2025-04-17 00:25:03 | INFO  | It takes a moment until task b3fcd7d1-661e-48ec-af63-3378674229fb (known-hosts) has been started and output is visible here. 2025-04-17 00:25:06.984792 | orchestrator | 2025-04-17 00:25:06.985785 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-04-17 00:25:06.985853 | orchestrator | 2025-04-17 00:25:06.987376 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-04-17 00:25:06.987846 | orchestrator | Thursday 17 April 2025 00:25:06 +0000 (0:00:00.114) 0:00:00.114 ******** 2025-04-17 00:25:12.687776 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-04-17 00:25:12.688826 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-04-17 00:25:12.689577 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-04-17 00:25:12.690599 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-04-17 00:25:12.691223 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-04-17 00:25:12.691783 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-04-17 00:25:12.692317 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-04-17 00:25:12.692927 | orchestrator | 2025-04-17 00:25:12.693343 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-04-17 00:25:12.694426 | orchestrator | Thursday 17 April 2025 00:25:12 +0000 (0:00:05.704) 0:00:05.818 ******** 2025-04-17 00:25:12.849559 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-04-17 00:25:12.849802 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-04-17 00:25:12.850536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-04-17 
00:25:12.851203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-04-17 00:25:12.852681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-04-17 00:25:12.853031 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-04-17 00:25:12.853570 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-04-17 00:25:12.854491 | orchestrator | 2025-04-17 00:25:12.854869 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-17 00:25:12.854901 | orchestrator | Thursday 17 April 2025 00:25:12 +0000 (0:00:00.164) 0:00:05.983 ******** 2025-04-17 00:25:14.021019 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2DtV92vTsa2FYThb/qdEq8kbj6vRd2EAqMiClqH36fmlpDVReH+KKGkCUa15nnP0qGYULg/GncGt1vgg8krTDYqW1XeEROiYTpWvXQopn9nrddugpS01Tb/2feuQMoOvgQ/cNVz/eFMCA34oHHVBX9nYxh9RckX9dChl7BI+pIJl3i01d8sVhaoMiuEAouxlUcNaYm5tkE4ydkG6XYKKttnRGpu/4hBE1teoioIjMq4sly9WbjdUTU3yhQfHZh9r642FEQZbmhoGqbklULIeQb7F4AT80D3aeITQT8AFXowe9Xxn6Xj1UrSRXPYq42j9TsLQJawz/E3DWnhzvtCPURPhNmImWgb8m58h0e6alUOB+0X0kiKeYzFV2qwAb5bYFw6DgwZVpD5K0AC0Vlce/ioX0KwvRHkaFDhmiylOHyHQIRDxKNFU24+bLsfIY+a+QNEwXT4Y3dpJOE1FkqsDKslYzqvCvSZVzHgMrNZrfcmDG/NHMdHi23xz2XlNjXc0=) 2025-04-17 00:25:14.022388 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFoHY4f6c0OLfT1aCOJMgb3jRA57Ofbby8qFScIOFcRtrEsfocWmQCvhm2aDKu9dJCcnJXihvOSxL9XHt+jXkr0=) 2025-04-17 00:25:14.022666 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFu3i+KvomY4G0yf+kfXZP9A1G0nzdHfOcqlrF2TUIKO) 2025-04-17 00:25:14.023318 | orchestrator | 2025-04-17 00:25:14.023579 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-17 00:25:14.024155 | orchestrator | Thursday 17 April 2025 00:25:14 +0000 (0:00:01.169) 0:00:07.152 ******** 2025-04-17 00:25:15.058825 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM6Dbp0q6ajsh6QhIPSQGzVER9zdkkVgcOGwLN5AwWWh) 2025-04-17 00:25:15.060187 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDqe/cTmcFvznG3HRdM2Rwjy3Y3VQXK41cGnngn24eY1YlPwzlH/M3Gid2CR+NbaDNxXWWINFFSiVo04cDm8IFxz67impGMQCBeLhQLC5NXqDi38cuB+g0Ra6wsmNZ29K0yVfWE/KNZDsfa3GVnSOE9peRqABChTPvnY47IBToWC0cVnzuBiwprO/mkYK3HEe2MdrxTwz0fRFN9yVan3NkEHFbn9emA/9f+eoPdNKMp/XvdFhw7pC0YDeob+ktRFZGaiWyY2NkaMjPlwSUZ3ewFi4GhVXy/RNt7XOAAwELJwTciy5rz6DQr3ei1txeVQy6czb6ZsfAW4UHuj+EYj4rYzJ1lEYgRLtcPPoNxlfP9ri53xf7T4OoW1vZMrNlo8S/gyFc6XXyejmv+glDzYm2V9FeWTnw/D2MGEe0Ffu3pWaRZTHeTRcGFX004SzkC3LiJ9DDe/10QlWP5kJHxaNH858q7rDBA1A8ulgB1OtN97FQRrGwd63afN2xD6hyZzM0=) 2025-04-17 00:25:15.060223 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG4RCbHwTHn0lVyBG6LEkYLREjoBXftxkncGtmZYy2WJ8jnZ44drN27H1C3x6p05ySElkGTQ5IWMkaj6Eq0ASvM=) 2025-04-17 00:25:15.060245 | orchestrator | 2025-04-17 00:25:15.060414 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-17 00:25:16.073042 | orchestrator | Thursday 17 April 2025 00:25:15 +0000 (0:00:01.036) 0:00:08.189 ******** 2025-04-17 00:25:16.073317 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEAYFw1PsSAHzyLWNmgMNa4j8LxLUF5eFU5Bzn/DUK4nkZ34P4wu/ZVrFJMSUFLIW/XAAH9GQuBvTPmucWBEAGw=) 2025-04-17 00:25:16.074211 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL1ZgaB5Fy1mowSogJZgWY4OpKQi3EzXZZTLQ1gweOvB) 2025-04-17 00:25:16.074291 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDlRVDuTTTdvV5uq0iPWGws7etLEbucvcENDUyzrG8/SlrAdt2YkE91L+GcQ490YiGyWB2CLLiOI94XDJ2+lV20XRD8KfhDusNQO5G61QdmDJgOzG1YI+cUXKrbFXfEZG9zNz78Nqi4lIYU4lA8DTYfaGO+DZtEnpnI0eTrAhgei3sHP1iCVxgYbHGrDRUKbxUdy9NuNR8VcQ59Yo1LBTuN/DkmMSm0rl0U2sDfPS5tqxwT3Mzqml06+M6yQRG6hbGE2FlX+/VZrbLgzZvASzOwFi7yN3CyXEMNQC4JamiXY5Jr7JwiSAIOBZXFizTZW5Yc8TFTNDbAkYkrkKxClKx+QqWt+Ksgu5J0oKP3Yy6chWO09SOXEF4b5BjibzDDyGqpmJT2YVeqAmxZjXHa7s8D8RLP5NsU9cAmfP4UGMLozqf+jaLug21V4jcnF6J1dArk9nQntkzq8mbJY25kpMBUqtybBR4F2pVFJImjslgj25THZh8MscQGwoNo3ydeOk=) 2025-04-17 00:25:16.075257 | orchestrator | 2025-04-17 00:25:16.076614 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-17 00:25:16.077513 | orchestrator | Thursday 17 April 2025 00:25:16 +0000 (0:00:01.015) 0:00:09.205 ******** 2025-04-17 00:25:17.094903 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYzqi3HINTdxjhTGo+V/br8c06llgd/oyLVm3TOq+8ArOfAksaBfTlCc3522Hpa+iAZE7mf+CZLeX1cHMznJqxjrhBlpYr6L75PoAecqnuKq//kiqkTRrXvSn3cMYxvB2A7eWCWkt8kkW02Zy7qfC3e8AbHZfrPDsuffuliHY7KQVWqjGccdo+SApVllnON7oAwUs33GChrWOb7guVCJ0sgT7v0seGkP8CEAqAl7u/MnVSN+hKIKX+V4us3Y8/Db6J8eLcZowaesrAz1xYw/3ft5H8ToZxHz95Munod81pKNo8On6bUpwjpfrE/wyAZlcEUTDFWba3Q29/bs2rR1zV0o9BPrikC6FcO2ZdAzlXZ2Hv5adxhCMfIcO5+MQ8Mzpocd7h/AxJzfhvuebHlCW4WGRjx1BoMbvQhDapVBUzTySYJSOH92eJd+XNx3lWxpji0+xecBoYcmBI33E2LX1MOsxzkRSeFzXIXrswF8mp8+WdZTMT+LBhzfkxMzM5o08=) 2025-04-17 00:25:17.095142 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPCT9V0JL1Lzfo4aaXTtQqe9XMEkuJl+UjRz1JkxY8WyMCTIHxCIMyaV8jXUr2RearURJItOk4WoK9XZFCqXZfo=) 2025-04-17 00:25:17.095174 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK/a35PAAIhZEToc6n8Ee7M7scdj6N1SYk89iJVVF7mY) 2025-04-17 00:25:17.095307 | orchestrator | 2025-04-17 00:25:17.095640 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-17 00:25:17.095927 | orchestrator | Thursday 17 April 2025 00:25:17 +0000 (0:00:01.023) 0:00:10.228 ******** 2025-04-17 00:25:18.141591 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNl9L38A6ej4PESp3W1QR6RF1uZTavWsUZ/A/rz40506svuX+dY9iL0x0TOJpHu/F030DPpWxsn2b14EvPCHDHY=) 2025-04-17 00:25:18.141843 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQD4oZpAnqaG+JWv5G8+hthCVUTTflpne+GqQnUR9ljqu4moJ926M4e2Z90YLUPbPUC2oGDpsE7UDQ67HNVFSs23aBab5gJHM5uVL6G5fpCUbRjiB9IPBy6Lo+74ug8Nb7f4LJeDR1UlVtKKn7iMTNyrn3pjc9zzAreZMhUQK7VzTXIetE62bno9lAVzqINPg1iMMn/8OZPUvRJHYlMJkQoAWkRyDhyv1TMIctgSmjXa+VgSCgv8jFb75qqVktZmuRTYB6SmHICPN+jny4I1Tdu72fBJgL5Jjudtb9DT2/psmzonsNa6OVH7dAYY7dNWkQIwumoeOknXbQndqpDZ2nz7iZBR8rfzlVocNxViJdYggF3OjNgcNweseVmJmZ8AAgmUzfoRdLhXlj93dWmt4BWd+BW9n8+bUiLEs6lduzbjh3krN4b1yDp4Xewis3hroTe9Dz4yS0zJ8XQZ5yBeek40Vuc6S2KO8aWFq6iRfyiBGVGPJLQzDSGnPLw8RU1sVmc=) 2025-04-17 00:25:18.141883 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEA39EKaIlYyGt1tiY2/UVG9xd1Prh1A6fuElaZwR3Vf) 2025-04-17 00:25:18.142258 | orchestrator | 2025-04-17 00:25:18.142608 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-17 00:25:18.143389 | orchestrator | Thursday 17 April 2025 00:25:18 +0000 (0:00:01.044) 0:00:11.272 ******** 2025-04-17 00:25:19.190908 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPflgymKniLKz+4b7U826S9Xzxfg7m2sK90QKGkAEiF6MorUGGdS9lOH68iwkRTs4KOPwQy+yqg3T1W3pciDSFY1N9WvePx2+gGFckEV9cBjmkIn7a10sdSIDq42q+tryAUopdImdTMzBG+GhuTaVuDw03zTCqAgUnCkXHCVaUS3Ct/01aR+hDf51S7ABdz2rh5QUfetlKBSjrZDqA6atha8E7WHVhzY81A8Vp6cTB9+dNkrIsrxH3rTcjLIvmIEf685w/xPGXqQ7KU4br0jZV9CidO2cft1JmBMfyEn+fV4QoDdDxDedWBwn/gNECcB3y8ioyqxrYaO5VUhrPC6A0mY3h5L/Gi+oEdjNt99AkanXQAqGNfiRyflpsxpcMer6lSkv/Le/KbKLDKxPodFmo4YPgTSru+koQpCLTVYyosXgryXIwCpnZuJoLzIAjDZbeUoOlqKu2LdNg73plbjA+os2vyvVcDk015heqmZsfdZe/lBQTbvaU/MfufB7tOy0=) 2025-04-17 00:25:19.191551 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPf+JAVjGSFh7a+UAX3Z28hLOqLNoo/skfZ9a9RbIEN/pgBUANhNMILHFIso8Hwq2dm2S3Z4JXSAJbl1ZLiBliA=) 2025-04-17 00:25:19.191961 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGQjLd9wxw/W+4KRLqAElEqo7jV5+/1Lf6gLL1VLK1XI) 2025-04-17 00:25:19.192259 | orchestrator | 2025-04-17 00:25:19.192574 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-17 00:25:19.192860 | orchestrator | Thursday 17 April 2025 00:25:19 +0000 (0:00:01.049) 0:00:12.322 ******** 2025-04-17 00:25:20.217696 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKPNxNAdK7rnJ6quEI9pdQ8WZpNQlWmo1ypQpiaMoQ+9ZXot1+QSfCdkZmxI7sYkg8Jj28L4DXpcMfLAsIodr1vfAhwbvy2it/KWGA40gsSIx22vpYGdKbvCf6BCUtIWg8cnianpttch49DGBCVEHqg3MqE2gLd8EAPMneEi/p1gOdzpk9yGL0b1+40NLOfqwtkZFpGvICzgb93XYEUhPOYppqoGIh2Ib4Ndzgx8On9jwkrcbx0T2JvH96ymv+F1UjEctm77Ehjj6f2tuCHU1QGtZ7W7VM180DiaQCZRDGEZTQ85uGIuTk/EcI3C3SCZwbs+G6i98wHjdzQXfQh36UdECN0ULUOa8+AjDf5r8TTbHvejwpIvCQG1eJMtP6PzfnbL0MZm77ITKBUn2yXal9alAChJ9v1r/qEzwtMCBurd4k+ea5LTgha9CGDlTC0dploDBPcTFQwuOyXlO0aOPJb1gvNlgP7WJS8gul9tZaLWbUCPiO7XVTY5iDXeJUQsU=) 2025-04-17 00:25:20.218203 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJb+vcoeayw8+wW08Nxof9PZQc6CUHBdYZoWPCeMPAyYW4riDOKfjp+4d3FZ1LkPzZ5ztfJd0epseq3uZYFkjpw=) 2025-04-17 00:25:20.218255 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKBq+1V6Z2ZkFnjkMze1x8QHyUWsPV2+Opfj8NEIlgWz) 2025-04-17 00:25:20.219155 | orchestrator | 2025-04-17 00:25:20.219529 | orchestrator | TASK 
[osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-04-17 00:25:20.220272 | orchestrator | Thursday 17 April 2025 00:25:20 +0000 (0:00:01.026) 0:00:13.349 ******** 2025-04-17 00:25:25.369986 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-04-17 00:25:25.371191 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-04-17 00:25:25.371274 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-04-17 00:25:25.371807 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-04-17 00:25:25.372908 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-04-17 00:25:25.376420 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-04-17 00:25:25.377026 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-04-17 00:25:25.377805 | orchestrator | 2025-04-17 00:25:25.378435 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-04-17 00:25:25.379178 | orchestrator | Thursday 17 April 2025 00:25:25 +0000 (0:00:05.150) 0:00:18.500 ******** 2025-04-17 00:25:25.537651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-04-17 00:25:25.538064 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-04-17 00:25:25.538775 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-04-17 00:25:25.539588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-04-17 00:25:25.540523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-04-17 00:25:25.540910 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-04-17 00:25:25.541871 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-04-17 00:25:25.542113 | orchestrator | 2025-04-17 00:25:25.542781 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-17 00:25:25.543637 | orchestrator | Thursday 17 April 2025 00:25:25 +0000 (0:00:00.168) 0:00:18.669 ******** 2025-04-17 00:25:26.560180 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC2DtV92vTsa2FYThb/qdEq8kbj6vRd2EAqMiClqH36fmlpDVReH+KKGkCUa15nnP0qGYULg/GncGt1vgg8krTDYqW1XeEROiYTpWvXQopn9nrddugpS01Tb/2feuQMoOvgQ/cNVz/eFMCA34oHHVBX9nYxh9RckX9dChl7BI+pIJl3i01d8sVhaoMiuEAouxlUcNaYm5tkE4ydkG6XYKKttnRGpu/4hBE1teoioIjMq4sly9WbjdUTU3yhQfHZh9r642FEQZbmhoGqbklULIeQb7F4AT80D3aeITQT8AFXowe9Xxn6Xj1UrSRXPYq42j9TsLQJawz/E3DWnhzvtCPURPhNmImWgb8m58h0e6alUOB+0X0kiKeYzFV2qwAb5bYFw6DgwZVpD5K0AC0Vlce/ioX0KwvRHkaFDhmiylOHyHQIRDxKNFU24+bLsfIY+a+QNEwXT4Y3dpJOE1FkqsDKslYzqvCvSZVzHgMrNZrfcmDG/NHMdHi23xz2XlNjXc0=) 2025-04-17 00:25:26.561012 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFoHY4f6c0OLfT1aCOJMgb3jRA57Ofbby8qFScIOFcRtrEsfocWmQCvhm2aDKu9dJCcnJXihvOSxL9XHt+jXkr0=) 2025-04-17 00:25:26.561057 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFu3i+KvomY4G0yf+kfXZP9A1G0nzdHfOcqlrF2TUIKO) 2025-04-17 00:25:26.561829 | orchestrator | 2025-04-17 00:25:26.562400 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-17 00:25:26.562863 | orchestrator | Thursday 17 April 2025 00:25:26 +0000 (0:00:01.021) 0:00:19.690 ******** 2025-04-17 00:25:27.631064 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDqe/cTmcFvznG3HRdM2Rwjy3Y3VQXK41cGnngn24eY1YlPwzlH/M3Gid2CR+NbaDNxXWWINFFSiVo04cDm8IFxz67impGMQCBeLhQLC5NXqDi38cuB+g0Ra6wsmNZ29K0yVfWE/KNZDsfa3GVnSOE9peRqABChTPvnY47IBToWC0cVnzuBiwprO/mkYK3HEe2MdrxTwz0fRFN9yVan3NkEHFbn9emA/9f+eoPdNKMp/XvdFhw7pC0YDeob+ktRFZGaiWyY2NkaMjPlwSUZ3ewFi4GhVXy/RNt7XOAAwELJwTciy5rz6DQr3ei1txeVQy6czb6ZsfAW4UHuj+EYj4rYzJ1lEYgRLtcPPoNxlfP9ri53xf7T4OoW1vZMrNlo8S/gyFc6XXyejmv+glDzYm2V9FeWTnw/D2MGEe0Ffu3pWaRZTHeTRcGFX004SzkC3LiJ9DDe/10QlWP5kJHxaNH858q7rDBA1A8ulgB1OtN97FQRrGwd63afN2xD6hyZzM0=) 2025-04-17 00:25:27.632014 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG4RCbHwTHn0lVyBG6LEkYLREjoBXftxkncGtmZYy2WJ8jnZ44drN27H1C3x6p05ySElkGTQ5IWMkaj6Eq0ASvM=) 2025-04-17 00:25:27.632180 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM6Dbp0q6ajsh6QhIPSQGzVER9zdkkVgcOGwLN5AwWWh) 2025-04-17 00:25:27.632379 | orchestrator | 2025-04-17 00:25:27.632429 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-17 00:25:27.632456 | orchestrator | Thursday 17 April 2025 00:25:27 +0000 (0:00:01.073) 0:00:20.763 ******** 2025-04-17 00:25:28.682408 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDlRVDuTTTdvV5uq0iPWGws7etLEbucvcENDUyzrG8/SlrAdt2YkE91L+GcQ490YiGyWB2CLLiOI94XDJ2+lV20XRD8KfhDusNQO5G61QdmDJgOzG1YI+cUXKrbFXfEZG9zNz78Nqi4lIYU4lA8DTYfaGO+DZtEnpnI0eTrAhgei3sHP1iCVxgYbHGrDRUKbxUdy9NuNR8VcQ59Yo1LBTuN/DkmMSm0rl0U2sDfPS5tqxwT3Mzqml06+M6yQRG6hbGE2FlX+/VZrbLgzZvASzOwFi7yN3CyXEMNQC4JamiXY5Jr7JwiSAIOBZXFizTZW5Yc8TFTNDbAkYkrkKxClKx+QqWt+Ksgu5J0oKP3Yy6chWO09SOXEF4b5BjibzDDyGqpmJT2YVeqAmxZjXHa7s8D8RLP5NsU9cAmfP4UGMLozqf+jaLug21V4jcnF6J1dArk9nQntkzq8mbJY25kpMBUqtybBR4F2pVFJImjslgj25THZh8MscQGwoNo3ydeOk=) 2025-04-17 00:25:28.683364 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEAYFw1PsSAHzyLWNmgMNa4j8LxLUF5eFU5Bzn/DUK4nkZ34P4wu/ZVrFJMSUFLIW/XAAH9GQuBvTPmucWBEAGw=) 2025-04-17 
00:25:28.683437 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL1ZgaB5Fy1mowSogJZgWY4OpKQi3EzXZZTLQ1gweOvB) 2025-04-17 00:25:28.683484 | orchestrator | 2025-04-17 00:25:28.683839 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-17 00:25:28.684271 | orchestrator | Thursday 17 April 2025 00:25:28 +0000 (0:00:01.049) 0:00:21.813 ******** 2025-04-17 00:25:29.722091 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYzqi3HINTdxjhTGo+V/br8c06llgd/oyLVm3TOq+8ArOfAksaBfTlCc3522Hpa+iAZE7mf+CZLeX1cHMznJqxjrhBlpYr6L75PoAecqnuKq//kiqkTRrXvSn3cMYxvB2A7eWCWkt8kkW02Zy7qfC3e8AbHZfrPDsuffuliHY7KQVWqjGccdo+SApVllnON7oAwUs33GChrWOb7guVCJ0sgT7v0seGkP8CEAqAl7u/MnVSN+hKIKX+V4us3Y8/Db6J8eLcZowaesrAz1xYw/3ft5H8ToZxHz95Munod81pKNo8On6bUpwjpfrE/wyAZlcEUTDFWba3Q29/bs2rR1zV0o9BPrikC6FcO2ZdAzlXZ2Hv5adxhCMfIcO5+MQ8Mzpocd7h/AxJzfhvuebHlCW4WGRjx1BoMbvQhDapVBUzTySYJSOH92eJd+XNx3lWxpji0+xecBoYcmBI33E2LX1MOsxzkRSeFzXIXrswF8mp8+WdZTMT+LBhzfkxMzM5o08=) 2025-04-17 00:25:29.722322 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPCT9V0JL1Lzfo4aaXTtQqe9XMEkuJl+UjRz1JkxY8WyMCTIHxCIMyaV8jXUr2RearURJItOk4WoK9XZFCqXZfo=) 2025-04-17 00:25:29.723020 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK/a35PAAIhZEToc6n8Ee7M7scdj6N1SYk89iJVVF7mY) 2025-04-17 00:25:29.723636 | orchestrator | 2025-04-17 00:25:29.724549 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-17 00:25:29.725506 | orchestrator | Thursday 17 April 2025 00:25:29 +0000 (0:00:01.039) 0:00:22.853 ******** 2025-04-17 00:25:30.755741 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD4oZpAnqaG+JWv5G8+hthCVUTTflpne+GqQnUR9ljqu4moJ926M4e2Z90YLUPbPUC2oGDpsE7UDQ67HNVFSs23aBab5gJHM5uVL6G5fpCUbRjiB9IPBy6Lo+74ug8Nb7f4LJeDR1UlVtKKn7iMTNyrn3pjc9zzAreZMhUQK7VzTXIetE62bno9lAVzqINPg1iMMn/8OZPUvRJHYlMJkQoAWkRyDhyv1TMIctgSmjXa+VgSCgv8jFb75qqVktZmuRTYB6SmHICPN+jny4I1Tdu72fBJgL5Jjudtb9DT2/psmzonsNa6OVH7dAYY7dNWkQIwumoeOknXbQndqpDZ2nz7iZBR8rfzlVocNxViJdYggF3OjNgcNweseVmJmZ8AAgmUzfoRdLhXlj93dWmt4BWd+BW9n8+bUiLEs6lduzbjh3krN4b1yDp4Xewis3hroTe9Dz4yS0zJ8XQZ5yBeek40Vuc6S2KO8aWFq6iRfyiBGVGPJLQzDSGnPLw8RU1sVmc=) 2025-04-17 00:25:30.755983 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNl9L38A6ej4PESp3W1QR6RF1uZTavWsUZ/A/rz40506svuX+dY9iL0x0TOJpHu/F030DPpWxsn2b14EvPCHDHY=) 2025-04-17 00:25:30.756416 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEA39EKaIlYyGt1tiY2/UVG9xd1Prh1A6fuElaZwR3Vf) 2025-04-17 00:25:30.756937 | orchestrator | 2025-04-17 00:25:30.757534 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-17 00:25:30.757955 | orchestrator | Thursday 17 April 2025 00:25:30 +0000 (0:00:01.034) 0:00:23.887 ******** 2025-04-17 00:25:31.797745 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDPflgymKniLKz+4b7U826S9Xzxfg7m2sK90QKGkAEiF6MorUGGdS9lOH68iwkRTs4KOPwQy+yqg3T1W3pciDSFY1N9WvePx2+gGFckEV9cBjmkIn7a10sdSIDq42q+tryAUopdImdTMzBG+GhuTaVuDw03zTCqAgUnCkXHCVaUS3Ct/01aR+hDf51S7ABdz2rh5QUfetlKBSjrZDqA6atha8E7WHVhzY81A8Vp6cTB9+dNkrIsrxH3rTcjLIvmIEf685w/xPGXqQ7KU4br0jZV9CidO2cft1JmBMfyEn+fV4QoDdDxDedWBwn/gNECcB3y8ioyqxrYaO5VUhrPC6A0mY3h5L/Gi+oEdjNt99AkanXQAqGNfiRyflpsxpcMer6lSkv/Le/KbKLDKxPodFmo4YPgTSru+koQpCLTVYyosXgryXIwCpnZuJoLzIAjDZbeUoOlqKu2LdNg73plbjA+os2vyvVcDk015heqmZsfdZe/lBQTbvaU/MfufB7tOy0=) 2025-04-17 00:25:31.797915 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPf+JAVjGSFh7a+UAX3Z28hLOqLNoo/skfZ9a9RbIEN/pgBUANhNMILHFIso8Hwq2dm2S3Z4JXSAJbl1ZLiBliA=) 2025-04-17 00:25:31.799384 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGQjLd9wxw/W+4KRLqAElEqo7jV5+/1Lf6gLL1VLK1XI) 2025-04-17 00:25:31.800317 | orchestrator | 2025-04-17 00:25:31.800342 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-17 00:25:31.801117 | orchestrator | Thursday 17 April 2025 00:25:31 +0000 (0:00:01.042) 0:00:24.930 ******** 2025-04-17 00:25:32.845566 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKBq+1V6Z2ZkFnjkMze1x8QHyUWsPV2+Opfj8NEIlgWz) 2025-04-17 00:25:32.845893 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKPNxNAdK7rnJ6quEI9pdQ8WZpNQlWmo1ypQpiaMoQ+9ZXot1+QSfCdkZmxI7sYkg8Jj28L4DXpcMfLAsIodr1vfAhwbvy2it/KWGA40gsSIx22vpYGdKbvCf6BCUtIWg8cnianpttch49DGBCVEHqg3MqE2gLd8EAPMneEi/p1gOdzpk9yGL0b1+40NLOfqwtkZFpGvICzgb93XYEUhPOYppqoGIh2Ib4Ndzgx8On9jwkrcbx0T2JvH96ymv+F1UjEctm77Ehjj6f2tuCHU1QGtZ7W7VM180DiaQCZRDGEZTQ85uGIuTk/EcI3C3SCZwbs+G6i98wHjdzQXfQh36UdECN0ULUOa8+AjDf5r8TTbHvejwpIvCQG1eJMtP6PzfnbL0MZm77ITKBUn2yXal9alAChJ9v1r/qEzwtMCBurd4k+ea5LTgha9CGDlTC0dploDBPcTFQwuOyXlO0aOPJb1gvNlgP7WJS8gul9tZaLWbUCPiO7XVTY5iDXeJUQsU=) 2025-04-17 00:25:32.846557 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJb+vcoeayw8+wW08Nxof9PZQc6CUHBdYZoWPCeMPAyYW4riDOKfjp+4d3FZ1LkPzZ5ztfJd0epseq3uZYFkjpw=) 2025-04-17 00:25:32.847377 | orchestrator | 2025-04-17 00:25:32.848174 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-04-17 00:25:32.848543 | orchestrator | Thursday 17 April 2025 00:25:32 +0000 (0:00:01.045) 0:00:25.975 ******** 2025-04-17 00:25:33.006078 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-04-17 00:25:33.006916 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-04-17 00:25:33.006962 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-04-17 00:25:33.007848 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-04-17 00:25:33.008556 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-04-17 00:25:33.009643 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-04-17 00:25:33.010104 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-04-17 00:25:33.010678 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:25:33.011300 | orchestrator | 2025-04-17 00:25:33.011857 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] 
************* 2025-04-17 00:25:33.012340 | orchestrator | Thursday 17 April 2025 00:25:33 +0000 (0:00:00.163) 0:00:26.139 ******** 2025-04-17 00:25:33.076123 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:25:33.077303 | orchestrator | 2025-04-17 00:25:33.077342 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-04-17 00:25:33.077657 | orchestrator | Thursday 17 April 2025 00:25:33 +0000 (0:00:00.069) 0:00:26.209 ******** 2025-04-17 00:25:33.126253 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:25:33.126868 | orchestrator | 2025-04-17 00:25:33.127437 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-04-17 00:25:33.128143 | orchestrator | Thursday 17 April 2025 00:25:33 +0000 (0:00:00.051) 0:00:26.260 ******** 2025-04-17 00:25:33.832454 | orchestrator | changed: [testbed-manager] 2025-04-17 00:25:33.832975 | orchestrator | 2025-04-17 00:25:33.835316 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 00:25:33.836053 | orchestrator | 2025-04-17 00:25:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-17 00:25:33.836084 | orchestrator | 2025-04-17 00:25:33 | INFO  | Please wait and do not abort execution. 2025-04-17 00:25:33.836107 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-17 00:25:33.837355 | orchestrator | 2025-04-17 00:25:33.837923 | orchestrator | Thursday 17 April 2025 00:25:33 +0000 (0:00:00.703) 0:00:26.963 ******** 2025-04-17 00:25:33.838714 | orchestrator | =============================================================================== 2025-04-17 00:25:33.839432 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.70s 2025-04-17 00:25:33.840365 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.15s 2025-04-17 00:25:33.840592 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-04-17 00:25:33.841248 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-04-17 00:25:33.841969 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-04-17 00:25:33.842547 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-04-17 00:25:33.843172 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-04-17 00:25:33.843589 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-04-17 00:25:33.844446 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-04-17 00:25:33.844950 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-04-17 00:25:33.845669 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-04-17 00:25:33.846244 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-04-17 00:25:33.847447 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-04-17 00:25:33.847942 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-04-17 00:25:33.848488 | 
orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-04-17 00:25:33.848908 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-04-17 00:25:33.849333 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.70s 2025-04-17 00:25:33.849770 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-04-17 00:25:33.850168 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-04-17 00:25:33.850710 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-04-17 00:25:34.174422 | orchestrator | + osism apply squid 2025-04-17 00:25:35.568585 | orchestrator | 2025-04-17 00:25:35 | INFO  | Task 8b3dbd7c-a220-460b-8976-b33a16af3afc (squid) was prepared for execution. 2025-04-17 00:25:38.547381 | orchestrator | 2025-04-17 00:25:35 | INFO  | It takes a moment until task 8b3dbd7c-a220-460b-8976-b33a16af3afc (squid) has been started and output is visible here. 2025-04-17 00:25:38.547602 | orchestrator | 2025-04-17 00:25:38.548300 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-04-17 00:25:38.548436 | orchestrator | 2025-04-17 00:25:38.551754 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-04-17 00:25:38.551936 | orchestrator | Thursday 17 April 2025 00:25:38 +0000 (0:00:00.104) 0:00:00.104 ******** 2025-04-17 00:25:38.635559 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-04-17 00:25:38.636562 | orchestrator | 2025-04-17 00:25:38.637912 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-04-17 00:25:38.638554 | orchestrator | Thursday 17 April 2025 00:25:38 +0000 (0:00:00.092) 0:00:00.197 ******** 2025-04-17 00:25:39.983196 | orchestrator | ok: [testbed-manager] 2025-04-17 00:25:39.983409 | orchestrator | 2025-04-17 00:25:39.983691 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-04-17 00:25:39.984861 | orchestrator | Thursday 17 April 2025 00:25:39 +0000 (0:00:01.346) 0:00:01.543 ******** 2025-04-17 00:25:41.096586 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-04-17 00:25:41.096874 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-04-17 00:25:41.099257 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-04-17 00:25:41.099780 | orchestrator | 2025-04-17 00:25:41.100234 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-04-17 00:25:41.100553 | orchestrator | Thursday 17 April 2025 00:25:41 +0000 (0:00:01.112) 0:00:02.656 ******** 2025-04-17 00:25:42.150012 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-04-17 00:25:42.150589 | orchestrator | 2025-04-17 00:25:42.151345 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-04-17 00:25:42.152043 | orchestrator | Thursday 17 April 2025 00:25:42 +0000 (0:00:01.053) 0:00:03.709 ******** 2025-04-17 00:25:42.505812 | orchestrator | ok: [testbed-manager] 2025-04-17 00:25:42.506268 | orchestrator | 2025-04-17 00:25:42.507457 | 
orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-04-17 00:25:42.508306 | orchestrator | Thursday 17 April 2025 00:25:42 +0000 (0:00:00.357) 0:00:04.067 ******** 2025-04-17 00:25:43.436960 | orchestrator | changed: [testbed-manager] 2025-04-17 00:25:43.437770 | orchestrator | 2025-04-17 00:25:43.437877 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-04-17 00:25:43.438921 | orchestrator | Thursday 17 April 2025 00:25:43 +0000 (0:00:00.930) 0:00:04.998 ******** 2025-04-17 00:26:15.043894 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-04-17 00:26:27.338099 | orchestrator | ok: [testbed-manager] 2025-04-17 00:26:27.338240 | orchestrator | 2025-04-17 00:26:27.338261 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-04-17 00:26:27.338276 | orchestrator | Thursday 17 April 2025 00:26:15 +0000 (0:00:31.595) 0:00:36.593 ******** 2025-04-17 00:26:27.338305 | orchestrator | changed: [testbed-manager] 2025-04-17 00:27:27.414810 | orchestrator | 2025-04-17 00:27:27.415012 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-04-17 00:27:27.415034 | orchestrator | Thursday 17 April 2025 00:26:27 +0000 (0:00:12.299) 0:00:48.893 ******** 2025-04-17 00:27:27.415065 | orchestrator | Pausing for 60 seconds 2025-04-17 00:27:27.482820 | orchestrator | changed: [testbed-manager] 2025-04-17 00:27:27.482986 | orchestrator | 2025-04-17 00:27:27.483010 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-04-17 00:27:27.483033 | orchestrator | Thursday 17 April 2025 00:27:27 +0000 (0:01:00.074) 0:01:48.968 ******** 2025-04-17 00:27:27.483074 | orchestrator | ok: [testbed-manager] 2025-04-17 00:27:27.483357 | orchestrator | 2025-04-17 00:27:27.483948 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-04-17 00:27:27.485033 | orchestrator | Thursday 17 April 2025 00:27:27 +0000 (0:00:00.075) 0:01:49.044 ******** 2025-04-17 00:27:28.074506 | orchestrator | changed: [testbed-manager] 2025-04-17 00:27:28.075742 | orchestrator | 2025-04-17 00:27:28.077929 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 00:27:28.078417 | orchestrator | 2025-04-17 00:27:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-17 00:27:28.078455 | orchestrator | 2025-04-17 00:27:28 | INFO  | Please wait and do not abort execution. 
2025-04-17 00:27:28.078478 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 00:27:28.079552 | orchestrator | 2025-04-17 00:27:28.081080 | orchestrator | Thursday 17 April 2025 00:27:28 +0000 (0:00:00.591) 0:01:49.635 ******** 2025-04-17 00:27:28.081997 | orchestrator | =============================================================================== 2025-04-17 00:27:28.082158 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-04-17 00:27:28.082254 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.60s 2025-04-17 00:27:28.082488 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.30s 2025-04-17 00:27:28.083472 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.35s 2025-04-17 00:27:28.083718 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.11s 2025-04-17 00:27:28.083753 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.05s 2025-04-17 00:27:28.084096 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.93s 2025-04-17 00:27:28.084587 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s 2025-04-17 00:27:28.084986 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s 2025-04-17 00:27:28.085294 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-04-17 00:27:28.085528 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2025-04-17 00:27:28.478198 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-04-17 00:27:28.481739 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-04-17 00:27:28.481790 | orchestrator | ++ semver 8.1.0 9.0.0 2025-04-17 00:27:28.528169 | orchestrator | + [[ -1 -lt 0 ]] 2025-04-17 00:27:28.530167 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-04-17 00:27:28.530222 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml 2025-04-17 00:27:28.530251 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-04-17 00:27:28.532394 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-04-17 00:27:28.538908 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-04-17 00:27:29.935804 | orchestrator | 2025-04-17 00:27:29 | INFO  | Task 16315438-9eaa-4fc2-bdc3-0ca7e2690dfa (operator) was prepared for execution. 2025-04-17 00:27:32.862633 | orchestrator | 2025-04-17 00:27:29 | INFO  | It takes a moment until task 16315438-9eaa-4fc2-bdc3-0ca7e2690dfa (operator) has been started and output is visible here. 
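The squid play above deploys the proxy as a Docker Compose service under /opt/squid and only completes once the container reports healthy (hence the retries on "Manage squid service" and the 60-second pause in the handlers). A rough sketch of how that state could be inspected by hand, assuming the compose service and container are both named squid and that the default proxy port 3128 is used (neither detail is visible in this log):

  cd /opt/squid                  # compose file location taken from the log above
  docker compose ps              # the "Wait for an healthy squid service" handler
                                 # waits for this container's healthcheck to pass
  docker inspect --format '{{.State.Health.Status}}' squid   # expected: healthy (container name assumed)
  curl -x http://127.0.0.1:3128 -I https://github.com         # send one request through the proxy (port assumed)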
2025-04-17 00:27:32.862821 | orchestrator | 2025-04-17 00:27:32.863084 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-04-17 00:27:32.863122 | orchestrator | 2025-04-17 00:27:32.863196 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-17 00:27:32.863265 | orchestrator | Thursday 17 April 2025 00:27:32 +0000 (0:00:00.086) 0:00:00.086 ******** 2025-04-17 00:27:37.058885 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:27:37.059510 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:27:37.059701 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:27:37.060301 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:27:37.060719 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:27:37.061320 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:27:37.061708 | orchestrator | 2025-04-17 00:27:37.062146 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-04-17 00:27:37.062538 | orchestrator | Thursday 17 April 2025 00:27:37 +0000 (0:00:04.196) 0:00:04.283 ******** 2025-04-17 00:27:37.844753 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:27:37.845605 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:27:37.845947 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:27:37.847328 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:27:37.852200 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:27:37.853048 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:27:37.853076 | orchestrator | 2025-04-17 00:27:37.853100 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-04-17 00:27:37.854658 | orchestrator | 2025-04-17 00:27:37.855380 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-04-17 00:27:37.855408 | orchestrator | Thursday 17 April 2025 00:27:37 +0000 (0:00:00.788) 0:00:05.071 ******** 2025-04-17 00:27:37.911483 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:27:37.930791 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:27:37.950252 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:27:37.985020 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:27:37.985331 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:27:37.985772 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:27:37.989397 | orchestrator | 2025-04-17 00:27:37.989904 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-04-17 00:27:37.990097 | orchestrator | Thursday 17 April 2025 00:27:37 +0000 (0:00:00.141) 0:00:05.212 ******** 2025-04-17 00:27:38.053260 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:27:38.073557 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:27:38.094555 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:27:38.133488 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:27:38.134200 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:27:38.134496 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:27:38.135149 | orchestrator | 2025-04-17 00:27:38.135863 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-04-17 00:27:38.135984 | orchestrator | Thursday 17 April 2025 00:27:38 +0000 (0:00:00.148) 0:00:05.360 ******** 2025-04-17 00:27:38.737540 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:27:38.738106 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:27:38.738971 | orchestrator | changed: [testbed-node-1] 2025-04-17 
00:27:38.742186 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:27:38.742595 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:27:38.742631 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:27:38.742654 | orchestrator | 2025-04-17 00:27:38.742688 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-04-17 00:27:38.743157 | orchestrator | Thursday 17 April 2025 00:27:38 +0000 (0:00:00.603) 0:00:05.964 ******** 2025-04-17 00:27:39.556077 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:27:39.557786 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:27:39.559859 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:27:39.563532 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:27:39.563750 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:27:39.564592 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:27:39.568895 | orchestrator | 2025-04-17 00:27:39.569488 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-04-17 00:27:39.570009 | orchestrator | Thursday 17 April 2025 00:27:39 +0000 (0:00:00.813) 0:00:06.777 ******** 2025-04-17 00:27:40.672048 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-04-17 00:27:40.673166 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-04-17 00:27:40.674109 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-04-17 00:27:40.677372 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-04-17 00:27:40.678232 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-04-17 00:27:40.679381 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-04-17 00:27:40.679740 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-04-17 00:27:40.680630 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-04-17 00:27:40.681401 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-04-17 00:27:40.682423 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-04-17 00:27:40.682616 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-04-17 00:27:40.683466 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-04-17 00:27:40.684220 | orchestrator | 2025-04-17 00:27:40.685120 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-04-17 00:27:40.685523 | orchestrator | Thursday 17 April 2025 00:27:40 +0000 (0:00:01.120) 0:00:07.898 ******** 2025-04-17 00:27:41.909736 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:27:41.910360 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:27:41.910418 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:27:41.910899 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:27:41.912087 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:27:41.912330 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:27:41.913478 | orchestrator | 2025-04-17 00:27:41.914086 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-04-17 00:27:41.914660 | orchestrator | Thursday 17 April 2025 00:27:41 +0000 (0:00:01.235) 0:00:09.134 ******** 2025-04-17 00:27:43.109672 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-04-17 00:27:43.109914 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-04-17 00:27:43.111504 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-04-17 00:27:43.132457 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-04-17 00:27:43.133894 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-04-17 00:27:43.134128 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-04-17 00:27:43.134679 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-04-17 00:27:43.135187 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-04-17 00:27:43.135562 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-04-17 00:27:43.136218 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-04-17 00:27:43.136743 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-04-17 00:27:43.137285 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-04-17 00:27:43.137942 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-04-17 00:27:43.138446 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-04-17 00:27:43.139000 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-04-17 00:27:43.139401 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-04-17 00:27:43.140320 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-04-17 00:27:43.140446 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-04-17 00:27:43.141292 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-04-17 00:27:43.141765 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-04-17 00:27:43.142087 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-04-17 00:27:43.145029 | orchestrator | 2025-04-17 00:27:43.145242 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-04-17 00:27:43.145689 | orchestrator | Thursday 17 April 2025 00:27:43 +0000 (0:00:01.226) 0:00:10.360 ******** 2025-04-17 00:27:43.691289 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:27:43.691747 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:27:43.692229 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:27:43.692770 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:27:43.693761 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:27:43.695383 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:27:43.695773 | orchestrator | 2025-04-17 00:27:43.696213 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-04-17 00:27:43.696687 | orchestrator | Thursday 17 April 2025 00:27:43 +0000 (0:00:00.556) 0:00:10.916 ******** 2025-04-17 00:27:43.756523 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:27:43.779691 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:27:43.802824 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:27:43.842860 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:27:43.843694 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:27:43.844098 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:27:43.844459 | orchestrator | 2025-04-17 00:27:43.844876 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-04-17 00:27:43.845481 | orchestrator | Thursday 17 April 2025 00:27:43 +0000 (0:00:00.153) 0:00:11.070 ********
2025-04-17 00:27:44.523980 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-04-17 00:27:44.526193 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-04-17 00:27:44.526509 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-04-17 00:27:44.527103 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-04-17 00:27:44.527600 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:27:44.527979 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:27:44.528379 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:27:44.528745 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:27:44.529264 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-04-17 00:27:44.529611 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:27:44.530205 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-04-17 00:27:44.530676 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:27:44.531068 | orchestrator |
2025-04-17 00:27:44.531467 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-04-17 00:27:44.531744 | orchestrator | Thursday 17 April 2025 00:27:44 +0000 (0:00:00.678) 0:00:11.748 ********
2025-04-17 00:27:44.562917 | orchestrator | skipping: [testbed-node-0]
2025-04-17 00:27:44.596135 | orchestrator | skipping: [testbed-node-1]
2025-04-17 00:27:44.621872 | orchestrator | skipping: [testbed-node-2]
2025-04-17 00:27:44.641311 | orchestrator | skipping: [testbed-node-3]
2025-04-17 00:27:44.670919 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:27:44.673358 | orchestrator | skipping: [testbed-node-5]
2025-04-17 00:27:44.676172 | orchestrator |
2025-04-17 00:27:44.677536 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-04-17 00:27:44.677691 | orchestrator | Thursday 17 April 2025 00:27:44 +0000 (0:00:00.148) 0:00:11.896 ********
2025-04-17 00:27:44.716903 | orchestrator | skipping: [testbed-node-0]
2025-04-17 00:27:44.740645 | orchestrator | skipping: [testbed-node-1]
2025-04-17 00:27:44.762425 | orchestrator | skipping: [testbed-node-2]
2025-04-17 00:27:44.791410 | orchestrator | skipping: [testbed-node-3]
2025-04-17 00:27:44.822595 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:27:44.825253 | orchestrator | skipping: [testbed-node-5]
2025-04-17 00:27:44.825630 | orchestrator |
2025-04-17 00:27:44.825682 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-04-17 00:27:44.825717 | orchestrator | Thursday 17 April 2025 00:27:44 +0000 (0:00:00.151) 0:00:12.048 ********
2025-04-17 00:27:44.871989 | orchestrator | skipping: [testbed-node-0]
2025-04-17 00:27:44.916406 | orchestrator | skipping: [testbed-node-1]
2025-04-17 00:27:44.941258 | orchestrator | skipping: [testbed-node-2]
2025-04-17 00:27:44.972119 | orchestrator | skipping: [testbed-node-3]
2025-04-17 00:27:44.972417 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:27:44.974282 | orchestrator | skipping: [testbed-node-5]
2025-04-17 00:27:44.974695 | orchestrator |
2025-04-17 00:27:44.976420 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-04-17 00:27:45.724658 | orchestrator | Thursday 17 April 2025 00:27:44 +0000 (0:00:00.150) 0:00:12.198 ********
2025-04-17 00:27:45.724836 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:27:45.725145 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:27:45.726745 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:27:45.727705 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:27:45.730344 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:27:45.730677 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:27:45.732601 | orchestrator |
2025-04-17 00:27:45.733014 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-04-17 00:27:45.736758 | orchestrator | Thursday 17 April 2025 00:27:45 +0000 (0:00:00.750) 0:00:12.949 ********
2025-04-17 00:27:45.812848 | orchestrator | skipping: [testbed-node-0]
2025-04-17 00:27:45.839562 | orchestrator | skipping: [testbed-node-1]
2025-04-17 00:27:45.942615 | orchestrator | skipping: [testbed-node-2]
2025-04-17 00:27:45.942984 | orchestrator | skipping: [testbed-node-3]
2025-04-17 00:27:45.943025 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:27:45.943646 | orchestrator | skipping: [testbed-node-5]
2025-04-17 00:27:45.944641 | orchestrator |
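The "Set password" / "Unset & lock password" pair above is an either/or branch: when a password is configured the first task sets it and the second is skipped, otherwise the account password is locked. A minimal sketch of the setting branch, assuming the illustrative variables operator_user and operator_password:

    - name: Set password
      ansible.builtin.user:
        name: "{{ operator_user }}"                                   # illustrative variable
        password: "{{ operator_password | password_hash('sha512') }}" # never store plaintext
      when: operator_password | default('') | length > 0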
2025-04-17 00:27:45.945247 | orchestrator | PLAY RECAP *********************************************************************
2025-04-17 00:27:45.945699 | orchestrator | 2025-04-17 00:27:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-17 00:27:45.946123 | orchestrator | 2025-04-17 00:27:45 | INFO  | Please wait and do not abort execution.
2025-04-17 00:27:45.946875 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-17 00:27:45.947629 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-17 00:27:45.948288 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-17 00:27:45.949208 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-17 00:27:45.949398 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-17 00:27:45.950153 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-17 00:27:45.950513 | orchestrator |
2025-04-17 00:27:45.951126 | orchestrator | Thursday 17 April 2025 00:27:45 +0000 (0:00:00.220) 0:00:13.170 ********
2025-04-17 00:27:45.951541 | orchestrator | ===============================================================================
2025-04-17 00:27:45.952129 | orchestrator | Gathering Facts --------------------------------------------------------- 4.20s
2025-04-17 00:27:45.952710 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.24s
2025-04-17 00:27:45.953151 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.23s
2025-04-17 00:27:45.953407 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.12s
2025-04-17 00:27:45.953879 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s
2025-04-17 00:27:45.954144 | orchestrator | Do not require tty for all users ---------------------------------------- 0.79s
2025-04-17 00:27:45.954540 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.75s
2025-04-17 00:27:45.955050 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.68s
2025-04-17 00:27:45.955349 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s
2025-04-17 00:27:45.955660 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s
2025-04-17 00:27:45.956073 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s
2025-04-17 00:27:45.956340 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s
2025-04-17 00:27:45.956715 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s
2025-04-17 00:27:45.957420 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s
2025-04-17 00:27:45.957774 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s
2025-04-17 00:27:45.957996 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2025-04-17 00:27:45.958392 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s
2025-04-17 00:27:46.346904 | orchestrator | + osism apply --environment custom facts
2025-04-17 00:27:47.709123 | orchestrator | 2025-04-17 00:27:47 | INFO  | Trying to run play facts in environment custom
2025-04-17 00:27:47.755910 | orchestrator | 2025-04-17 00:27:47 | INFO  | Task 2e17d73e-78fe-4d5b-962d-4377a068b16d (facts) was prepared for execution.
2025-04-17 00:27:50.651379 | orchestrator | 2025-04-17 00:27:47 | INFO  | It takes a moment until task 2e17d73e-78fe-4d5b-962d-4377a068b16d (facts) has been started and output is visible here.
2025-04-17 00:27:50.651622 | orchestrator |
2025-04-17 00:27:50.654353 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-04-17 00:27:50.654745 | orchestrator |
2025-04-17 00:27:50.657475 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-04-17 00:27:50.657938 | orchestrator | Thursday 17 April 2025 00:27:50 +0000 (0:00:00.078) 0:00:00.078 ********
2025-04-17 00:27:51.832176 | orchestrator | ok: [testbed-manager]
2025-04-17 00:27:52.873242 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:27:52.873528 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:27:52.876967 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:27:52.877119 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:27:52.877723 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:27:52.878012 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:27:52.878278 | orchestrator |
2025-04-17 00:27:52.878621 | orchestrator | TASK [Copy fact file] **********************************************************
2025-04-17 00:27:52.879169 | orchestrator | Thursday 17 April 2025 00:27:52 +0000 (0:00:02.225) 0:00:02.303 ********
2025-04-17 00:27:53.966757 | orchestrator | ok: [testbed-manager]
2025-04-17 00:27:54.817407 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:27:54.817775 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:27:54.820208 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:27:54.822783 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:27:54.823623 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:27:54.824614 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:27:54.825480 | orchestrator |
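The two tasks above use Ansible's local ("custom") facts mechanism: a *.fact file placed in /etc/ansible/facts.d must be valid JSON or INI, or an executable that prints JSON, and its content is exposed under ansible_local after the next fact gathering. A minimal sketch of the pattern (the file name testbed_network_devices.fact is illustrative, not taken from this log):

    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        owner: root
        group: root
        mode: "0755"

    - name: Copy fact file
      ansible.builtin.copy:
        src: testbed_network_devices.fact  # illustrative name; must end in .fact
        dest: /etc/ansible/facts.d/
        mode: "0755"                       # executable facts must be runnable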
2025-04-17 00:27:54.827139 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-04-17 00:27:54.827314 | orchestrator |
2025-04-17 00:27:54.828159 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-04-17 00:27:54.828666 | orchestrator | Thursday 17 April 2025 00:27:54 +0000 (0:00:01.942) 0:00:04.246 ********
2025-04-17 00:27:54.919083 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:27:54.919737 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:27:54.920755 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:27:54.923938 | orchestrator |
2025-04-17 00:27:54.924609 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-04-17 00:27:54.929205 | orchestrator | Thursday 17 April 2025 00:27:54 +0000 (0:00:00.103) 0:00:04.350 ********
2025-04-17 00:27:55.055435 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:27:55.055735 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:27:55.055992 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:27:55.060048 | orchestrator |
2025-04-17 00:27:55.060210 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-04-17 00:27:55.174072 | orchestrator | Thursday 17 April 2025 00:27:55 +0000 (0:00:00.134) 0:00:04.484 ********
2025-04-17 00:27:55.174241 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:27:55.176289 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:27:55.176902 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:27:55.176944 | orchestrator |
2025-04-17 00:27:55.176962 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-04-17 00:27:55.176984 | orchestrator | Thursday 17 April 2025 00:27:55 +0000 (0:00:00.117) 0:00:04.602 ********
2025-04-17 00:27:55.318483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-17 00:27:55.319523 | orchestrator |
2025-04-17 00:27:55.322442 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-04-17 00:27:55.730821 | orchestrator | Thursday 17 April 2025 00:27:55 +0000 (0:00:00.145) 0:00:04.748 ********
2025-04-17 00:27:55.731087 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:27:55.731187 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:27:55.731211 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:27:55.732095 | orchestrator |
2025-04-17 00:27:55.732706 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-04-17 00:27:55.733542 | orchestrator | Thursday 17 April 2025 00:27:55 +0000 (0:00:00.409) 0:00:05.157 ********
2025-04-17 00:27:55.835029 | orchestrator | skipping: [testbed-node-3]
2025-04-17 00:27:55.835559 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:27:55.835660 | orchestrator | skipping: [testbed-node-5]
2025-04-17 00:27:55.835837 | orchestrator |
2025-04-17 00:27:55.837030 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-04-17 00:27:55.837456 | orchestrator | Thursday 17 April 2025 00:27:55 +0000 (0:00:00.107) 0:00:05.265 ********
2025-04-17 00:27:56.777313 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:27:56.778409 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:27:56.779794 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:27:56.781047 | orchestrator |
2025-04-17 00:27:56.782170 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-04-17 00:27:56.783758 | orchestrator | Thursday 17 April 2025 00:27:56 +0000 (0:00:00.942) 0:00:06.207 ********
2025-04-17 00:27:57.227070 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:27:57.227806 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:27:57.229090 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:27:57.230619 | orchestrator |
2025-04-17 00:27:57.231190 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-04-17 00:27:57.232128 | orchestrator | Thursday 17 April 2025 00:27:57 +0000 (0:00:00.449) 0:00:06.656 ********
2025-04-17 00:27:58.219897 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:27:58.222353 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:27:58.222572 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:27:58.223205 | orchestrator |
2025-04-17 00:27:58.223875 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-04-17 00:27:58.224471 | orchestrator | Thursday 17 April 2025 00:27:58 +0000 (0:00:00.989) 0:00:07.646 ********
2025-04-17 00:28:10.498457 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:28:10.498858 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:28:10.500372 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:28:10.500411 | orchestrator |
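On Ubuntu 24.04 the repository role drops the classic /etc/apt/sources.list (the "Remove sources.list file" task) and ships a deb822-style ubuntu.sources file instead (the "Copy ubuntu.sources file" task); the twelve-second "Update package cache" run then rebuilds the apt cache against the new sources. A minimal sketch of such a task, presumably templated in the real role; the mirror URI, suites and keyring path here are illustrative:

    - name: Copy ubuntu.sources file
      ansible.builtin.copy:
        dest: /etc/apt/sources.list.d/ubuntu.sources
        mode: "0644"
        content: |
          Types: deb
          URIs: http://archive.ubuntu.com/ubuntu
          Suites: noble noble-updates noble-backports
          Components: main restricted universe multiverse
          Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
      notify: Force update of package cache   # handler name as seen later in this log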
2025-04-17 00:28:10.500434 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-04-17 00:28:10.501135 | orchestrator | Thursday 17 April 2025 00:28:10 +0000 (0:00:12.277) 0:00:19.924 ********
2025-04-17 00:28:10.572912 | orchestrator | skipping: [testbed-node-3]
2025-04-17 00:28:10.573140 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:28:10.573168 | orchestrator | skipping: [testbed-node-5]
2025-04-17 00:28:10.573212 | orchestrator |
2025-04-17 00:28:10.573383 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-04-17 00:28:10.573743 | orchestrator | Thursday 17 April 2025 00:28:10 +0000 (0:00:00.079) 0:00:20.004 ********
2025-04-17 00:28:17.260216 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:28:17.260417 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:28:17.261750 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:28:17.262835 | orchestrator |
2025-04-17 00:28:17.263740 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-04-17 00:28:17.264794 | orchestrator | Thursday 17 April 2025 00:28:17 +0000 (0:00:06.683) 0:00:26.687 ********
2025-04-17 00:28:17.658142 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:28:17.658324 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:28:17.659279 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:28:17.659631 | orchestrator |
2025-04-17 00:28:17.660387 | orchestrator | TASK [Copy fact files] *********************************************************
2025-04-17 00:28:17.661053 | orchestrator | Thursday 17 April 2025 00:28:17 +0000 (0:00:00.399) 0:00:27.086 ********
2025-04-17 00:28:20.979738 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-04-17 00:28:20.981652 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-04-17 00:28:20.982302 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-04-17 00:28:20.982345 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-04-17 00:28:20.984996 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-04-17 00:28:20.985468 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-04-17 00:28:20.988165 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-04-17 00:28:20.988285 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-04-17 00:28:20.989221 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-04-17 00:28:20.989676 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-04-17 00:28:20.990137 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-04-17 00:28:20.990690 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-04-17 00:28:20.992177 | orchestrator |
2025-04-17 00:28:20.992566 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-04-17 00:28:20.993136 | orchestrator | Thursday 17 April 2025 00:28:20 +0000 (0:00:03.320) 0:00:30.407 ********
2025-04-17 00:28:21.950221 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:28:21.950727 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:28:21.951002 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:28:21.952669 | orchestrator |
2025-04-17 00:28:21.953517 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-04-17 00:28:21.953869 | orchestrator |
2025-04-17 00:28:21.955058 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-04-17 00:28:21.955307 | orchestrator | Thursday 17 April 2025 00:28:21 +0000 (0:00:00.972) 0:00:31.380 ********
2025-04-17 00:28:23.611931 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:28:26.745313 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:28:26.745676 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:28:26.745736 | orchestrator | ok: [testbed-manager]
2025-04-17 00:28:26.746084 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:28:26.747991 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:28:26.748237 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:28:26.748694 | orchestrator |
2025-04-17 00:28:26.749225 | orchestrator | PLAY RECAP *********************************************************************
2025-04-17 00:28:26.749463 | orchestrator | 2025-04-17 00:28:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-17 00:28:26.749583 | orchestrator | 2025-04-17 00:28:26 | INFO  | Please wait and do not abort execution.
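The fact files copied for the Ceph play (testbed_ceph_devices, testbed_ceph_devices_all, testbed_ceph_osd_devices, testbed_ceph_osd_devices_all) surface as local facts once the "Gathers facts about hosts" play has run. A minimal usage sketch; the structure of the value depends on what each fact script emits:

    - name: Show the Ceph OSD devices collected for this host
      ansible.builtin.debug:
        var: ansible_local.testbed_ceph_osd_devices  # fact name from the copy loop above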
2025-04-17 00:28:26.750168 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-17 00:28:26.750543 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-17 00:28:26.750931 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-17 00:28:26.751311 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-17 00:28:26.751759 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-17 00:28:26.752057 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-17 00:28:26.752343 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-17 00:28:26.752783 | orchestrator |
2025-04-17 00:28:26.753156 | orchestrator | Thursday 17 April 2025 00:28:26 +0000 (0:00:04.795) 0:00:36.175 ********
2025-04-17 00:28:26.753526 | orchestrator | ===============================================================================
2025-04-17 00:28:26.753885 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.28s
2025-04-17 00:28:26.754334 | orchestrator | Install required packages (Debian) -------------------------------------- 6.68s
2025-04-17 00:28:26.754713 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.80s
2025-04-17 00:28:26.755016 | orchestrator | Copy fact files --------------------------------------------------------- 3.32s
2025-04-17 00:28:26.755563 | orchestrator | Create custom facts directory ------------------------------------------- 2.23s
2025-04-17 00:28:26.756120 | orchestrator | Copy fact file ---------------------------------------------------------- 1.94s
2025-04-17 00:28:26.756546 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 0.99s
2025-04-17 00:28:26.756956 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 0.97s
2025-04-17 00:28:26.757311 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.94s
2025-04-17 00:28:26.757458 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s
2025-04-17 00:28:26.757824 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.41s
2025-04-17 00:28:26.758179 | orchestrator | Create custom facts directory ------------------------------------------- 0.40s
2025-04-17 00:28:26.758685 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2025-04-17 00:28:26.759393 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.13s
2025-04-17 00:28:26.759790 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.12s
2025-04-17 00:28:26.760243 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2025-04-17 00:28:26.760561 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2025-04-17 00:28:26.760938 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s
2025-04-17 00:28:27.134057 | orchestrator | + osism apply bootstrap
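As with the facts run above, osism apply hands the named play to the OSISM manager, which queues it as a task and streams the output back once a worker picks it up; that is what the "prepared for execution" and "It takes a moment" INFO lines around each run signal. The bootstrap play then starts by sorting hosts into groups, for which ansible.builtin.group_by is the usual building block; a rough sketch, where the group key and the bootstrap state fact are illustrative assumptions rather than the role's actual names:

    - name: Group hosts based on state bootstrap
      ansible.builtin.group_by:
        key: "bootstrap_{{ 'done' if ansible_local.bootstrap is defined else 'pending' }}"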
2025-04-17 00:28:28.486192 | orchestrator | 2025-04-17 00:28:28 | INFO  | Task 3ea6e33e-0db8-4a06-b882-c9b576533513 (bootstrap) was prepared for execution.
2025-04-17 00:28:31.552901 | orchestrator | 2025-04-17 00:28:28 | INFO  | It takes a moment until task 3ea6e33e-0db8-4a06-b882-c9b576533513 (bootstrap) has been started and output is visible here.
2025-04-17 00:28:31.553065 | orchestrator |
2025-04-17 00:28:31.554433 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-04-17 00:28:31.554976 | orchestrator |
2025-04-17 00:28:31.557017 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-04-17 00:28:31.557687 | orchestrator | Thursday 17 April 2025 00:28:31 +0000 (0:00:00.112) 0:00:00.112 ********
2025-04-17 00:28:31.626979 | orchestrator | ok: [testbed-manager]
2025-04-17 00:28:31.653562 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:28:31.680751 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:28:31.707509 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:28:31.807967 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:28:31.808165 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:28:31.808709 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:28:31.809465 | orchestrator |
2025-04-17 00:28:31.809807 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-04-17 00:28:31.811012 | orchestrator |
2025-04-17 00:28:31.812302 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-04-17 00:28:35.304255 | orchestrator | Thursday 17 April 2025 00:28:31 +0000 (0:00:00.257) 0:00:00.370 ********
2025-04-17 00:28:35.304441 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:28:35.304516 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:28:35.305298 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:28:35.305928 | orchestrator | ok: [testbed-manager]
2025-04-17 00:28:35.307746 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:28:35.308214 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:28:35.308245 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:28:35.308560 | orchestrator |
2025-04-17 00:28:35.308762 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-04-17 00:28:35.309303 | orchestrator |
2025-04-17 00:28:35.309583 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-04-17 00:28:35.309973 | orchestrator | Thursday 17 April 2025 00:28:35 +0000 (0:00:03.496) 0:00:03.867 ********
2025-04-17 00:28:35.395503 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-04-17 00:28:35.437803 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-04-17 00:28:35.437938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-04-17 00:28:35.437974 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-04-17 00:28:35.438103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-17 00:28:35.438522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-17 00:28:35.438950 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-04-17 00:28:35.439125 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-04-17 00:28:35.444042 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-17 00:28:35.481521 | orchestrator | skipping: [testbed-node-4] =>
(item=testbed-node-3)  2025-04-17 00:28:35.481744 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-17 00:28:35.482212 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-04-17 00:28:35.484066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-17 00:28:35.485541 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-04-17 00:28:35.485577 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-04-17 00:28:35.485641 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-17 00:28:35.485669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-17 00:28:35.718750 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:28:35.718918 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-04-17 00:28:35.722365 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-04-17 00:28:35.723274 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:28:35.723307 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-17 00:28:35.723324 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-17 00:28:35.723346 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-17 00:28:35.725114 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-17 00:28:35.725148 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-04-17 00:28:35.725679 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-17 00:28:35.726538 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-17 00:28:35.727697 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-17 00:28:35.728182 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-04-17 00:28:35.729017 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-17 00:28:35.729440 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-17 00:28:35.730562 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-17 00:28:35.731147 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-17 00:28:35.731711 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-17 00:28:35.733072 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-17 00:28:35.734211 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-17 00:28:35.734667 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-17 00:28:35.734702 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:28:35.735710 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-17 00:28:35.736306 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-17 00:28:35.736740 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-17 00:28:35.737508 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-17 00:28:35.738270 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-17 00:28:35.738645 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-17 00:28:35.739106 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-17 00:28:35.739637 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:28:35.740004 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-17 00:28:35.740332 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-17 00:28:35.740713 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:28:35.741296 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-17 00:28:35.741553 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-17 00:28:35.741976 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:28:35.746207 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-17 00:28:35.747428 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-17 00:28:35.747916 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:28:35.747955 | orchestrator | 2025-04-17 00:28:35.748449 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-04-17 00:28:35.749197 | orchestrator | 2025-04-17 00:28:35.749694 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-04-17 00:28:35.752429 | orchestrator | Thursday 17 April 2025 00:28:35 +0000 (0:00:00.412) 0:00:04.279 ******** 2025-04-17 00:28:35.788880 | orchestrator | ok: [testbed-manager] 2025-04-17 00:28:35.813381 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:28:35.836434 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:28:35.860008 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:28:35.912188 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:28:35.912413 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:28:35.913427 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:28:35.913984 | orchestrator | 2025-04-17 00:28:35.914285 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-04-17 00:28:35.914459 | orchestrator | Thursday 17 April 2025 00:28:35 +0000 (0:00:00.195) 0:00:04.475 ******** 2025-04-17 00:28:37.083515 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:28:37.084352 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:28:37.084398 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:28:37.085803 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:28:37.085841 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:28:37.086362 | orchestrator | ok: [testbed-manager] 2025-04-17 00:28:37.087290 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:28:37.087859 | orchestrator | 2025-04-17 00:28:37.089330 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-04-17 00:28:37.089563 | orchestrator | Thursday 17 April 2025 00:28:37 +0000 (0:00:01.170) 0:00:05.645 ******** 2025-04-17 00:28:38.265682 | orchestrator | ok: [testbed-manager] 2025-04-17 00:28:38.265886 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:28:38.266782 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:28:38.267989 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:28:38.268440 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:28:38.269399 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:28:38.270441 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:28:38.271431 | orchestrator | 2025-04-17 00:28:38.272097 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-04-17 00:28:38.272443 | orchestrator | Thursday 17 April 2025 00:28:38 +0000 (0:00:01.180) 0:00:06.826 ******** 2025-04-17 00:28:38.533446 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 00:28:38.533751 | orchestrator | 2025-04-17 00:28:38.533788 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-04-17 00:28:40.507656 | orchestrator | Thursday 17 April 2025 00:28:38 +0000 (0:00:00.270) 0:00:07.096 ******** 2025-04-17 00:28:40.508569 | orchestrator | changed: [testbed-manager] 2025-04-17 00:28:40.509961 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:28:40.509996 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:28:40.513260 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:28:40.513704 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:28:40.513729 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:28:40.513749 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:28:40.514687 | orchestrator | 2025-04-17 00:28:40.515191 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-04-17 00:28:40.516455 | orchestrator | Thursday 17 April 2025 00:28:40 +0000 (0:00:01.971) 0:00:09.067 ******** 2025-04-17 00:28:40.575751 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:28:40.734879 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 00:28:40.735745 | orchestrator | 2025-04-17 00:28:40.736494 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-04-17 00:28:40.739243 | orchestrator | Thursday 17 April 2025 00:28:40 +0000 (0:00:00.228) 0:00:09.296 ******** 2025-04-17 00:28:41.672876 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:28:41.675079 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:28:41.675333 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:28:41.676413 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:28:41.677188 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:28:41.678126 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:28:41.678721 | orchestrator | 2025-04-17 00:28:41.679727 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-04-17 00:28:41.680280 | orchestrator | Thursday 17 April 2025 00:28:41 +0000 (0:00:00.938) 0:00:10.234 ******** 2025-04-17 00:28:41.738811 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:28:42.244767 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:28:42.244964 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:28:42.246781 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:28:42.247680 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:28:42.248558 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:28:42.249367 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:28:42.250472 | orchestrator | 2025-04-17 00:28:42.251729 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-04-17 00:28:42.253272 | orchestrator | Thursday 17 April 2025 00:28:42 +0000 (0:00:00.572) 0:00:10.806 ******** 2025-04-17 00:28:42.341432 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:28:42.363043 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:28:42.384560 | 
orchestrator | skipping: [testbed-node-5] 2025-04-17 00:28:42.680070 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:28:42.681402 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:28:42.682999 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:28:42.683791 | orchestrator | ok: [testbed-manager] 2025-04-17 00:28:42.684864 | orchestrator | 2025-04-17 00:28:42.685892 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-04-17 00:28:42.686447 | orchestrator | Thursday 17 April 2025 00:28:42 +0000 (0:00:00.432) 0:00:11.239 ******** 2025-04-17 00:28:42.746915 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:28:42.771273 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:28:42.792933 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:28:42.820259 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:28:42.885292 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:28:42.885498 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:28:42.887142 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:28:42.887563 | orchestrator | 2025-04-17 00:28:42.888908 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-04-17 00:28:42.889360 | orchestrator | Thursday 17 April 2025 00:28:42 +0000 (0:00:00.207) 0:00:11.447 ******** 2025-04-17 00:28:43.175037 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 00:28:43.176101 | orchestrator | 2025-04-17 00:28:43.177036 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-04-17 00:28:43.178117 | orchestrator | Thursday 17 April 2025 00:28:43 +0000 (0:00:00.289) 0:00:11.736 ******** 2025-04-17 00:28:43.470320 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 00:28:43.470798 | orchestrator | 2025-04-17 00:28:43.471990 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-04-17 00:28:43.475820 | orchestrator | Thursday 17 April 2025 00:28:43 +0000 (0:00:00.295) 0:00:12.032 ******** 2025-04-17 00:28:44.671816 | orchestrator | ok: [testbed-manager] 2025-04-17 00:28:44.672579 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:28:44.673556 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:28:44.675636 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:28:44.676055 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:28:44.676547 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:28:44.677881 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:28:44.678722 | orchestrator | 2025-04-17 00:28:44.679204 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-04-17 00:28:44.679899 | orchestrator | Thursday 17 April 2025 00:28:44 +0000 (0:00:01.199) 0:00:13.231 ******** 2025-04-17 00:28:44.742188 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:28:44.767472 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:28:44.789479 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:28:44.815715 | orchestrator | skipping: 
[testbed-node-5] 2025-04-17 00:28:44.873428 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:28:44.874775 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:28:44.875288 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:28:44.876331 | orchestrator | 2025-04-17 00:28:44.877004 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-04-17 00:28:44.877650 | orchestrator | Thursday 17 April 2025 00:28:44 +0000 (0:00:00.203) 0:00:13.435 ******** 2025-04-17 00:28:45.396258 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:28:45.396400 | orchestrator | ok: [testbed-manager] 2025-04-17 00:28:45.396674 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:28:45.400103 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:28:45.402073 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:28:45.403944 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:28:45.403954 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:28:45.403961 | orchestrator | 2025-04-17 00:28:45.403969 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-04-17 00:28:45.404106 | orchestrator | Thursday 17 April 2025 00:28:45 +0000 (0:00:00.522) 0:00:13.957 ******** 2025-04-17 00:28:45.524765 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:28:45.548423 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:28:45.573921 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:28:45.645072 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:28:45.647100 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:28:45.648557 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:28:45.653277 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:28:45.655094 | orchestrator | 2025-04-17 00:28:45.655130 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-04-17 00:28:46.202695 | orchestrator | Thursday 17 April 2025 00:28:45 +0000 (0:00:00.248) 0:00:14.206 ******** 2025-04-17 00:28:46.202841 | orchestrator | ok: [testbed-manager] 2025-04-17 00:28:46.205508 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:28:46.206577 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:28:46.206659 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:28:46.206684 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:28:46.207338 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:28:46.208099 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:28:46.208661 | orchestrator | 2025-04-17 00:28:46.209191 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-04-17 00:28:46.209884 | orchestrator | Thursday 17 April 2025 00:28:46 +0000 (0:00:00.556) 0:00:14.763 ******** 2025-04-17 00:28:47.249470 | orchestrator | ok: [testbed-manager] 2025-04-17 00:28:47.250102 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:28:47.250147 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:28:47.250244 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:28:47.250453 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:28:47.253638 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:28:47.253750 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:28:47.254077 | orchestrator | 2025-04-17 00:28:47.254437 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-04-17 00:28:47.254838 | orchestrator | Thursday 17 April 
2025 00:28:47 +0000 (0:00:01.047) 0:00:15.810 ******** 2025-04-17 00:28:48.347947 | orchestrator | ok: [testbed-manager] 2025-04-17 00:28:48.348518 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:28:48.349357 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:28:48.350076 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:28:48.350677 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:28:48.351700 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:28:48.352150 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:28:48.353123 | orchestrator | 2025-04-17 00:28:48.353372 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-04-17 00:28:48.354107 | orchestrator | Thursday 17 April 2025 00:28:48 +0000 (0:00:01.097) 0:00:16.908 ******** 2025-04-17 00:28:48.637672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 00:28:48.637903 | orchestrator | 2025-04-17 00:28:48.638410 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-04-17 00:28:48.641714 | orchestrator | Thursday 17 April 2025 00:28:48 +0000 (0:00:00.289) 0:00:17.197 ******** 2025-04-17 00:28:48.709429 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:28:50.058961 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:28:50.060111 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:28:50.060950 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:28:50.065030 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:28:50.065438 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:28:50.065882 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:28:50.066489 | orchestrator | 2025-04-17 00:28:50.067014 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-04-17 00:28:50.067657 | orchestrator | Thursday 17 April 2025 00:28:50 +0000 (0:00:01.423) 0:00:18.620 ******** 2025-04-17 00:28:50.133418 | orchestrator | ok: [testbed-manager] 2025-04-17 00:28:50.163084 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:28:50.186898 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:28:50.213195 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:28:50.282255 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:28:50.282817 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:28:50.282859 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:28:50.283337 | orchestrator | 2025-04-17 00:28:50.286301 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-04-17 00:28:50.355819 | orchestrator | Thursday 17 April 2025 00:28:50 +0000 (0:00:00.222) 0:00:18.843 ******** 2025-04-17 00:28:50.355898 | orchestrator | ok: [testbed-manager] 2025-04-17 00:28:50.379484 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:28:50.399486 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:28:50.422704 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:28:50.482174 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:28:50.483479 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:28:50.487070 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:28:50.563196 | orchestrator | 2025-04-17 00:28:50.563283 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-04-17 00:28:50.563299 | 
orchestrator | Thursday 17 April 2025 00:28:50 +0000 (0:00:00.201) 0:00:19.044 ******** 2025-04-17 00:28:50.563328 | orchestrator | ok: [testbed-manager] 2025-04-17 00:28:50.590153 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:28:50.615460 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:28:50.644883 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:28:50.700016 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:28:50.700674 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:28:50.701697 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:28:50.704102 | orchestrator | 2025-04-17 00:28:50.957375 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-04-17 00:28:50.957497 | orchestrator | Thursday 17 April 2025 00:28:50 +0000 (0:00:00.217) 0:00:19.262 ******** 2025-04-17 00:28:50.957533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 00:28:50.957664 | orchestrator | 2025-04-17 00:28:50.960825 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-04-17 00:28:50.961785 | orchestrator | Thursday 17 April 2025 00:28:50 +0000 (0:00:00.253) 0:00:19.515 ******** 2025-04-17 00:28:51.514727 | orchestrator | ok: [testbed-manager] 2025-04-17 00:28:51.516303 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:28:51.516646 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:28:51.517802 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:28:51.518176 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:28:51.518872 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:28:51.519506 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:28:51.520140 | orchestrator | 2025-04-17 00:28:51.520738 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-04-17 00:28:51.521285 | orchestrator | Thursday 17 April 2025 00:28:51 +0000 (0:00:00.558) 0:00:20.074 ******** 2025-04-17 00:28:51.583862 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:28:51.610089 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:28:51.627784 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:28:51.653043 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:28:51.710683 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:28:51.710838 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:28:51.711770 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:28:51.713006 | orchestrator | 2025-04-17 00:28:51.713912 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-04-17 00:28:51.714643 | orchestrator | Thursday 17 April 2025 00:28:51 +0000 (0:00:00.196) 0:00:20.271 ******** 2025-04-17 00:28:52.744941 | orchestrator | changed: [testbed-manager] 2025-04-17 00:28:52.745123 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:28:52.746287 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:28:52.746822 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:28:52.747678 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:28:52.748677 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:28:52.749298 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:28:52.750070 | orchestrator | 2025-04-17 00:28:52.750761 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] 
********************* 2025-04-17 00:28:52.751488 | orchestrator | Thursday 17 April 2025 00:28:52 +0000 (0:00:01.034) 0:00:21.305 ******** 2025-04-17 00:28:53.286184 | orchestrator | ok: [testbed-manager] 2025-04-17 00:28:53.286362 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:28:53.286651 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:28:53.286681 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:28:53.286701 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:28:53.287812 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:28:53.288491 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:28:53.289415 | orchestrator | 2025-04-17 00:28:53.290327 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-04-17 00:28:53.290932 | orchestrator | Thursday 17 April 2025 00:28:53 +0000 (0:00:00.537) 0:00:21.843 ******** 2025-04-17 00:28:54.381208 | orchestrator | ok: [testbed-manager] 2025-04-17 00:28:54.381495 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:28:54.382142 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:28:54.383095 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:28:54.383408 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:28:54.384380 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:28:54.385452 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:28:54.385564 | orchestrator | 2025-04-17 00:28:54.386442 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-04-17 00:28:54.386965 | orchestrator | Thursday 17 April 2025 00:28:54 +0000 (0:00:01.096) 0:00:22.940 ******** 2025-04-17 00:29:07.716467 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:29:07.716757 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:29:07.716785 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:29:07.716807 | orchestrator | changed: [testbed-manager] 2025-04-17 00:29:07.717513 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:29:07.719794 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:29:07.720357 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:29:07.721539 | orchestrator | 2025-04-17 00:29:07.722888 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-04-17 00:29:07.723705 | orchestrator | Thursday 17 April 2025 00:29:07 +0000 (0:00:13.329) 0:00:36.270 ******** 2025-04-17 00:29:07.767931 | orchestrator | ok: [testbed-manager] 2025-04-17 00:29:07.816466 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:29:07.840199 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:29:07.862537 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:29:07.916546 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:29:07.917207 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:29:07.918131 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:29:07.920603 | orchestrator | 2025-04-17 00:29:07.921680 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-04-17 00:29:07.922517 | orchestrator | Thursday 17 April 2025 00:29:07 +0000 (0:00:00.208) 0:00:36.478 ******** 2025-04-17 00:29:07.994073 | orchestrator | ok: [testbed-manager] 2025-04-17 00:29:08.011699 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:29:08.041057 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:29:08.061155 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:29:08.116107 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:29:08.116890 | orchestrator | ok: [testbed-node-1] 2025-04-17 
00:29:08.116952 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:29:08.118331 | orchestrator | 2025-04-17 00:29:08.118801 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-04-17 00:29:08.119298 | orchestrator | Thursday 17 April 2025 00:29:08 +0000 (0:00:00.199) 0:00:36.678 ******** 2025-04-17 00:29:08.199013 | orchestrator | ok: [testbed-manager] 2025-04-17 00:29:08.223535 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:29:08.248703 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:29:08.271965 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:29:08.354889 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:29:08.355919 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:29:08.356989 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:29:08.358136 | orchestrator | 2025-04-17 00:29:08.358783 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-04-17 00:29:08.361301 | orchestrator | Thursday 17 April 2025 00:29:08 +0000 (0:00:00.237) 0:00:36.915 ******** 2025-04-17 00:29:08.663083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 00:29:08.663563 | orchestrator | 2025-04-17 00:29:08.670970 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-04-17 00:29:08.673340 | orchestrator | Thursday 17 April 2025 00:29:08 +0000 (0:00:00.309) 0:00:37.224 ******** 2025-04-17 00:29:10.069839 | orchestrator | ok: [testbed-manager] 2025-04-17 00:29:10.071284 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:29:10.071744 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:29:10.072498 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:29:10.074878 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:29:10.075550 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:29:10.076489 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:29:10.077051 | orchestrator | 2025-04-17 00:29:10.078690 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-04-17 00:29:11.123809 | orchestrator | Thursday 17 April 2025 00:29:10 +0000 (0:00:01.405) 0:00:38.630 ******** 2025-04-17 00:29:11.123987 | orchestrator | changed: [testbed-manager] 2025-04-17 00:29:11.124081 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:29:11.124099 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:29:11.127273 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:29:11.127779 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:29:11.128487 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:29:11.129071 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:29:11.130331 | orchestrator | 2025-04-17 00:29:11.130707 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-04-17 00:29:11.131240 | orchestrator | Thursday 17 April 2025 00:29:11 +0000 (0:00:01.051) 0:00:39.682 ******** 2025-04-17 00:29:11.897715 | orchestrator | ok: [testbed-manager] 2025-04-17 00:29:11.897948 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:29:11.898498 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:29:11.899482 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:29:11.900051 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:29:11.901030 | orchestrator | ok: 
[testbed-node-2] 2025-04-17 00:29:11.901676 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:29:11.902424 | orchestrator | 2025-04-17 00:29:11.903156 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-04-17 00:29:11.903656 | orchestrator | Thursday 17 April 2025 00:29:11 +0000 (0:00:00.775) 0:00:40.457 ******** 2025-04-17 00:29:12.179921 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 00:29:12.180434 | orchestrator | 2025-04-17 00:29:12.181584 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-04-17 00:29:12.182503 | orchestrator | Thursday 17 April 2025 00:29:12 +0000 (0:00:00.283) 0:00:40.741 ******** 2025-04-17 00:29:13.131070 | orchestrator | changed: [testbed-manager] 2025-04-17 00:29:13.131242 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:29:13.131989 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:29:13.133078 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:29:13.134263 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:29:13.134414 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:29:13.135461 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:29:13.136024 | orchestrator | 2025-04-17 00:29:13.136749 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-04-17 00:29:13.137335 | orchestrator | Thursday 17 April 2025 00:29:13 +0000 (0:00:00.950) 0:00:41.692 ******** 2025-04-17 00:29:13.199789 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:29:13.227953 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:29:13.263437 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:29:13.286837 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:29:13.427555 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:29:13.428077 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:29:13.428122 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:29:13.428643 | orchestrator | 2025-04-17 00:29:13.429160 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-04-17 00:29:13.430072 | orchestrator | Thursday 17 April 2025 00:29:13 +0000 (0:00:00.296) 0:00:41.988 ******** 2025-04-17 00:29:24.675107 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:29:24.676231 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:29:24.676269 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:29:24.676285 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:29:24.676308 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:29:24.677364 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:29:24.677888 | orchestrator | changed: [testbed-manager] 2025-04-17 00:29:24.679744 | orchestrator | 2025-04-17 00:29:24.680326 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-04-17 00:29:24.680707 | orchestrator | Thursday 17 April 2025 00:29:24 +0000 (0:00:11.242) 0:00:53.231 ******** 2025-04-17 00:29:25.654321 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:29:25.654549 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:29:25.658196 | orchestrator | ok: [testbed-manager] 2025-04-17 00:29:25.658777 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:29:25.658811 | 
2025-04-17 00:29:13.429160 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-04-17 00:29:13.430072 | orchestrator | Thursday 17 April 2025 00:29:13 +0000 (0:00:00.296) 0:00:41.988 ********
2025-04-17 00:29:24.675107 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:29:24.676231 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:29:24.676269 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:29:24.676285 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:29:24.676308 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:29:24.677364 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:29:24.677888 | orchestrator | changed: [testbed-manager]
2025-04-17 00:29:24.679744 | orchestrator |
2025-04-17 00:29:24.680326 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-04-17 00:29:24.680707 | orchestrator | Thursday 17 April 2025 00:29:24 +0000 (0:00:11.242) 0:00:53.231 ********
2025-04-17 00:29:25.654321 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:29:25.654549 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:29:25.658196 | orchestrator | ok: [testbed-manager]
2025-04-17 00:29:25.658777 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:29:25.658811 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:29:25.658827 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:29:25.658850 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:29:25.659658 | orchestrator |
2025-04-17 00:29:25.660227 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-04-17 00:29:25.660670 | orchestrator | Thursday 17 April 2025 00:29:25 +0000 (0:00:00.984) 0:00:54.216 ********
2025-04-17 00:29:26.532914 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:29:26.533150 | orchestrator | ok: [testbed-manager]
2025-04-17 00:29:26.534514 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:29:26.534994 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:29:26.535910 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:29:26.536860 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:29:26.537852 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:29:26.538172 | orchestrator |
2025-04-17 00:29:26.538778 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-04-17 00:29:26.539424 | orchestrator | Thursday 17 April 2025 00:29:26 +0000 (0:00:00.877) 0:00:55.093 ********
2025-04-17 00:29:26.605933 | orchestrator | ok: [testbed-manager]
2025-04-17 00:29:26.632757 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:29:26.658174 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:29:26.686840 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:29:26.771196 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:29:26.771707 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:29:26.772700 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:29:26.773606 | orchestrator |
2025-04-17 00:29:26.774233 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-04-17 00:29:26.774916 | orchestrator | Thursday 17 April 2025 00:29:26 +0000 (0:00:00.239) 0:00:55.333 ********
2025-04-17 00:29:26.855683 | orchestrator | ok: [testbed-manager]
2025-04-17 00:29:26.888518 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:29:26.908286 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:29:26.931888 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:29:26.998832 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:29:26.999749 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:29:27.001068 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:29:27.002682 | orchestrator |
2025-04-17 00:29:27.003296 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-04-17 00:29:27.004528 | orchestrator | Thursday 17 April 2025 00:29:26 +0000 (0:00:00.226) 0:00:55.560 ********
2025-04-17 00:29:27.284818 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 00:29:27.285229 | orchestrator |
2025-04-17 00:29:27.286289 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-04-17 00:29:27.289886 | orchestrator | Thursday 17 April 2025 00:29:27 +0000 (0:00:00.286) 0:00:55.846 ********
2025-04-17 00:29:28.825146 | orchestrator | ok: [testbed-manager]
2025-04-17 00:29:28.826255 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:29:28.828296 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:29:28.829491 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:29:28.830697 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:29:28.832314 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:29:28.832523 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:29:28.833421 | orchestrator |
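On Ubuntu 24.04, needrestart pops up an interactive dialog after library upgrades, which would stall the unattended upgrade further below; the "Set needrestart mode" task that follows therefore switches it to a non-interactive mode before any packages are touched. One common way to do that, assuming the stock conffile path and the automatic restart mode 'a' (these are assumptions, not necessarily what the role writes):

    - name: Set needrestart mode
      ansible.builtin.lineinfile:
        path: /etc/needrestart/needrestart.conf
        regexp: '^[#\s]*\$nrconf\{restart\}'
        line: "$nrconf{restart} = 'a';"  # 'a' = restart affected services automatically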
2025-04-17 00:29:28.834786 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-04-17 00:29:28.835471 | orchestrator | Thursday 17 April 2025 00:29:28 +0000 (0:00:01.539) 0:00:57.385 ********
2025-04-17 00:29:29.418578 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:29:29.419784 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:29:29.419835 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:29:29.420707 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:29:29.421329 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:29:29.422296 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:29:29.422699 | orchestrator | changed: [testbed-manager]
2025-04-17 00:29:29.423352 | orchestrator |
2025-04-17 00:29:29.424024 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-04-17 00:29:29.424575 | orchestrator | Thursday 17 April 2025 00:29:29 +0000 (0:00:00.593) 0:00:57.979 ********
2025-04-17 00:29:29.490933 | orchestrator | ok: [testbed-manager]
2025-04-17 00:29:29.516804 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:29:29.551833 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:29:29.572222 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:29:29.625992 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:29:29.626673 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:29:29.627246 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:29:29.629057 | orchestrator |
2025-04-17 00:29:29.629404 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-04-17 00:29:29.630917 | orchestrator | Thursday 17 April 2025 00:29:29 +0000 (0:00:00.208) 0:00:58.188 ********
2025-04-17 00:29:30.727063 | orchestrator | ok: [testbed-manager]
2025-04-17 00:29:30.727361 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:29:30.727672 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:29:30.729144 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:29:30.730299 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:29:30.730761 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:29:30.731628 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:29:30.732330 | orchestrator |
2025-04-17 00:29:30.732863 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-04-17 00:29:30.733361 | orchestrator | Thursday 17 April 2025 00:29:30 +0000 (0:00:01.098) 0:00:59.287 ********
2025-04-17 00:29:45.842703 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:29:45.843122 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:29:45.843159 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:29:45.843182 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:29:45.843705 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:29:45.844306 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:29:45.844717 | orchestrator | ok: [testbed-manager]
2025-04-17 00:29:45.846655 | orchestrator |
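The cache refresh and download steps above, plus the upgrade that follows, are Debian's usual staged pattern: refresh the index only when it is older than apt_cache_valid_time, pre-fetch the packages, then apply them so the actual upgrade window stays short (here the download took ~15 s, the upgrade itself about a minute). A sketch of that pattern; the one-hour cache validity and the apt-get download step are illustrative assumptions, not the role's exact implementation:

    - name: Update package cache
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: 3600  # assumed value; skips the refresh if the cache is fresh

    - name: Download upgrade packages
      ansible.builtin.command: apt-get --download-only --yes dist-upgrade  # pre-fetch only

    - name: Upgrade packages
      ansible.builtin.apt:
        upgrade: dist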
2025-04-17 00:29:45.847167 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-04-17 00:29:45.848079 | orchestrator | Thursday 17 April 2025 00:29:45 +0000 (0:00:15.112) 0:01:14.399 ********
2025-04-17 00:30:47.872334 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:30:47.873322 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:30:47.873347 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:30:47.873361 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:30:47.874410 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:30:47.874428 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:30:47.874442 | orchestrator | changed: [testbed-manager]
2025-04-17 00:30:47.880526 | orchestrator |
2025-04-17 00:30:47.880614 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-04-17 00:31:27.317459 | orchestrator | Thursday 17 April 2025 00:30:47 +0000 (0:01:02.029) 0:02:16.428 ********
2025-04-17 00:31:27.317639 | orchestrator | ok: [testbed-manager]
2025-04-17 00:31:27.317712 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:31:27.317720 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:31:27.317725 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:31:27.317730 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:31:27.317737 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:32:48.087569 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:32:48.087843 | orchestrator |
2025-04-17 00:32:48.087873 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-04-17 00:32:48.087890 | orchestrator | Thursday 17 April 2025 00:31:27 +0000 (0:00:39.443) 0:02:55.872 ********
2025-04-17 00:32:48.087924 | orchestrator | changed: [testbed-manager]
2025-04-17 00:32:48.088007 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:32:48.088030 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:32:48.089439 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:32:48.089474 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:32:48.089959 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:32:48.090592 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:32:48.090980 | orchestrator |
2025-04-17 00:32:48.091600 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-04-17 00:32:48.092249 | orchestrator | Thursday 17 April 2025 00:32:48 +0000 (0:01:20.773) 0:04:16.645 ********
2025-04-17 00:32:49.677193 | orchestrator | ok: [testbed-manager]
2025-04-17 00:32:49.677463 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:32:49.678751 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:32:49.680135 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:32:49.681909 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:32:49.683324 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:32:49.683826 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:32:49.685815 | orchestrator |
2025-04-17 00:32:49.686436 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-04-17 00:32:49.687381 | orchestrator | Thursday 17 April 2025 00:32:49 +0000 (0:00:01.590) 0:04:18.236 ********
2025-04-17 00:33:08.156416 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:33:08.156809 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:33:08.156977 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:33:08.157016 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:33:08.158962 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:33:08.162296 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:33:08.162867 | orchestrator | changed: [testbed-manager]
2025-04-17 00:33:08.162897 | orchestrator |
2025-04-17 00:33:08.162921 | orchestrator | TASK
[osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-04-17 00:33:08.164085 | orchestrator | Thursday 17 April 2025 00:33:08 +0000 (0:00:18.473) 0:04:36.710 ******** 2025-04-17 00:33:08.506158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-04-17 00:33:08.507072 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-04-17 00:33:08.507178 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-04-17 00:33:08.507783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-04-17 00:33:08.508376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-04-17 00:33:08.511395 | orchestrator | 2025-04-17 00:33:08.561449 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-04-17 00:33:08.561596 | orchestrator | Thursday 17 April 2025 00:33:08 +0000 (0:00:00.357) 0:04:37.067 ******** 2025-04-17 00:33:08.561643 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-17 00:33:08.589317 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:33:08.589575 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-17 00:33:08.616701 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:33:08.659593 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-17 00:33:08.659790 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:33:08.680270 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-17 00:33:08.680397 | orchestrator | skipping: [testbed-node-5] 2025-04-17 
00:33:09.175996 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-17 00:33:09.227075 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-17 00:33:09.227212 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-17 00:33:09.227230 | orchestrator | 2025-04-17 00:33:09.227288 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-04-17 00:33:09.227308 | orchestrator | Thursday 17 April 2025 00:33:09 +0000 (0:00:00.669) 0:04:37.736 ******** 2025-04-17 00:33:09.227341 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-17 00:33:09.227417 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-17 00:33:09.227818 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-17 00:33:09.228326 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-17 00:33:09.228615 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-17 00:33:09.228906 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-17 00:33:09.263398 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-17 00:33:09.263608 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-17 00:33:09.264889 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-17 00:33:09.265199 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-17 00:33:09.265545 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-17 00:33:09.268058 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-17 00:33:09.268902 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-17 00:33:09.268929 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-17 00:33:09.268951 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-17 00:33:09.269264 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-17 00:33:09.269488 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-17 00:33:09.274323 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-17 00:33:09.308636 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-17 00:33:09.308885 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:33:09.309223 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-17 00:33:09.309965 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-17 00:33:09.310182 | orchestrator 
| skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-17 00:33:09.310884 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-17 00:33:09.311097 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-17 00:33:09.311449 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-17 00:33:09.311930 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-17 00:33:09.312403 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-17 00:33:09.312814 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-17 00:33:09.313116 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-17 00:33:09.313611 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-17 00:33:09.314100 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-17 00:33:09.314467 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-17 00:33:09.314664 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-17 00:33:09.346941 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:33:09.347150 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-17 00:33:09.347174 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-17 00:33:09.347195 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-17 00:33:09.382966 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-17 00:33:09.383098 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-17 00:33:09.383133 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-17 00:33:09.383232 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:33:09.383405 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-17 00:33:14.814405 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:33:14.815145 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-17 00:33:14.815905 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-17 00:33:14.817598 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-17 00:33:14.818264 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-17 00:33:14.819121 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-17 00:33:14.819897 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-17 00:33:14.820865 | orchestrator | changed: [testbed-node-2] => 
(item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-04-17 00:33:14.821564 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-04-17 00:33:14.822651 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-04-17 00:33:14.823458 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-04-17 00:33:14.824149 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-04-17 00:33:14.824905 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-04-17 00:33:14.825585 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-04-17 00:33:14.826229 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-04-17 00:33:14.827026 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-04-17 00:33:14.828240 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-04-17 00:33:14.828500 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-04-17 00:33:14.828786 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-04-17 00:33:14.829264 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-04-17 00:33:14.829679 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-04-17 00:33:14.830159 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-04-17 00:33:14.830412 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-04-17 00:33:14.830870 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-04-17 00:33:14.831328 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-04-17 00:33:14.831809 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-04-17 00:33:14.832269 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-04-17 00:33:14.832664 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-04-17 00:33:14.833083 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-04-17 00:33:14.833486 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-04-17 00:33:14.833875 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-04-17 00:33:14.834331 | orchestrator |
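The per-group batches come from sysctl.yml being included once per key (elasticsearch, rabbitmq, generic, compute, k3s_node); each batch only changes hosts in the matching group, which is why the rabbitmq values above were applied on testbed-node-0/1/2 and skipped everywhere else. A minimal sketch of the underlying loop, assuming the role uses ansible.posix.sysctl, with two of the values from this run:

    - name: Set sysctl parameters on rabbitmq
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        sysctl_set: true   # apply immediately, not just write the sysctl file
        state: present
      loop:
        - { name: net.ipv4.tcp_keepalive_time, value: 6 }
        - { name: net.core.somaxconn, value: 4096 }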
2025-04-17 00:33:14.834796 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-04-17 00:33:14.835101 | orchestrator | Thursday 17 April 2025 00:33:14 +0000 (0:00:05.638) 0:04:43.374 ********
2025-04-17 00:33:15.361704 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-04-17 00:33:15.362304 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-04-17 00:33:15.363748 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-04-17 00:33:15.364801 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-04-17 00:33:15.365535 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-04-17 00:33:15.366550 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-04-17 00:33:15.368325 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-04-17 00:33:15.369217 | orchestrator |
2025-04-17 00:33:15.370221 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-04-17 00:33:15.371338 | orchestrator | Thursday 17 April 2025 00:33:15 +0000 (0:00:00.546) 0:04:43.921 ********
2025-04-17 00:33:15.426882 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-04-17 00:33:15.444434 | orchestrator | skipping: [testbed-manager]
2025-04-17 00:33:15.523319 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-04-17 00:33:15.859310 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-04-17 00:33:15.859481 | orchestrator | skipping: [testbed-node-0]
2025-04-17 00:33:15.859884 | orchestrator | skipping: [testbed-node-1]
2025-04-17 00:33:15.860266 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-04-17 00:33:15.860825 | orchestrator | skipping: [testbed-node-2]
2025-04-17 00:33:15.862158 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-04-17 00:33:15.862634 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-04-17 00:33:15.863209 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-04-17 00:33:15.863629 | orchestrator |
2025-04-17 00:33:15.864084 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-04-17 00:33:15.864778 | orchestrator | Thursday 17 April 2025 00:33:15 +0000 (0:00:00.499) 0:04:44.421 ********
2025-04-17 00:33:15.922357 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-04-17 00:33:15.949055 | orchestrator | skipping: [testbed-manager]
2025-04-17 00:33:16.042912 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-04-17 00:33:16.427864 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-04-17 00:33:16.428067 | orchestrator | skipping: [testbed-node-0]
2025-04-17 00:33:16.428210 | orchestrator | skipping: [testbed-node-1]
2025-04-17 00:33:16.428244 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-04-17 00:33:16.428580 | orchestrator | skipping: [testbed-node-2]
2025-04-17 00:33:16.428611 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-04-17 00:33:16.428638 | orchestrator | changed: [testbed-node-4] => (item={'name':
'fs.inotify.max_user_instances', 'value': 1024}) 2025-04-17 00:33:16.429152 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-04-17 00:33:16.429521 | orchestrator | 2025-04-17 00:33:16.429831 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-04-17 00:33:16.430098 | orchestrator | Thursday 17 April 2025 00:33:16 +0000 (0:00:00.568) 0:04:44.989 ******** 2025-04-17 00:33:16.487055 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:33:16.538246 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:33:16.564251 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:33:16.591936 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:33:16.739218 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:33:16.740063 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:33:16.743199 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:33:16.743496 | orchestrator | 2025-04-17 00:33:22.394905 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-04-17 00:33:22.395044 | orchestrator | Thursday 17 April 2025 00:33:16 +0000 (0:00:00.311) 0:04:45.301 ******** 2025-04-17 00:33:22.395072 | orchestrator | ok: [testbed-manager] 2025-04-17 00:33:22.395323 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:33:22.395906 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:33:22.396392 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:33:22.400519 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:33:22.401109 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:33:22.401695 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:33:22.402201 | orchestrator | 2025-04-17 00:33:22.402677 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-04-17 00:33:22.403291 | orchestrator | Thursday 17 April 2025 00:33:22 +0000 (0:00:05.655) 0:04:50.956 ******** 2025-04-17 00:33:22.469000 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-04-17 00:33:22.469174 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-04-17 00:33:22.498277 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:33:22.538012 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:33:22.577841 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-04-17 00:33:22.578175 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:33:22.578320 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-04-17 00:33:22.611702 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:33:22.612778 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-04-17 00:33:22.612821 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-04-17 00:33:22.679956 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:33:22.680159 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:33:22.681495 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-04-17 00:33:22.681933 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:33:22.682351 | orchestrator | 2025-04-17 00:33:22.682710 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-04-17 00:33:22.683190 | orchestrator | Thursday 17 April 2025 00:33:22 +0000 (0:00:00.285) 0:04:51.241 ******** 2025-04-17 00:33:23.737026 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-04-17 00:33:23.737239 | orchestrator | ok: [testbed-node-3] => (item=cron) 
2025-04-17 00:33:23.737938 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-04-17 00:33:23.738871 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-04-17 00:33:23.739541 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-04-17 00:33:23.743345 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-04-17 00:33:23.743676 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-04-17 00:33:23.744278 | orchestrator |
2025-04-17 00:33:23.745216 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-04-17 00:33:23.746345 | orchestrator | Thursday 17 April 2025 00:33:23 +0000 (0:00:01.052) 0:04:52.293 ********
2025-04-17 00:33:24.247955 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 00:33:24.248176 | orchestrator |
2025-04-17 00:33:24.251923 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-04-17 00:33:24.252552 | orchestrator | Thursday 17 April 2025 00:33:24 +0000 (0:00:00.514) 0:04:52.807 ********
2025-04-17 00:33:25.392621 | orchestrator | ok: [testbed-manager]
2025-04-17 00:33:25.392920 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:33:25.392963 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:33:25.392987 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:33:25.393334 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:33:25.394282 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:33:25.394883 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:33:25.395389 | orchestrator |
2025-04-17 00:33:25.396181 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-04-17 00:33:25.396598 | orchestrator | Thursday 17 April 2025 00:33:25 +0000 (0:00:01.144) 0:04:53.952 ********
2025-04-17 00:33:25.974890 | orchestrator | ok: [testbed-manager]
2025-04-17 00:33:25.975162 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:33:25.976193 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:33:25.977224 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:33:25.977833 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:33:25.978818 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:33:25.979167 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:33:25.979564 | orchestrator |
2025-04-17 00:33:25.980151 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-04-17 00:33:25.980616 | orchestrator | Thursday 17 April 2025 00:33:25 +0000 (0:00:00.583) 0:04:54.535 ********
2025-04-17 00:33:26.578632 | orchestrator | changed: [testbed-manager]
2025-04-17 00:33:26.579614 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:33:26.580985 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:33:26.581940 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:33:26.583140 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:33:26.584647 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:33:26.585389 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:33:26.586419 | orchestrator |
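On Debian-family systems the dynamic "motd news" is governed by /etc/default/motd-news, which is why the role stats the file first and only then disables the mechanism. A sketch of one way to implement the disable step, assuming the switch is flipped in that file (the register name is hypothetical):

    - name: Disable the dynamic motd-news service
      ansible.builtin.lineinfile:
        path: /etc/default/motd-news
        regexp: '^ENABLED='
        line: ENABLED=0
      when: motd_news_file.stat.exists  # hypothetical register from the stat task above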
2025-04-17 00:33:26.587095 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-04-17 00:33:26.588107 | orchestrator | Thursday 17 April 2025 00:33:26 +0000 (0:00:00.603) 0:04:55.138 ********
2025-04-17 00:33:27.139845 | orchestrator | ok: [testbed-manager]
2025-04-17 00:33:27.140271 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:33:27.140884 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:33:27.141407 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:33:27.141439 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:33:27.142413 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:33:27.142860 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:33:27.143450 | orchestrator |
2025-04-17 00:33:27.144149 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-04-17 00:33:27.144608 | orchestrator | Thursday 17 April 2025 00:33:27 +0000 (0:00:00.562) 0:04:55.701 ********
2025-04-17 00:33:28.029024 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744848255.851923, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-17 00:33:28.030321 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744848279.8420284, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-17 00:33:28.030412 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744848260.3626719, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-17 00:33:28.030916 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744848278.8350284, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-17 00:33:28.031015 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744848277.4941692, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False,
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-17 00:33:28.031514 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744848271.461018, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-17 00:33:28.031753 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744848271.1197124, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-17 00:33:28.032307 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744848281.1662729, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-17 00:33:28.032920 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744848195.622079, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-17 00:33:28.033059 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744848208.1591804, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-17 00:33:28.033672 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744848208.360932, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-17 00:33:28.034167 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744848205.6968296, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-17 00:33:28.034317 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744848211.0393176, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-17 00:33:28.034857 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744848201.9606535, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-17 00:33:28.035134 | orchestrator | 2025-04-17 00:33:28.035629 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-04-17 00:33:28.035920 | orchestrator | Thursday 17 April 2025 00:33:28 +0000 (0:00:00.888) 0:04:56.590 ******** 2025-04-17 00:33:29.053985 | orchestrator | changed: [testbed-manager] 2025-04-17 00:33:29.057722 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:33:29.057848 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:33:29.058627 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:33:29.059718 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:33:29.062500 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:33:29.063178 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:33:29.063197 | orchestrator | 2025-04-17 00:33:29.064027 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-04-17 00:33:29.064375 | orchestrator | Thursday 17 April 2025 00:33:29 +0000 (0:00:01.024) 0:04:57.614 ******** 2025-04-17 00:33:30.133284 | orchestrator | changed: [testbed-manager] 2025-04-17 00:33:30.133533 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:33:30.134356 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:33:30.135957 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:33:30.137534 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:33:30.138357 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:33:30.139358 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:33:30.140190 | orchestrator | 2025-04-17 00:33:30.141060 | orchestrator | TASK [osism.commons.motd : 
Configure SSH to print the motd] ********************
2025-04-17 00:33:30.141129 | orchestrator | Thursday 17 April 2025 00:33:30 +0000 (0:00:01.078) 0:04:58.693 ********
2025-04-17 00:33:30.195189 | orchestrator | skipping: [testbed-manager]
2025-04-17 00:33:30.225234 | orchestrator | skipping: [testbed-node-3]
2025-04-17 00:33:30.255561 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:33:30.287862 | orchestrator | skipping: [testbed-node-5]
2025-04-17 00:33:30.319086 | orchestrator | skipping: [testbed-node-0]
2025-04-17 00:33:30.384962 | orchestrator | skipping: [testbed-node-1]
2025-04-17 00:33:30.385213 | orchestrator | skipping: [testbed-node-2]
2025-04-17 00:33:30.385966 | orchestrator |
2025-04-17 00:33:30.386795 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-04-17 00:33:30.387647 | orchestrator | Thursday 17 April 2025 00:33:30 +0000 (0:00:00.254) 0:04:58.947 ********
2025-04-17 00:33:31.112451 | orchestrator | ok: [testbed-manager]
2025-04-17 00:33:31.116400 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:33:31.117548 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:33:31.118096 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:33:31.119839 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:33:31.121272 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:33:31.121694 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:33:31.121751 | orchestrator |
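Since the role ships a static /etc/motd and /etc/issue and strips the pam_motd.so hooks above, sshd must not print the motd a second time on login; the skipped/ok task pair above selects between the two settings. A sketch of the "not print" variant, assuming a plain sshd_config edit with validation (the handler name is hypothetical):

    - name: Configure SSH to not print the motd
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PrintMotd'
        line: PrintMotd no
        validate: /usr/sbin/sshd -t -f %s  # reject the edit if the config would break sshd
      notify: Restart ssh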
2025-04-17 00:33:31.124319 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-04-17 00:33:31.125788 | orchestrator | Thursday 17 April 2025 00:33:31 +0000 (0:00:00.726) 0:04:59.674 ********
2025-04-17 00:33:31.479889 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 00:33:31.480169 | orchestrator |
2025-04-17 00:33:31.480206 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-04-17 00:33:31.480756 | orchestrator | Thursday 17 April 2025 00:33:31 +0000 (0:00:00.366) 0:05:00.040 ********
2025-04-17 00:33:38.975226 | orchestrator | ok: [testbed-manager]
2025-04-17 00:33:38.977814 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:33:38.978896 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:33:38.979018 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:33:38.979101 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:33:38.980206 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:33:38.980512 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:33:38.980996 | orchestrator |
2025-04-17 00:33:38.981648 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-04-17 00:33:38.982213 | orchestrator | Thursday 17 April 2025 00:33:38 +0000 (0:00:07.494) 0:05:07.535 ********
2025-04-17 00:33:40.140829 | orchestrator | ok: [testbed-manager]
2025-04-17 00:33:40.141442 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:33:40.141462 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:33:40.141890 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:33:40.142862 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:33:40.143792 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:33:40.144070 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:33:40.144561 | orchestrator |
2025-04-17 00:33:40.145132 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-04-17 00:33:40.145643 | orchestrator | Thursday 17 April 2025 00:33:40 +0000 (0:00:01.164) 0:05:08.699 ********
2025-04-17 00:33:41.072261 | orchestrator | ok: [testbed-manager]
2025-04-17 00:33:41.075254 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:33:41.075308 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:33:41.075783 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:33:41.075815 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:33:41.075830 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:33:41.075851 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:33:41.076387 | orchestrator |
2025-04-17 00:33:41.077136 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-04-17 00:33:41.077687 | orchestrator | Thursday 17 April 2025 00:33:41 +0000 (0:00:00.932) 0:05:09.632 ********
2025-04-17 00:33:41.444477 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 00:33:41.445797 | orchestrator |
2025-04-17 00:33:41.445821 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-04-17 00:33:41.446670 | orchestrator | Thursday 17 April 2025 00:33:41 +0000 (0:00:00.371) 0:05:10.003 ********
2025-04-17 00:33:49.217962 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:33:49.218450 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:33:49.219915 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:33:49.223529 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:33:49.224654 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:33:49.225485 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:33:49.226910 | orchestrator | changed: [testbed-manager]
2025-04-17 00:33:49.227443 | orchestrator |
2025-04-17 00:33:49.227486 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-04-17 00:33:49.227889 | orchestrator | Thursday 17 April 2025 00:33:49 +0000 (0:00:07.773) 0:05:17.777 ********
2025-04-17 00:33:49.794520 | orchestrator | changed: [testbed-manager]
2025-04-17 00:33:49.794877 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:33:49.794960 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:33:49.795663 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:33:49.796528 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:33:49.796843 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:33:49.797214 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:33:49.797588 | orchestrator |
2025-04-17 00:33:49.797926 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-04-17 00:33:49.798446 | orchestrator | Thursday 17 April 2025 00:33:49 +0000 (0:00:00.576) 0:05:18.354 ********
2025-04-17 00:33:51.738410 | orchestrator | changed: [testbed-manager]
2025-04-17 00:33:51.739520 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:33:51.740126 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:33:51.740165 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:33:51.741933 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:33:51.743486 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:33:51.743863 | orchestrator | changed: [testbed-node-3]
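The smartd role follows the usual install/configure/enable pattern: the smartmontools package, a log directory under /var/log/smartd, a configuration file, then the service (managed in the next task). A sketch of a minimal configuration step; the DEVICESCAN directive and the self-test schedule are illustrative defaults, not the role's actual template:

    - name: Copy smartmontools configuration file
      ansible.builtin.copy:
        dest: /etc/smartd.conf
        content: |
          # Watch all detected disks; run a short self-test every Sunday at 01:00
          DEVICESCAN -a -s (S/../../7/01)
        mode: "0644"
      notify: Restart smartd  # hypothetical handler name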
2025-04-17 00:33:51.743900 | orchestrator |
2025-04-17 00:33:51.744285 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-04-17 00:33:51.744887 | orchestrator | Thursday 17 April 2025 00:33:51 +0000 (0:00:01.944) 0:05:20.298 ********
2025-04-17 00:33:52.737403 | orchestrator | changed: [testbed-manager]
2025-04-17 00:33:52.737622 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:33:52.737973 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:33:52.738217 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:33:52.741181 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:33:52.741495 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:33:52.742092 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:33:52.742129 | orchestrator |
2025-04-17 00:33:52.742553 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-04-17 00:33:52.743105 | orchestrator | Thursday 17 April 2025 00:33:52 +0000 (0:00:01.000) 0:05:21.298 ********
2025-04-17 00:33:52.856485 | orchestrator | ok: [testbed-manager]
2025-04-17 00:33:52.886199 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:33:52.921690 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:33:52.950673 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:33:53.016863 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:33:53.017916 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:33:53.019523 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:33:53.020337 | orchestrator |
2025-04-17 00:33:53.020818 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-04-17 00:33:53.021301 | orchestrator | Thursday 17 April 2025 00:33:53 +0000 (0:00:00.281) 0:05:21.579 ********
2025-04-17 00:33:53.116443 | orchestrator | ok: [testbed-manager]
2025-04-17 00:33:53.152468 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:33:53.199908 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:33:53.252621 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:33:53.317674 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:33:53.317903 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:33:53.318223 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:33:53.318243 | orchestrator |
2025-04-17 00:33:53.318954 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-04-17 00:33:53.319241 | orchestrator | Thursday 17 April 2025 00:33:53 +0000 (0:00:00.299) 0:05:21.879 ********
2025-04-17 00:33:53.425313 | orchestrator | ok: [testbed-manager]
2025-04-17 00:33:53.457281 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:33:53.498095 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:33:53.535887 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:33:53.601669 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:33:53.601968 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:33:53.604989 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:33:53.605655 | orchestrator |
2025-04-17 00:33:53.606478 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-04-17 00:33:53.606919 | orchestrator | Thursday 17 April 2025 00:33:53 +0000 (0:00:00.284) 0:05:22.163 ********
2025-04-17 00:33:59.042355 | orchestrator | ok: [testbed-manager]
2025-04-17 00:33:59.042659 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:33:59.043370 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:33:59.043660 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:33:59.044333 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:33:59.044432 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:33:59.044822 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:33:59.045113 | orchestrator | 2025-04-17 00:33:59.046176 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-04-17 00:33:59.420872 | orchestrator | Thursday 17 April 2025 00:33:59 +0000 (0:00:05.439) 0:05:27.603 ******** 2025-04-17 00:33:59.421022 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 00:33:59.421218 | orchestrator | 2025-04-17 00:33:59.422270 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-04-17 00:33:59.425259 | orchestrator | Thursday 17 April 2025 00:33:59 +0000 (0:00:00.378) 0:05:27.981 ******** 2025-04-17 00:33:59.496012 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-04-17 00:33:59.496612 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-04-17 00:33:59.496649 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-04-17 00:33:59.538311 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-04-17 00:33:59.538615 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:33:59.539352 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-04-17 00:33:59.539637 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-04-17 00:33:59.573850 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:33:59.615365 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:33:59.615596 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-04-17 00:33:59.616177 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-04-17 00:33:59.616929 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-04-17 00:33:59.618618 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-04-17 00:33:59.647158 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:33:59.722402 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:33:59.722622 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-04-17 00:33:59.723716 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-04-17 00:33:59.724514 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:33:59.725102 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-04-17 00:33:59.725482 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-04-17 00:33:59.726195 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:33:59.726821 | orchestrator | 2025-04-17 00:33:59.727338 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-04-17 00:33:59.727727 | orchestrator | Thursday 17 April 2025 00:33:59 +0000 (0:00:00.303) 0:05:28.285 ******** 2025-04-17 00:34:00.100331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 00:34:00.100665 | 
orchestrator | 2025-04-17 00:34:00.101303 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-04-17 00:34:00.102127 | orchestrator | Thursday 17 April 2025 00:34:00 +0000 (0:00:00.376) 0:05:28.662 ******** 2025-04-17 00:34:00.172811 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-04-17 00:34:00.173394 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-04-17 00:34:00.206224 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:34:00.248517 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:34:00.287080 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-04-17 00:34:00.287248 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-04-17 00:34:00.287295 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:34:00.326700 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-04-17 00:34:00.326907 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:34:00.402477 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-04-17 00:34:00.402709 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:34:00.403566 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:34:00.403959 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-04-17 00:34:00.406787 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:34:00.810825 | orchestrator | 2025-04-17 00:34:00.810935 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-04-17 00:34:00.810943 | orchestrator | Thursday 17 April 2025 00:34:00 +0000 (0:00:00.303) 0:05:28.965 ******** 2025-04-17 00:34:00.810962 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 00:34:00.811253 | orchestrator | 2025-04-17 00:34:00.811578 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-04-17 00:34:00.815343 | orchestrator | Thursday 17 April 2025 00:34:00 +0000 (0:00:00.405) 0:05:29.370 ******** 2025-04-17 00:34:33.497517 | orchestrator | changed: [testbed-manager] 2025-04-17 00:34:33.497739 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:34:33.497792 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:34:33.497809 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:34:33.497825 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:34:33.497840 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:34:33.497855 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:34:33.497871 | orchestrator | 2025-04-17 00:34:33.497888 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-04-17 00:34:33.497912 | orchestrator | Thursday 17 April 2025 00:34:33 +0000 (0:00:32.677) 0:06:02.048 ******** 2025-04-17 00:34:40.728330 | orchestrator | changed: [testbed-manager] 2025-04-17 00:34:40.728593 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:34:40.728620 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:34:40.728637 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:34:40.728653 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:34:40.728668 | orchestrator | changed: [testbed-node-4] 2025-04-17 
00:34:40.728684 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:34:40.728706 | orchestrator | 2025-04-17 00:34:40.728907 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-04-17 00:34:40.729159 | orchestrator | Thursday 17 April 2025 00:34:40 +0000 (0:00:07.237) 0:06:09.286 ******** 2025-04-17 00:34:47.628965 | orchestrator | changed: [testbed-manager] 2025-04-17 00:34:47.629450 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:34:47.631146 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:34:47.633048 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:34:47.634353 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:34:47.635557 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:34:47.636407 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:34:47.636863 | orchestrator | 2025-04-17 00:34:47.637750 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-04-17 00:34:47.638655 | orchestrator | Thursday 17 April 2025 00:34:47 +0000 (0:00:06.901) 0:06:16.187 ******** 2025-04-17 00:34:49.282385 | orchestrator | ok: [testbed-manager] 2025-04-17 00:34:49.282697 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:34:49.283266 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:34:49.283787 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:34:49.284535 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:34:49.285944 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:34:49.286184 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:34:49.286648 | orchestrator | 2025-04-17 00:34:49.287081 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-04-17 00:34:49.288444 | orchestrator | Thursday 17 April 2025 00:34:49 +0000 (0:00:01.655) 0:06:17.842 ******** 2025-04-17 00:34:54.607506 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:34:54.607751 | orchestrator | changed: [testbed-manager] 2025-04-17 00:34:54.609766 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:34:54.610311 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:34:54.610366 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:34:54.610812 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:34:54.611218 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:34:54.611975 | orchestrator | 2025-04-17 00:34:54.612658 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-04-17 00:34:54.615033 | orchestrator | Thursday 17 April 2025 00:34:54 +0000 (0:00:05.325) 0:06:23.168 ******** 2025-04-17 00:34:55.017861 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 00:34:55.018565 | orchestrator | 2025-04-17 00:34:55.019909 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-04-17 00:34:55.020949 | orchestrator | Thursday 17 April 2025 00:34:55 +0000 (0:00:00.410) 0:06:23.578 ******** 2025-04-17 00:34:55.731335 | orchestrator | changed: [testbed-manager] 2025-04-17 00:34:55.732173 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:34:55.733615 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:34:55.734513 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:34:55.735374 | orchestrator | changed: [testbed-node-0] 
2025-04-17 00:34:55.736237 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:34:55.736854 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:34:55.737416 | orchestrator | 2025-04-17 00:34:55.738082 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-04-17 00:34:55.738758 | orchestrator | Thursday 17 April 2025 00:34:55 +0000 (0:00:00.714) 0:06:24.293 ******** 2025-04-17 00:34:57.357709 | orchestrator | ok: [testbed-manager] 2025-04-17 00:34:57.358092 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:34:57.358550 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:34:57.359720 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:34:57.360184 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:34:57.361818 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:34:57.361896 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:34:57.361943 | orchestrator | 2025-04-17 00:34:57.362172 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-04-17 00:34:57.362825 | orchestrator | Thursday 17 April 2025 00:34:57 +0000 (0:00:01.625) 0:06:25.918 ******** 2025-04-17 00:34:58.109841 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:34:58.110505 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:34:58.111127 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:34:58.111682 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:34:58.112232 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:34:58.113240 | orchestrator | changed: [testbed-manager] 2025-04-17 00:34:58.113521 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:34:58.114173 | orchestrator | 2025-04-17 00:34:58.114686 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-04-17 00:34:58.115210 | orchestrator | Thursday 17 April 2025 00:34:58 +0000 (0:00:00.751) 0:06:26.670 ******** 2025-04-17 00:34:58.200970 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:34:58.233056 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:34:58.262954 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:34:58.292894 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:34:58.361021 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:34:58.361404 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:34:58.362573 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:34:58.363431 | orchestrator | 2025-04-17 00:34:58.363974 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-04-17 00:34:58.365840 | orchestrator | Thursday 17 April 2025 00:34:58 +0000 (0:00:00.252) 0:06:26.922 ******** 2025-04-17 00:34:58.427233 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:34:58.467244 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:34:58.500840 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:34:58.532147 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:34:58.567510 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:34:58.756219 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:34:58.756442 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:34:58.756956 | orchestrator | 2025-04-17 00:34:58.757733 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-04-17 00:34:58.758255 | orchestrator | Thursday 17 April 2025 00:34:58 +0000 (0:00:00.395) 0:06:27.317 ******** 2025-04-17 00:34:58.858598 | orchestrator 
| ok: [testbed-manager] 2025-04-17 00:34:58.889597 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:34:58.922494 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:34:58.955899 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:34:59.024647 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:34:59.025283 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:34:59.026558 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:34:59.026834 | orchestrator | 2025-04-17 00:34:59.028312 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-04-17 00:34:59.029534 | orchestrator | Thursday 17 April 2025 00:34:59 +0000 (0:00:00.268) 0:06:27.586 ******** 2025-04-17 00:34:59.142094 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:34:59.178864 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:34:59.212046 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:34:59.243173 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:34:59.303322 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:34:59.303962 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:34:59.304765 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:34:59.306187 | orchestrator | 2025-04-17 00:34:59.306717 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-04-17 00:34:59.307727 | orchestrator | Thursday 17 April 2025 00:34:59 +0000 (0:00:00.274) 0:06:27.860 ******** 2025-04-17 00:34:59.396714 | orchestrator | ok: [testbed-manager] 2025-04-17 00:34:59.435729 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:34:59.465129 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:34:59.493646 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:34:59.562412 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:34:59.563839 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:34:59.564612 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:34:59.565617 | orchestrator | 2025-04-17 00:34:59.566148 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-04-17 00:34:59.566930 | orchestrator | Thursday 17 April 2025 00:34:59 +0000 (0:00:00.263) 0:06:28.124 ******** 2025-04-17 00:34:59.637568 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:34:59.675478 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:34:59.707122 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:34:59.739248 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:34:59.769194 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:34:59.838352 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:34:59.838522 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:34:59.839654 | orchestrator | 2025-04-17 00:34:59.842121 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-04-17 00:34:59.927959 | orchestrator | Thursday 17 April 2025 00:34:59 +0000 (0:00:00.276) 0:06:28.400 ******** 2025-04-17 00:34:59.928115 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:34:59.960118 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:34:59.989611 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:35:00.034376 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:35:00.174918 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:35:00.175190 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:35:00.178264 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:35:00.178721 | orchestrator | 
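The osism.services.docker steps above follow the usual collection pattern: load per-distribution variables, fill unset version variables with defaults, and include optional task files only when a matching toggle is set, which is why the block storage and zram storage includes are skipped on every host. A minimal sketch of that pattern as a role tasks file; the file, toggle, and default-variable names are assumptions for illustration, not the role's real ones:

# sketch-conditional-includes.yml: illustrative role tasks; assumed names are marked.
- name: Gather variables for each operating system
  ansible.builtin.include_vars: "{{ ansible_facts['os_family'] }}.yml"

- name: Set docker_cli_version variable to default value
  ansible.builtin.set_fact:
    docker_cli_version: "{{ docker_default_cli_version }}"  # assumed default variable
  when: docker_cli_version is not defined

- name: Include zram storage tasks
  ansible.builtin.include_tasks: storage-zram.yml           # assumed file name
  when: docker_zram_enabled | default(false)                # assumed toggle; false here, so the task is skipped

Skipped includes cost only the conditional check, so the play moves straight on to the docker install tasks that follow.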
2025-04-17 00:35:00.178762 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-04-17 00:35:00.179732 | orchestrator | Thursday 17 April 2025 00:35:00 +0000 (0:00:00.335) 0:06:28.735 ******** 2025-04-17 00:35:00.565239 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 00:35:00.567756 | orchestrator | 2025-04-17 00:35:00.575191 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-04-17 00:35:00.575277 | orchestrator | Thursday 17 April 2025 00:35:00 +0000 (0:00:00.389) 0:06:29.124 ******** 2025-04-17 00:35:01.411235 | orchestrator | ok: [testbed-manager] 2025-04-17 00:35:01.411753 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:35:01.413704 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:35:01.414301 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:35:01.414342 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:35:01.414821 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:35:01.417477 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:35:01.418132 | orchestrator | 2025-04-17 00:35:01.418389 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-04-17 00:35:01.418647 | orchestrator | Thursday 17 April 2025 00:35:01 +0000 (0:00:00.845) 0:06:29.970 ******** 2025-04-17 00:35:04.293344 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:35:04.293548 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:35:04.293578 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:35:04.294357 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:35:04.294906 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:35:04.295336 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:35:04.296511 | orchestrator | ok: [testbed-manager] 2025-04-17 00:35:04.297028 | orchestrator | 2025-04-17 00:35:04.297980 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-04-17 00:35:04.298480 | orchestrator | Thursday 17 April 2025 00:35:04 +0000 (0:00:02.884) 0:06:32.854 ******** 2025-04-17 00:35:04.359169 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-04-17 00:35:04.460646 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-04-17 00:35:04.460927 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-04-17 00:35:04.461210 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-04-17 00:35:04.462343 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-04-17 00:35:04.463731 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-04-17 00:35:04.529204 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:35:04.529359 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-04-17 00:35:04.529711 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-04-17 00:35:04.599601 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-04-17 00:35:04.599920 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:35:04.600920 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-04-17 00:35:04.602182 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-04-17 00:35:04.678203 | orchestrator | skipping: 
[testbed-node-4] 2025-04-17 00:35:04.679210 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-04-17 00:35:04.679653 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-04-17 00:35:04.680949 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-04-17 00:35:04.682482 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-04-17 00:35:04.742220 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:35:04.888266 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-04-17 00:35:04.888445 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:35:04.890154 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-04-17 00:35:04.894730 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-04-17 00:35:04.894897 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:35:04.894921 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-04-17 00:35:04.894942 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-04-17 00:35:04.896474 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-04-17 00:35:04.897664 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:35:04.899636 | orchestrator | 2025-04-17 00:35:04.900823 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-04-17 00:35:04.903324 | orchestrator | Thursday 17 April 2025 00:35:04 +0000 (0:00:00.594) 0:06:33.448 ******** 2025-04-17 00:35:11.182615 | orchestrator | ok: [testbed-manager] 2025-04-17 00:35:11.183325 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:35:11.184829 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:35:11.186712 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:35:11.187110 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:35:11.187884 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:35:11.188727 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:35:11.189563 | orchestrator | 2025-04-17 00:35:11.192738 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-04-17 00:35:11.193017 | orchestrator | Thursday 17 April 2025 00:35:11 +0000 (0:00:06.294) 0:06:39.742 ******** 2025-04-17 00:35:12.211936 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:35:12.212327 | orchestrator | ok: [testbed-manager] 2025-04-17 00:35:12.212878 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:35:12.215878 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:35:12.215962 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:35:12.215983 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:35:12.216691 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:35:12.217309 | orchestrator | 2025-04-17 00:35:12.217890 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-04-17 00:35:12.218582 | orchestrator | Thursday 17 April 2025 00:35:12 +0000 (0:00:01.029) 0:06:40.772 ******** 2025-04-17 00:35:19.180884 | orchestrator | ok: [testbed-manager] 2025-04-17 00:35:19.181656 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:35:19.181708 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:35:19.182169 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:35:19.182700 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:35:19.185242 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:35:19.186304 | orchestrator | changed: 
[testbed-node-2] 2025-04-17 00:35:19.186705 | orchestrator | 2025-04-17 00:35:19.188190 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-04-17 00:35:19.188417 | orchestrator | Thursday 17 April 2025 00:35:19 +0000 (0:00:06.966) 0:06:47.739 ******** 2025-04-17 00:35:22.261770 | orchestrator | changed: [testbed-manager] 2025-04-17 00:35:22.262156 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:35:22.262420 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:35:22.263632 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:35:22.264252 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:35:22.265176 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:35:22.266566 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:35:22.266867 | orchestrator | 2025-04-17 00:35:22.267552 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-04-17 00:35:22.268137 | orchestrator | Thursday 17 April 2025 00:35:22 +0000 (0:00:03.083) 0:06:50.823 ******** 2025-04-17 00:35:23.647784 | orchestrator | ok: [testbed-manager] 2025-04-17 00:35:23.648032 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:35:23.648472 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:35:23.649149 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:35:23.650293 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:35:23.650730 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:35:23.650879 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:35:23.651678 | orchestrator | 2025-04-17 00:35:23.652477 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-04-17 00:35:23.652923 | orchestrator | Thursday 17 April 2025 00:35:23 +0000 (0:00:01.384) 0:06:52.207 ******** 2025-04-17 00:35:24.949007 | orchestrator | ok: [testbed-manager] 2025-04-17 00:35:24.950149 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:35:24.950183 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:35:24.950197 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:35:24.950217 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:35:24.950383 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:35:24.950409 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:35:24.951612 | orchestrator | 2025-04-17 00:35:24.953187 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-04-17 00:35:24.955155 | orchestrator | Thursday 17 April 2025 00:35:24 +0000 (0:00:01.295) 0:06:53.503 ******** 2025-04-17 00:35:25.163378 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:35:25.231875 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:35:25.297311 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:35:25.373634 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:35:25.519723 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:35:25.519986 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:35:25.520976 | orchestrator | changed: [testbed-manager] 2025-04-17 00:35:25.521718 | orchestrator | 2025-04-17 00:35:25.523027 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-04-17 00:35:25.524257 | orchestrator | Thursday 17 April 2025 00:35:25 +0000 (0:00:00.578) 0:06:54.082 ******** 2025-04-17 00:35:34.471601 | orchestrator | ok: [testbed-manager] 2025-04-17 00:35:34.472323 | orchestrator | changed: [testbed-node-3] 2025-04-17 
00:35:34.473429 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:35:34.474163 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:35:34.476487 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:35:34.477164 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:35:34.478149 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:35:34.479063 | orchestrator | 2025-04-17 00:35:34.479990 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-04-17 00:35:34.480630 | orchestrator | Thursday 17 April 2025 00:35:34 +0000 (0:00:08.948) 0:07:03.030 ******** 2025-04-17 00:35:36.307687 | orchestrator | changed: [testbed-manager] 2025-04-17 00:35:36.308190 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:35:36.309411 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:35:36.309480 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:35:36.309867 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:35:36.311611 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:35:36.313117 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:35:36.313878 | orchestrator | 2025-04-17 00:35:36.315117 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-04-17 00:35:36.316117 | orchestrator | Thursday 17 April 2025 00:35:36 +0000 (0:00:01.833) 0:07:04.864 ******** 2025-04-17 00:35:48.162270 | orchestrator | ok: [testbed-manager] 2025-04-17 00:35:48.162650 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:35:48.162687 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:35:48.162702 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:35:48.162717 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:35:48.162738 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:35:48.163278 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:35:48.164183 | orchestrator | 2025-04-17 00:35:48.164472 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-04-17 00:35:48.165185 | orchestrator | Thursday 17 April 2025 00:35:48 +0000 (0:00:11.851) 0:07:16.716 ******** 2025-04-17 00:36:00.143538 | orchestrator | ok: [testbed-manager] 2025-04-17 00:36:00.144908 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:36:00.144948 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:36:00.144961 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:36:00.144982 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:36:00.145268 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:36:00.145735 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:36:00.146298 | orchestrator | 2025-04-17 00:36:00.146874 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-04-17 00:36:00.147554 | orchestrator | Thursday 17 April 2025 00:36:00 +0000 (0:00:11.985) 0:07:28.701 ******** 2025-04-17 00:36:00.553194 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-04-17 00:36:00.553691 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-04-17 00:36:01.328152 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-04-17 00:36:01.328505 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-04-17 00:36:01.330505 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-04-17 00:36:01.331247 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-04-17 00:36:01.331371 | orchestrator | ok: [testbed-node-1] 
=> (item=python3-docker) 2025-04-17 00:36:01.331933 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-04-17 00:36:01.332608 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-04-17 00:36:01.333135 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-04-17 00:36:01.333988 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-04-17 00:36:01.334412 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-04-17 00:36:01.334936 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-04-17 00:36:01.335865 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-04-17 00:36:01.336427 | orchestrator | 2025-04-17 00:36:01.336623 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-04-17 00:36:01.337224 | orchestrator | Thursday 17 April 2025 00:36:01 +0000 (0:00:01.185) 0:07:29.886 ******** 2025-04-17 00:36:01.457684 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:36:01.520021 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:36:01.582507 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:36:01.650257 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:36:01.723601 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:36:01.851203 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:36:05.455668 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:36:05.456561 | orchestrator | 2025-04-17 00:36:05.456637 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-04-17 00:36:05.456666 | orchestrator | Thursday 17 April 2025 00:36:01 +0000 (0:00:00.523) 0:07:30.410 ******** 2025-04-17 00:36:05.456713 | orchestrator | ok: [testbed-manager] 2025-04-17 00:36:05.457712 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:36:05.457782 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:36:05.457832 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:36:05.457955 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:36:05.458351 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:36:05.458837 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:36:05.459071 | orchestrator | 2025-04-17 00:36:05.459446 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-04-17 00:36:05.459838 | orchestrator | Thursday 17 April 2025 00:36:05 +0000 (0:00:03.604) 0:07:34.015 ******** 2025-04-17 00:36:05.560576 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:36:05.706222 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:36:05.765148 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:36:05.816070 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:36:05.873161 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:36:05.959636 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:36:05.959867 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:36:05.960753 | orchestrator | 2025-04-17 00:36:05.961415 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-04-17 00:36:05.962175 | orchestrator | Thursday 17 April 2025 00:36:05 +0000 (0:00:00.505) 0:07:34.521 ******** 2025-04-17 00:36:06.015316 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-04-17 00:36:06.105090 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-04-17 00:36:06.105201 | 
orchestrator | skipping: [testbed-manager] 2025-04-17 00:36:06.105745 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-04-17 00:36:06.106695 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-04-17 00:36:06.160322 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:36:06.160491 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-04-17 00:36:06.160891 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-04-17 00:36:06.221433 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:36:06.221619 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-04-17 00:36:06.222006 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-04-17 00:36:06.284196 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:36:06.284407 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-04-17 00:36:06.284907 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-04-17 00:36:06.341037 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:36:06.447148 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-04-17 00:36:06.448242 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-04-17 00:36:06.448992 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:36:06.449486 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-04-17 00:36:06.450083 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-04-17 00:36:06.450667 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:36:06.451135 | orchestrator | 2025-04-17 00:36:06.451766 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-04-17 00:36:06.452771 | orchestrator | Thursday 17 April 2025 00:36:06 +0000 (0:00:00.486) 0:07:35.007 ******** 2025-04-17 00:36:06.564694 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:36:06.629969 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:36:06.686785 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:36:06.741410 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:36:06.794731 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:36:06.894269 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:36:06.894712 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:36:06.895905 | orchestrator | 2025-04-17 00:36:06.897456 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-04-17 00:36:06.899878 | orchestrator | Thursday 17 April 2025 00:36:06 +0000 (0:00:00.448) 0:07:35.456 ******** 2025-04-17 00:36:07.007999 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:36:07.061892 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:36:07.114882 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:36:07.167110 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:36:07.221761 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:36:07.313131 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:36:07.315539 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:36:07.316561 | orchestrator | 2025-04-17 00:36:07.316637 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-04-17 00:36:07.317657 | orchestrator | Thursday 17 April 2025 00:36:07 +0000 (0:00:00.417) 0:07:35.873 ******** 2025-04-17 00:36:07.421642 | orchestrator | 
skipping: [testbed-manager] 2025-04-17 00:36:07.477295 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:36:07.546468 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:36:07.608531 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:36:07.662979 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:36:07.771481 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:36:07.771756 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:36:07.772262 | orchestrator | 2025-04-17 00:36:07.773674 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-04-17 00:36:07.774745 | orchestrator | Thursday 17 April 2025 00:36:07 +0000 (0:00:00.452) 0:07:36.326 ******** 2025-04-17 00:36:13.525190 | orchestrator | ok: [testbed-manager] 2025-04-17 00:36:13.525870 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:36:13.526332 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:36:13.527609 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:36:13.528299 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:36:13.529640 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:36:13.530743 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:36:13.533329 | orchestrator | 2025-04-17 00:36:13.534678 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-04-17 00:36:13.535751 | orchestrator | Thursday 17 April 2025 00:36:13 +0000 (0:00:05.760) 0:07:42.086 ******** 2025-04-17 00:36:14.357076 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 00:36:14.357534 | orchestrator | 2025-04-17 00:36:14.360581 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-04-17 00:36:15.167941 | orchestrator | Thursday 17 April 2025 00:36:14 +0000 (0:00:00.829) 0:07:42.915 ******** 2025-04-17 00:36:15.168116 | orchestrator | ok: [testbed-manager] 2025-04-17 00:36:15.168576 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:36:15.169390 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:36:15.169437 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:36:15.169784 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:36:15.170453 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:36:15.170759 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:36:15.171595 | orchestrator | 2025-04-17 00:36:15.172190 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-04-17 00:36:15.173193 | orchestrator | Thursday 17 April 2025 00:36:15 +0000 (0:00:00.813) 0:07:43.728 ******** 2025-04-17 00:36:16.172927 | orchestrator | ok: [testbed-manager] 2025-04-17 00:36:16.173781 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:36:16.176243 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:36:16.176767 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:36:16.177899 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:36:16.178290 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:36:16.179510 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:36:16.179802 | orchestrator | 2025-04-17 00:36:16.180493 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-04-17 00:36:16.181138 | orchestrator | Thursday 17 April 
2025 00:36:16 +0000 (0:00:01.003) 0:07:44.732 ******** 2025-04-17 00:36:17.492206 | orchestrator | ok: [testbed-manager] 2025-04-17 00:36:17.492391 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:36:17.494223 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:36:17.495316 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:36:17.497150 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:36:17.497727 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:36:17.498479 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:36:17.498885 | orchestrator | 2025-04-17 00:36:17.498916 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-04-17 00:36:17.499612 | orchestrator | Thursday 17 April 2025 00:36:17 +0000 (0:00:01.319) 0:07:46.051 ******** 2025-04-17 00:36:17.623936 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:36:18.821317 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:36:18.822259 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:36:18.822292 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:36:18.823298 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:36:18.823941 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:36:18.824984 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:36:18.825751 | orchestrator | 2025-04-17 00:36:18.826439 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-04-17 00:36:18.826798 | orchestrator | Thursday 17 April 2025 00:36:18 +0000 (0:00:01.329) 0:07:47.381 ******** 2025-04-17 00:36:20.201016 | orchestrator | ok: [testbed-manager] 2025-04-17 00:36:20.201959 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:36:20.203303 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:36:20.204865 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:36:20.205883 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:36:20.206958 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:36:20.208191 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:36:20.208812 | orchestrator | 2025-04-17 00:36:20.209082 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-04-17 00:36:20.209458 | orchestrator | Thursday 17 April 2025 00:36:20 +0000 (0:00:01.378) 0:07:48.759 ******** 2025-04-17 00:36:21.517146 | orchestrator | changed: [testbed-manager] 2025-04-17 00:36:21.518302 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:36:21.521398 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:36:21.521862 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:36:21.521892 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:36:21.521907 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:36:21.521922 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:36:21.521941 | orchestrator | 2025-04-17 00:36:21.522409 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-04-17 00:36:21.523022 | orchestrator | Thursday 17 April 2025 00:36:21 +0000 (0:00:01.316) 0:07:50.076 ******** 2025-04-17 00:36:22.510000 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 00:36:22.510258 | orchestrator | 2025-04-17 00:36:22.513323 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] 
*************************** 2025-04-17 00:36:23.815011 | orchestrator | Thursday 17 April 2025 00:36:22 +0000 (0:00:00.992) 0:07:51.069 ******** 2025-04-17 00:36:23.815189 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:36:23.815880 | orchestrator | ok: [testbed-manager] 2025-04-17 00:36:23.815916 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:36:23.815966 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:36:23.817171 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:36:23.817547 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:36:23.818428 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:36:23.820983 | orchestrator | 2025-04-17 00:36:23.822608 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-04-17 00:36:23.823221 | orchestrator | Thursday 17 April 2025 00:36:23 +0000 (0:00:01.303) 0:07:52.372 ******** 2025-04-17 00:36:24.931523 | orchestrator | ok: [testbed-manager] 2025-04-17 00:36:24.931738 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:36:24.933339 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:36:24.934318 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:36:24.935093 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:36:24.935783 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:36:24.936625 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:36:24.937732 | orchestrator | 2025-04-17 00:36:24.938292 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-04-17 00:36:24.940860 | orchestrator | Thursday 17 April 2025 00:36:24 +0000 (0:00:01.117) 0:07:53.489 ******** 2025-04-17 00:36:26.160409 | orchestrator | ok: [testbed-manager] 2025-04-17 00:36:26.161130 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:36:26.161456 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:36:26.162113 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:36:26.162967 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:36:26.165139 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:36:26.165996 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:36:26.167647 | orchestrator | 2025-04-17 00:36:26.167790 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-04-17 00:36:26.168740 | orchestrator | Thursday 17 April 2025 00:36:26 +0000 (0:00:01.228) 0:07:54.717 ******** 2025-04-17 00:36:27.274402 | orchestrator | ok: [testbed-manager] 2025-04-17 00:36:27.277130 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:36:27.277180 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:36:27.278535 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:36:27.278573 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:36:27.279338 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:36:27.279365 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:36:27.279382 | orchestrator | 2025-04-17 00:36:27.279401 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-04-17 00:36:27.279425 | orchestrator | Thursday 17 April 2025 00:36:27 +0000 (0:00:01.113) 0:07:55.831 ******** 2025-04-17 00:36:28.396893 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 00:36:28.398705 | orchestrator | 2025-04-17 00:36:28.400204 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 
2025-04-17 00:36:28.400979 | orchestrator | Thursday 17 April 2025 00:36:28 +0000 (0:00:00.845) 0:07:56.677 ******** 2025-04-17 00:36:28.401016 | orchestrator | 2025-04-17 00:36:28.401947 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-17 00:36:28.402724 | orchestrator | Thursday 17 April 2025 00:36:28 +0000 (0:00:00.037) 0:07:56.714 ******** 2025-04-17 00:36:28.403523 | orchestrator | 2025-04-17 00:36:28.404293 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-17 00:36:28.406310 | orchestrator | Thursday 17 April 2025 00:36:28 +0000 (0:00:00.043) 0:07:56.757 ******** 2025-04-17 00:36:28.407397 | orchestrator | 2025-04-17 00:36:28.407944 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-17 00:36:28.408546 | orchestrator | Thursday 17 April 2025 00:36:28 +0000 (0:00:00.037) 0:07:56.795 ******** 2025-04-17 00:36:28.409423 | orchestrator | 2025-04-17 00:36:28.409988 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-17 00:36:28.410064 | orchestrator | Thursday 17 April 2025 00:36:28 +0000 (0:00:00.038) 0:07:56.833 ******** 2025-04-17 00:36:28.411217 | orchestrator | 2025-04-17 00:36:28.411816 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-17 00:36:28.411845 | orchestrator | Thursday 17 April 2025 00:36:28 +0000 (0:00:00.044) 0:07:56.878 ******** 2025-04-17 00:36:28.412667 | orchestrator | 2025-04-17 00:36:28.412889 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-17 00:36:28.415382 | orchestrator | Thursday 17 April 2025 00:36:28 +0000 (0:00:00.038) 0:07:56.916 ******** 2025-04-17 00:36:29.462714 | orchestrator | 2025-04-17 00:36:29.462973 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-17 00:36:29.463001 | orchestrator | Thursday 17 April 2025 00:36:28 +0000 (0:00:00.038) 0:07:56.955 ******** 2025-04-17 00:36:29.463039 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:36:29.463128 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:36:29.463592 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:36:29.464119 | orchestrator | 2025-04-17 00:36:29.464581 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-04-17 00:36:29.465244 | orchestrator | Thursday 17 April 2025 00:36:29 +0000 (0:00:01.063) 0:07:58.019 ******** 2025-04-17 00:36:30.956284 | orchestrator | changed: [testbed-manager] 2025-04-17 00:36:30.956804 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:36:30.958140 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:36:30.958446 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:36:30.960247 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:36:30.960742 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:36:30.961589 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:36:30.962413 | orchestrator | 2025-04-17 00:36:30.963029 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-04-17 00:36:30.963994 | orchestrator | Thursday 17 April 2025 00:36:30 +0000 (0:00:01.496) 0:07:59.515 ******** 2025-04-17 00:36:32.024047 | orchestrator | changed: [testbed-manager] 2025-04-17 00:36:32.024303 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:36:32.025680 | 
orchestrator | changed: [testbed-node-4] 2025-04-17 00:36:32.027299 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:36:32.028056 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:36:32.028090 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:36:32.029000 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:36:32.029876 | orchestrator | 2025-04-17 00:36:32.030627 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-04-17 00:36:32.031273 | orchestrator | Thursday 17 April 2025 00:36:32 +0000 (0:00:01.067) 0:08:00.582 ******** 2025-04-17 00:36:32.150496 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:36:34.081416 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:36:34.081638 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:36:34.082183 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:36:34.082985 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:36:34.083708 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:36:34.084418 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:36:34.084991 | orchestrator | 2025-04-17 00:36:34.085269 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-04-17 00:36:34.085920 | orchestrator | Thursday 17 April 2025 00:36:34 +0000 (0:00:02.059) 0:08:02.642 ******** 2025-04-17 00:36:34.175009 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:36:34.175419 | orchestrator | 2025-04-17 00:36:34.175495 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-04-17 00:36:34.176010 | orchestrator | Thursday 17 April 2025 00:36:34 +0000 (0:00:00.092) 0:08:02.735 ******** 2025-04-17 00:36:35.163403 | orchestrator | ok: [testbed-manager] 2025-04-17 00:36:35.163807 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:36:35.164507 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:36:35.165269 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:36:35.165694 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:36:35.167017 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:36:35.167480 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:36:35.167520 | orchestrator | 2025-04-17 00:36:35.167546 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-04-17 00:36:35.168317 | orchestrator | Thursday 17 April 2025 00:36:35 +0000 (0:00:00.988) 0:08:03.723 ******** 2025-04-17 00:36:35.294665 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:36:35.356958 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:36:35.591227 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:36:35.656184 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:36:35.717881 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:36:35.833779 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:36:35.834202 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:36:35.834243 | orchestrator | 2025-04-17 00:36:35.834730 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-04-17 00:36:35.835136 | orchestrator | Thursday 17 April 2025 00:36:35 +0000 (0:00:00.670) 0:08:04.394 ******** 2025-04-17 00:36:36.698895 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2
2025-04-17 00:36:36.699332 | orchestrator |
2025-04-17 00:36:36.699610 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-04-17 00:36:36.702391 | orchestrator | Thursday 17 April 2025 00:36:36 +0000 (0:00:00.863) 0:08:05.257 ********
2025-04-17 00:36:37.103717 | orchestrator | ok: [testbed-manager]
2025-04-17 00:36:37.533786 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:36:37.534350 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:36:37.534508 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:36:37.534741 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:36:37.535957 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:36:37.536543 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:36:37.537211 | orchestrator |
2025-04-17 00:36:37.537863 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-04-17 00:36:37.538132 | orchestrator | Thursday 17 April 2025 00:36:37 +0000 (0:00:00.837) 0:08:06.095 ********
2025-04-17 00:36:40.086117 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-04-17 00:36:40.086364 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-04-17 00:36:40.086393 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-04-17 00:36:40.086417 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-04-17 00:36:40.086903 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-04-17 00:36:40.086949 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-04-17 00:36:40.088167 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-04-17 00:36:40.088875 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-04-17 00:36:40.089030 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-04-17 00:36:40.089346 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-04-17 00:36:40.090274 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-04-17 00:36:40.090554 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-04-17 00:36:40.090960 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-04-17 00:36:40.091429 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-04-17 00:36:40.091781 | orchestrator |
2025-04-17 00:36:40.092176 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-04-17 00:36:40.092539 | orchestrator | Thursday 17 April 2025 00:36:40 +0000 (0:00:02.549) 0:08:08.644 ********
2025-04-17 00:36:40.220001 | orchestrator | skipping: [testbed-manager]
2025-04-17 00:36:40.280907 | orchestrator | skipping: [testbed-node-3]
2025-04-17 00:36:40.349402 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:36:40.412948 | orchestrator | skipping: [testbed-node-5]
2025-04-17 00:36:40.473698 | orchestrator | skipping: [testbed-node-0]
2025-04-17 00:36:40.569564 | orchestrator | skipping: [testbed-node-1]
2025-04-17 00:36:40.569892 | orchestrator | skipping: [testbed-node-2]
2025-04-17 00:36:40.569936 | orchestrator |
2025-04-17 00:36:40.571060 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-04-17 00:36:40.571397 | orchestrator | Thursday 17 April 2025 00:36:40 +0000 (0:00:00.482) 0:08:09.127 ********
2025-04-17 00:36:41.335813 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 00:36:41.336112 | orchestrator |
2025-04-17 00:36:41.336891 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-04-17 00:36:41.337502 | orchestrator | Thursday 17 April 2025 00:36:41 +0000 (0:00:00.767) 0:08:09.895 ********
2025-04-17 00:36:42.311587 | orchestrator | ok: [testbed-manager]
2025-04-17 00:36:42.311936 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:36:42.311966 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:36:42.311984 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:36:42.313211 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:36:42.313389 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:36:42.313417 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:36:42.313771 | orchestrator |
2025-04-17 00:36:42.314322 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-04-17 00:36:42.314713 | orchestrator | Thursday 17 April 2025 00:36:42 +0000 (0:00:00.975) 0:08:10.871 ********
2025-04-17 00:36:42.732168 | orchestrator | ok: [testbed-manager]
2025-04-17 00:36:43.097523 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:36:43.098728 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:36:43.099696 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:36:43.100274 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:36:43.102138 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:36:43.102667 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:36:43.103304 | orchestrator |
2025-04-17 00:36:43.103957 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-04-17 00:36:43.104443 | orchestrator | Thursday 17 April 2025 00:36:43 +0000 (0:00:00.787) 0:08:11.658 ********
2025-04-17 00:36:43.231116 | orchestrator | skipping: [testbed-manager]
2025-04-17 00:36:43.296149 | orchestrator | skipping: [testbed-node-3]
2025-04-17 00:36:43.361711 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:36:43.429127 | orchestrator | skipping: [testbed-node-5]
2025-04-17 00:36:43.492428 | orchestrator | skipping: [testbed-node-0]
2025-04-17 00:36:43.591098 | orchestrator | skipping: [testbed-node-1]
2025-04-17 00:36:43.592182 | orchestrator | skipping: [testbed-node-2]
2025-04-17 00:36:43.592461 | orchestrator |
2025-04-17 00:36:43.593149 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-04-17 00:36:43.594234 | orchestrator | Thursday 17 April 2025 00:36:43 +0000 (0:00:00.491) 0:08:12.150 ********
2025-04-17 00:36:44.937412 | orchestrator | ok: [testbed-manager]
2025-04-17 00:36:44.939873 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:36:44.939933 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:36:44.939993 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:36:44.940156 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:36:44.940180 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:36:44.940194 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:36:44.940208 | orchestrator |
2025-04-17 00:36:44.940228 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-04-17 00:36:44.940546 | orchestrator | Thursday 17 April 2025 00:36:44 +0000 (0:00:01.346) 0:08:13.496 ********
2025-04-17 00:36:45.068191 | orchestrator | skipping: [testbed-manager]
2025-04-17 00:36:45.148577 | orchestrator | skipping: [testbed-node-3]
2025-04-17 00:36:45.218165 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:36:45.275194 | orchestrator | skipping: [testbed-node-5]
2025-04-17 00:36:45.341025 | orchestrator | skipping: [testbed-node-0]
2025-04-17 00:36:45.425203 | orchestrator | skipping: [testbed-node-1]
2025-04-17 00:36:45.425454 | orchestrator | skipping: [testbed-node-2]
2025-04-17 00:36:45.426695 | orchestrator |
2025-04-17 00:36:45.427437 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-04-17 00:36:45.433567 | orchestrator | Thursday 17 April 2025 00:36:45 +0000 (0:00:00.488) 0:08:13.985 ********
2025-04-17 00:36:47.275912 | orchestrator | ok: [testbed-manager]
2025-04-17 00:36:47.276599 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:36:47.277636 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:36:47.278244 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:36:47.279264 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:36:47.279641 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:36:47.280115 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:36:47.280710 | orchestrator |
2025-04-17 00:36:47.281155 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-04-17 00:36:47.281685 | orchestrator | Thursday 17 April 2025 00:36:47 +0000 (0:00:01.847) 0:08:15.832 ********
2025-04-17 00:36:48.613118 | orchestrator | ok: [testbed-manager]
2025-04-17 00:36:48.613325 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:36:48.613355 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:36:48.614561 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:36:48.615637 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:36:48.617805 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:36:48.618900 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:36:48.619818 | orchestrator |
2025-04-17 00:36:48.621434 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-04-17 00:36:51.412476 | orchestrator | Thursday 17 April 2025 00:36:48 +0000 (0:00:01.338) 0:08:17.171 ********
2025-04-17 00:36:51.412659 | orchestrator | ok: [testbed-manager]
2025-04-17 00:36:51.413234 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:36:51.413724 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:36:51.417465 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:36:51.417945 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:36:51.417977 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:36:51.418512 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:36:51.419137 | orchestrator |
2025-04-17 00:36:51.419809 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-04-17 00:36:51.420259 | orchestrator | Thursday 17 April 2025 00:36:51 +0000 (0:00:02.799) 0:08:19.971 ********
2025-04-17 00:36:52.961421 | orchestrator | ok: [testbed-manager]
2025-04-17 00:36:52.962001 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:36:52.966789 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:36:52.967501 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:36:52.967557 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:36:52.968402 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:36:52.969589 | orchestrator | changed: [testbed-node-2]
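A note on the docker-compose handling above: the legacy docker-compose binary and package are removed, the docker-compose-plugin is installed instead, and an osism.target plus a docker-compose systemd unit are deployed. The unit contents are not part of this log; a minimal, hypothetical sketch of the "Copy osism.target systemd file" step, with path and unit body assumed, could look like this:

```yaml
# Hedged sketch only; the real osism.commons.docker_compose role ships its own unit.
- name: Copy osism.target systemd file (illustrative)
  become: true
  ansible.builtin.copy:
    dest: /etc/systemd/system/osism.target   # path assumed, not shown in this log
    content: |
      [Unit]
      Description=OSISM services umbrella target
      Wants=multi-user.target

      [Install]
      WantedBy=multi-user.target
    mode: "0644"
  notify: Reload systemd daemon   # matches the handler that runs later in this play
```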
2025-04-17 00:36:52.970457 | orchestrator |
2025-04-17 00:36:52.971510 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-04-17 00:36:52.972419 | orchestrator | Thursday 17 April 2025 00:36:52 +0000 (0:00:01.548) 0:08:21.519 ********
2025-04-17 00:36:53.441895 | orchestrator | ok: [testbed-manager]
2025-04-17 00:36:53.510797 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:36:53.586520 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:36:54.001383 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:36:54.002173 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:36:54.003430 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:36:54.004556 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:36:54.005267 | orchestrator |
2025-04-17 00:36:54.006243 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-04-17 00:36:54.006949 | orchestrator | Thursday 17 April 2025 00:36:53 +0000 (0:00:01.042) 0:08:22.562 ********
2025-04-17 00:36:54.135647 | orchestrator | skipping: [testbed-manager]
2025-04-17 00:36:54.200868 | orchestrator | skipping: [testbed-node-3]
2025-04-17 00:36:54.265953 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:36:54.335360 | orchestrator | skipping: [testbed-node-5]
2025-04-17 00:36:54.398365 | orchestrator | skipping: [testbed-node-0]
2025-04-17 00:36:54.814312 | orchestrator | skipping: [testbed-node-1]
2025-04-17 00:36:54.814993 | orchestrator | skipping: [testbed-node-2]
2025-04-17 00:36:54.816291 | orchestrator |
2025-04-17 00:36:54.817253 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-04-17 00:36:54.817598 | orchestrator | Thursday 17 April 2025 00:36:54 +0000 (0:00:00.810) 0:08:23.373 ********
2025-04-17 00:36:54.941259 | orchestrator | skipping: [testbed-manager]
2025-04-17 00:36:55.008432 | orchestrator | skipping: [testbed-node-3]
2025-04-17 00:36:55.068797 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:36:55.131441 | orchestrator | skipping: [testbed-node-5]
2025-04-17 00:36:55.200495 | orchestrator | skipping: [testbed-node-0]
2025-04-17 00:36:55.304763 | orchestrator | skipping: [testbed-node-1]
2025-04-17 00:36:55.305051 | orchestrator | skipping: [testbed-node-2]
2025-04-17 00:36:55.308304 | orchestrator |
2025-04-17 00:36:55.436469 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-04-17 00:36:55.436613 | orchestrator | Thursday 17 April 2025 00:36:55 +0000 (0:00:00.490) 0:08:23.863 ********
2025-04-17 00:36:55.436653 | orchestrator | ok: [testbed-manager]
2025-04-17 00:36:55.508835 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:36:55.570101 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:36:55.650415 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:36:55.713787 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:36:55.816510 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:36:55.817092 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:36:55.818168 | orchestrator |
2025-04-17 00:36:55.819962 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-04-17 00:36:55.821180 | orchestrator | Thursday 17 April 2025 00:36:55 +0000 (0:00:00.512) 0:08:24.376 ********
2025-04-17 00:36:56.103786 | orchestrator | ok: [testbed-manager]
2025-04-17 00:36:56.169813 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:36:56.233212 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:36:56.304567 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:36:56.368234 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:36:56.463332 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:36:56.464238 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:36:56.465259 | orchestrator |
2025-04-17 00:36:56.468315 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-04-17 00:36:56.597452 | orchestrator | Thursday 17 April 2025 00:36:56 +0000 (0:00:00.646) 0:08:25.023 ********
2025-04-17 00:36:56.597621 | orchestrator | ok: [testbed-manager]
2025-04-17 00:36:56.659909 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:36:56.732695 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:36:56.793485 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:36:56.855890 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:36:56.961464 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:36:56.962440 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:36:56.962635 | orchestrator |
2025-04-17 00:36:56.963129 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-04-17 00:36:56.966537 | orchestrator | Thursday 17 April 2025 00:36:56 +0000 (0:00:00.500) 0:08:25.523 ********
2025-04-17 00:37:02.484231 | orchestrator | ok: [testbed-manager]
2025-04-17 00:37:02.484525 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:37:02.484563 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:37:02.484882 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:37:02.485406 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:37:02.485584 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:37:02.486728 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:37:02.487044 | orchestrator |
2025-04-17 00:37:02.487353 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-04-17 00:37:02.487382 | orchestrator | Thursday 17 April 2025 00:37:02 +0000 (0:00:05.519) 0:08:31.043 ********
2025-04-17 00:37:02.702563 | orchestrator | skipping: [testbed-manager]
2025-04-17 00:37:02.769442 | orchestrator | skipping: [testbed-node-3]
2025-04-17 00:37:02.841159 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:37:02.901564 | orchestrator | skipping: [testbed-node-5]
2025-04-17 00:37:03.002342 | orchestrator | skipping: [testbed-node-0]
2025-04-17 00:37:03.004490 | orchestrator | skipping: [testbed-node-1]
2025-04-17 00:37:03.004902 | orchestrator | skipping: [testbed-node-2]
2025-04-17 00:37:03.004963 | orchestrator |
2025-04-17 00:37:03.005667 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-04-17 00:37:03.005977 | orchestrator | Thursday 17 April 2025 00:37:02 +0000 (0:00:00.519) 0:08:31.562 ********
2025-04-17 00:37:03.988112 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 00:37:03.988369 | orchestrator |
2025-04-17 00:37:03.989415 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-04-17 00:37:03.989886 | orchestrator | Thursday 17 April 2025 00:37:03 +0000 (0:00:00.982) 0:08:32.545 ********
2025-04-17 00:37:05.766438 | orchestrator | ok: [testbed-manager]
2025-04-17 00:37:05.767235 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:37:05.767283 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:37:05.768578 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:37:05.769395 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:37:05.769986 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:37:05.770604 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:37:05.771291 | orchestrator |
2025-04-17 00:37:05.771915 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-04-17 00:37:05.772447 | orchestrator | Thursday 17 April 2025 00:37:05 +0000 (0:00:01.778) 0:08:34.323 ********
2025-04-17 00:37:06.838428 | orchestrator | ok: [testbed-manager]
2025-04-17 00:37:06.839115 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:37:06.840927 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:37:06.841532 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:37:06.842089 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:37:06.842926 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:37:06.843234 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:37:06.843779 | orchestrator |
2025-04-17 00:37:06.844640 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-04-17 00:37:06.845006 | orchestrator | Thursday 17 April 2025 00:37:06 +0000 (0:00:01.075) 0:08:35.399 ********
2025-04-17 00:37:07.249306 | orchestrator | ok: [testbed-manager]
2025-04-17 00:37:07.669443 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:37:07.669657 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:37:07.670095 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:37:07.670409 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:37:07.671503 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:37:07.671635 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:37:07.672170 | orchestrator |
2025-04-17 00:37:07.672761 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-04-17 00:37:07.673445 | orchestrator | Thursday 17 April 2025 00:37:07 +0000 (0:00:00.830) 0:08:36.229 ********
2025-04-17 00:37:09.571558 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-17 00:37:09.572265 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-17 00:37:09.573174 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-17 00:37:09.573934 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-17 00:37:09.580203 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-17 00:37:09.580753 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-17 00:37:09.581480 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-04-17 00:37:09.581760 | orchestrator |
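The chrony section above installs the package, ensures the service is running, and templates the configuration file from the collection-shipped chrony.conf.j2. The rendered file is not printed in the log; a hedged sketch of the copy step on a Debian-family host (paths as conventionally used on Ubuntu 24.04, task options assumed):

```yaml
# Illustrative only; the real osism.services.chrony role drives this via its own variables.
- name: Copy configuration file (illustrative)
  become: true
  ansible.builtin.template:
    src: chrony.conf.j2
    dest: /etc/chrony/chrony.conf   # Debian-family default location
    mode: "0644"
  notify: Restart chrony service    # the handler seen running at the end of this play
```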
2025-04-17 00:37:09.582238 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-04-17 00:37:09.582707 | orchestrator | Thursday 17 April 2025 00:37:09 +0000 (0:00:01.901) 0:08:38.131 ********
2025-04-17 00:37:10.339927 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 00:37:10.340199 | orchestrator |
2025-04-17 00:37:10.341411 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-04-17 00:37:10.342123 | orchestrator | Thursday 17 April 2025 00:37:10 +0000 (0:00:00.769) 0:08:38.900 ********
2025-04-17 00:37:19.120624 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:37:19.121519 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:37:19.124880 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:37:19.125029 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:37:19.126119 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:37:19.126176 | orchestrator | changed: [testbed-manager]
2025-04-17 00:37:19.127607 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:37:19.128425 | orchestrator |
2025-04-17 00:37:19.129562 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-04-17 00:37:19.130501 | orchestrator | Thursday 17 April 2025 00:37:19 +0000 (0:00:08.779) 0:08:47.679 ********
2025-04-17 00:37:20.914630 | orchestrator | ok: [testbed-manager]
2025-04-17 00:37:20.915008 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:37:20.915050 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:37:20.915989 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:37:20.920173 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:37:20.920538 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:37:20.921312 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:37:20.921976 | orchestrator |
2025-04-17 00:37:20.922729 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-04-17 00:37:20.923416 | orchestrator | Thursday 17 April 2025 00:37:20 +0000 (0:00:01.793) 0:08:49.472 ********
2025-04-17 00:37:22.157429 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:37:22.158112 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:37:22.158335 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:37:22.159043 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:37:22.160297 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:37:22.160734 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:37:22.161263 | orchestrator |
2025-04-17 00:37:22.162155 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-04-17 00:37:22.162780 | orchestrator | Thursday 17 April 2025 00:37:22 +0000 (0:00:01.243) 0:08:50.716 ********
2025-04-17 00:37:23.537248 | orchestrator | changed: [testbed-manager]
2025-04-17 00:37:23.539568 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:37:23.539620 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:37:23.540356 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:37:23.540384 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:37:23.540402 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:37:23.540426 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:37:23.540919 | orchestrator |
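The two RUNNING HANDLER entries above fire only once per host, at the end of the play, because configuration tasks earlier in the play notified them. A minimal sketch of that notify/handler pattern, using the chrony restart seen here (module options assumed, not copied from the role):

```yaml
# Standard Ansible notify/handler mechanics as visible in the log above.
- hosts: all
  tasks:
    - name: Copy configuration file
      ansible.builtin.template:
        src: chrony.conf.j2
        dest: /etc/chrony/chrony.conf
      notify: Restart chrony service   # queues the handler; it runs once per host later
  handlers:
    - name: Restart chrony service
      become: true
      ansible.builtin.service:
        name: chrony
        state: restarted
```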
2025-04-17 00:37:23.541599 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-04-17 00:37:23.542315 | orchestrator |
2025-04-17 00:37:23.542516 | orchestrator | TASK [Include hardening role] **************************************************
2025-04-17 00:37:23.543073 | orchestrator | Thursday 17 April 2025 00:37:23 +0000 (0:00:01.380) 0:08:52.096 ********
2025-04-17 00:37:23.657447 | orchestrator | skipping: [testbed-manager]
2025-04-17 00:37:23.724041 | orchestrator | skipping: [testbed-node-3]
2025-04-17 00:37:23.784544 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:37:23.849666 | orchestrator | skipping: [testbed-node-5]
2025-04-17 00:37:23.907247 | orchestrator | skipping: [testbed-node-0]
2025-04-17 00:37:24.020784 | orchestrator | skipping: [testbed-node-1]
2025-04-17 00:37:24.021656 | orchestrator | skipping: [testbed-node-2]
2025-04-17 00:37:24.021744 | orchestrator |
2025-04-17 00:37:24.022589 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-04-17 00:37:24.025549 | orchestrator |
2025-04-17 00:37:25.269240 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-04-17 00:37:25.269413 | orchestrator | Thursday 17 April 2025 00:37:24 +0000 (0:00:00.484) 0:08:52.580 ********
2025-04-17 00:37:25.269453 | orchestrator | changed: [testbed-manager]
2025-04-17 00:37:25.270225 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:37:25.270745 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:37:25.270779 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:37:25.271338 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:37:25.271736 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:37:25.272720 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:37:25.273503 | orchestrator |
2025-04-17 00:37:25.273892 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-04-17 00:37:25.274993 | orchestrator | Thursday 17 April 2025 00:37:25 +0000 (0:00:01.247) 0:08:53.828 ********
2025-04-17 00:37:26.647287 | orchestrator | ok: [testbed-manager]
2025-04-17 00:37:26.647980 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:37:26.648026 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:37:26.648668 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:37:26.649292 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:37:26.650098 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:37:26.650823 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:37:26.651590 | orchestrator |
2025-04-17 00:37:26.652092 | orchestrator | TASK [Include auditd role] *****************************************************
2025-04-17 00:37:26.652400 | orchestrator | Thursday 17 April 2025 00:37:26 +0000 (0:00:01.379) 0:08:55.207 ********
2025-04-17 00:37:26.957935 | orchestrator | skipping: [testbed-manager]
2025-04-17 00:37:27.019854 | orchestrator | skipping: [testbed-node-3]
2025-04-17 00:37:27.079581 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:37:27.150902 | orchestrator | skipping: [testbed-node-5]
2025-04-17 00:37:27.212115 | orchestrator | skipping: [testbed-node-0]
2025-04-17 00:37:27.611733 | orchestrator | skipping: [testbed-node-1]
2025-04-17 00:37:27.612996 | orchestrator | skipping: [testbed-node-2]
2025-04-17 00:37:27.613826 | orchestrator |
2025-04-17 00:37:27.615891 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-04-17 00:37:27.617202 | orchestrator | Thursday 17 April 2025 00:37:27 +0000 (0:00:00.965) 0:08:56.173 ********
2025-04-17 00:37:28.824318 | orchestrator | changed: [testbed-manager]
2025-04-17 00:37:28.824519 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:37:28.826130 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:37:28.827532 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:37:28.828485 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:37:28.829402 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:37:28.830003 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:37:28.831069 | orchestrator |
2025-04-17 00:37:28.831818 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-04-17 00:37:28.832689 | orchestrator |
2025-04-17 00:37:28.833519 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-04-17 00:37:28.834427 | orchestrator | Thursday 17 April 2025 00:37:28 +0000 (0:00:01.211) 0:08:57.384 ********
2025-04-17 00:37:29.759539 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 00:37:29.759749 | orchestrator |
2025-04-17 00:37:29.760630 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-04-17 00:37:29.761633 | orchestrator | Thursday 17 April 2025 00:37:29 +0000 (0:00:00.934) 0:08:58.318 ********
2025-04-17 00:37:30.164415 | orchestrator | ok: [testbed-manager]
2025-04-17 00:37:30.576379 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:37:30.576606 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:37:30.577619 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:37:30.578822 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:37:30.579209 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:37:30.580263 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:37:30.580755 | orchestrator |
2025-04-17 00:37:30.581719 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-04-17 00:37:30.582080 | orchestrator | Thursday 17 April 2025 00:37:30 +0000 (0:00:00.815) 0:08:59.134 ********
2025-04-17 00:37:31.683454 | orchestrator | changed: [testbed-manager]
2025-04-17 00:37:31.686215 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:37:31.686825 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:37:31.689167 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:37:31.689709 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:37:31.690418 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:37:31.690892 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:37:31.691741 | orchestrator |
2025-04-17 00:37:31.692200 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-04-17 00:37:31.692888 | orchestrator | Thursday 17 April 2025 00:37:31 +0000 (0:00:01.107) 0:09:00.242 ********
2025-04-17 00:37:32.661536 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 00:37:32.662606 | orchestrator |
2025-04-17 00:37:32.663428 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-04-17 00:37:32.667021 | orchestrator | Thursday 17 April 2025 00:37:32 +0000 (0:00:00.977) 0:09:01.220 ********
2025-04-17 00:37:33.070111 | orchestrator | ok: [testbed-manager]
2025-04-17 00:37:33.470736 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:37:33.471636 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:37:33.472770 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:37:33.473328 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:37:33.476169 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:37:33.476688 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:37:33.477376 | orchestrator |
2025-04-17 00:37:33.477850 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-04-17 00:37:33.478298 | orchestrator | Thursday 17 April 2025 00:37:33 +0000 (0:00:00.809) 0:09:02.030 ********
2025-04-17 00:37:33.871450 | orchestrator | changed: [testbed-manager]
2025-04-17 00:37:34.531321 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:37:34.532117 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:37:34.532162 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:37:34.533527 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:37:34.534203 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:37:34.534591 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:37:34.534623 | orchestrator |
2025-04-17 00:37:34.535807 | orchestrator | PLAY RECAP *********************************************************************
2025-04-17 00:37:34.536033 | orchestrator | 2025-04-17 00:37:34 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-17 00:37:34.536372 | orchestrator | 2025-04-17 00:37:34 | INFO  | Please wait and do not abort execution.
2025-04-17 00:37:34.537070 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-04-17 00:37:34.537966 | orchestrator | testbed-node-0 : ok=168  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-17 00:37:34.538895 | orchestrator | testbed-node-1 : ok=168  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-17 00:37:34.539194 | orchestrator | testbed-node-2 : ok=168  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-17 00:37:34.539624 | orchestrator | testbed-node-3 : ok=167  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-04-17 00:37:34.540256 | orchestrator | testbed-node-4 : ok=167  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-17 00:37:34.540662 | orchestrator | testbed-node-5 : ok=167  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-04-17 00:37:34.541139 | orchestrator |
2025-04-17 00:37:34.543010 | orchestrator | Thursday 17 April 2025 00:37:34 +0000 (0:00:01.060) 0:09:03.091 ********
2025-04-17 00:37:34.543564 | orchestrator | ===============================================================================
2025-04-17 00:37:34.544048 | orchestrator | osism.commons.packages : Install required packages --------------------- 80.77s
2025-04-17 00:37:34.544551 | orchestrator | osism.commons.packages : Upgrade packages ------------------------------ 62.03s
2025-04-17 00:37:34.545101 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.44s
2025-04-17 00:37:34.545435 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 32.68s
2025-04-17 00:37:34.545858 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 18.47s
2025-04-17 00:37:34.546425 | orchestrator | osism.commons.packages : Download upgrade packages --------------------- 15.11s
2025-04-17 00:37:34.546949 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.33s
2025-04-17 00:37:34.547462 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.99s
2025-04-17 00:37:34.547939 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 11.85s
2025-04-17 00:37:34.548243 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.24s
2025-04-17 00:37:34.548677 | orchestrator | osism.services.docker : Install containerd package ---------------------- 8.95s
2025-04-17 00:37:34.549818 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.78s
2025-04-17 00:37:34.550108 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 7.77s
2025-04-17 00:37:34.550681 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.49s
2025-04-17 00:37:34.550981 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.24s
2025-04-17 00:37:34.551377 | orchestrator | osism.services.docker : Add repository ---------------------------------- 6.97s
2025-04-17 00:37:34.551648 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 6.90s
2025-04-17 00:37:34.552008 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.29s
2025-04-17 00:37:34.552596 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 5.76s
2025-04-17 00:37:34.552917 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.66s
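The osism.commons.state tasks in the recap above create a custom facts directory and write the bootstrap status and timestamp into it, so that later runs can read them back as local facts. The exact path and fact layout are not shown in this log; a hedged sketch under the common /etc/ansible/facts.d convention (filename and content are assumptions):

```yaml
# Illustrative only; the real osism.commons.state role defines its own fact names.
- name: Create custom facts directory (illustrative)
  become: true
  ansible.builtin.file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"

- name: Write state into file (illustrative)
  become: true
  ansible.builtin.copy:
    dest: /etc/ansible/facts.d/osism.fact      # filename assumed
    content: '{"bootstrap": {"status": "True"}}'
    mode: "0644"
```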
2025-04-17 00:37:35.230012 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-04-17 00:37:37.056286 | orchestrator | + osism apply network
2025-04-17 00:37:37.056418 | orchestrator | 2025-04-17 00:37:37 | INFO  | Task ee30def9-4323-4b79-8e22-ff6e3ff0b481 (network) was prepared for execution.
2025-04-17 00:37:40.262691 | orchestrator | 2025-04-17 00:37:37 | INFO  | It takes a moment until task ee30def9-4323-4b79-8e22-ff6e3ff0b481 (network) has been started and output is visible here.
2025-04-17 00:37:40.263008 | orchestrator |
2025-04-17 00:37:40.263113 | orchestrator | PLAY [Apply role network] ******************************************************
2025-04-17 00:37:40.266204 | orchestrator |
2025-04-17 00:37:40.422376 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-04-17 00:37:40.422522 | orchestrator | Thursday 17 April 2025 00:37:40 +0000 (0:00:00.199) 0:00:00.199 ********
2025-04-17 00:37:40.422556 | orchestrator | ok: [testbed-manager]
2025-04-17 00:37:40.494323 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:37:40.567655 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:37:40.652164 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:37:40.725827 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:37:40.956585 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:37:40.957414 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:37:40.957459 | orchestrator |
2025-04-17 00:37:40.958255 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-04-17 00:37:40.959005 | orchestrator | Thursday 17 April 2025 00:37:40 +0000 (0:00:00.694) 0:00:00.893 ********
2025-04-17 00:37:42.133858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-17 00:37:42.134227 | orchestrator |
2025-04-17 00:37:42.135232 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-04-17 00:37:42.136251 | orchestrator | Thursday 17 April 2025 00:37:42 +0000 (0:00:01.175) 0:00:02.069 ********
2025-04-17 00:37:43.987671 | orchestrator | ok: [testbed-manager]
2025-04-17 00:37:43.988022 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:37:43.988065 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:37:43.988691 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:37:43.989848 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:37:43.990843 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:37:43.992336 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:37:43.993620 | orchestrator |
2025-04-17 00:37:43.993984 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-04-17 00:37:43.995653 | orchestrator | Thursday 17 April 2025 00:37:43 +0000 (0:00:01.856) 0:00:03.925 ********
2025-04-17 00:37:45.688152 | orchestrator | ok: [testbed-manager]
2025-04-17 00:37:45.688512 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:37:45.689657 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:37:45.690747 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:37:45.691794 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:37:45.693136 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:37:45.693853 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:37:45.694849 | orchestrator |
2025-04-17 00:37:45.696553 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-04-17 00:37:45.697598 | orchestrator | Thursday 17 April 2025 00:37:45 +0000 (0:00:01.695) 0:00:05.621 ********
2025-04-17 00:37:46.166375 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-04-17 00:37:46.741623 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-04-17 00:37:46.741833 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-04-17 00:37:46.742754 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-04-17 00:37:46.743608 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-04-17 00:37:46.744207 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-04-17 00:37:46.744807 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-04-17 00:37:46.745536 | orchestrator |
2025-04-17 00:37:46.746076 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-04-17 00:37:46.746728 | orchestrator | Thursday 17 April 2025 00:37:46 +0000 (0:00:01.058) 0:00:06.679 ********
2025-04-17 00:37:48.584225 | orchestrator | ok: [testbed-manager -> localhost]
2025-04-17 00:37:48.584930 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-04-17 00:37:48.586318 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-04-17 00:37:48.587408 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-04-17 00:37:48.589108 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-04-17 00:37:48.590082 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-04-17 00:37:48.590767 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-04-17 00:37:48.591438 | orchestrator |
2025-04-17 00:37:48.592335 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-04-17 00:37:48.593001 | orchestrator | Thursday 17 April 2025 00:37:48 +0000 (0:00:01.842) 0:00:08.521 ********
2025-04-17 00:37:50.185337 | orchestrator | changed: [testbed-manager]
2025-04-17 00:37:50.186343 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:37:50.186408 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:37:50.186719 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:37:50.187786 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:37:50.188561 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:37:50.190555 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:37:50.191093 | orchestrator |
2025-04-17 00:37:50.191126 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-04-17 00:37:50.191151 | orchestrator | Thursday 17 April 2025 00:37:50 +0000 (0:00:01.597) 0:00:10.119 ********
2025-04-17 00:37:50.618728 | orchestrator | ok: [testbed-manager -> localhost]
2025-04-17 00:37:51.138557 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-04-17 00:37:51.138766 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-04-17 00:37:51.139105 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-04-17 00:37:51.142332 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-04-17 00:37:51.142638 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-04-17 00:37:51.143991 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-04-17 00:37:51.144800 | orchestrator |
2025-04-17 00:37:51.146281 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-04-17 00:37:51.146803 | orchestrator | Thursday 17 April 2025 00:37:51 +0000 (0:00:00.958) 0:00:11.078 ********
2025-04-17 00:37:51.604862 | orchestrator | ok: [testbed-manager]
2025-04-17 00:37:51.696301 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:37:52.292949 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:37:52.294132 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:37:52.297246 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:37:52.298212 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:37:52.298262 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:37:52.299263 | orchestrator |
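The netplan flow above renders the configuration on the control node, copies it to each host, and removes the rendered template again. Per the cleanup pass later in this play, the managed file is /etc/netplan/01-osism.yaml; its contents are not printed in the log. A minimal, invented example of what a netplan file of this general shape looks like (interface name and settings are placeholders):

```yaml
# Illustrative netplan document only; the testbed's real 01-osism.yaml is not shown in this log.
network:
  version: 2
  ethernets:
    ens3:            # example interface, not taken from this deployment
      dhcp4: true
```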
2025-04-17 00:37:52.300239 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-04-17 00:37:52.301076 | orchestrator | Thursday 17 April 2025 00:37:52 +0000 (0:00:01.149) 0:00:12.227 ********
2025-04-17 00:37:52.449442 | orchestrator | skipping: [testbed-manager]
2025-04-17 00:37:52.524201 | orchestrator | skipping: [testbed-node-0]
2025-04-17 00:37:52.599445 | orchestrator | skipping: [testbed-node-1]
2025-04-17 00:37:52.676928 | orchestrator | skipping: [testbed-node-2]
2025-04-17 00:37:52.747373 | orchestrator | skipping: [testbed-node-3]
2025-04-17 00:37:53.034445 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:37:53.035352 | orchestrator | skipping: [testbed-node-5]
2025-04-17 00:37:53.036326 | orchestrator |
2025-04-17 00:37:53.040595 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-04-17 00:37:54.966594 | orchestrator | Thursday 17 April 2025 00:37:53 +0000 (0:00:00.743) 0:00:12.971 ********
2025-04-17 00:37:54.966781 | orchestrator | ok: [testbed-manager]
2025-04-17 00:37:54.967097 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:37:54.972744 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:37:54.973031 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:37:54.973069 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:37:54.974122 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:37:54.975369 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:37:54.977336 | orchestrator |
2025-04-17 00:37:54.978778 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-04-17 00:37:54.979497 | orchestrator | Thursday 17 April 2025 00:37:54 +0000 (0:00:01.928) 0:00:14.900 ********
2025-04-17 00:37:55.708552 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-04-17 00:37:56.768120 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-04-17 00:37:56.768603 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-04-17 00:37:56.769570 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-04-17 00:37:56.770767 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-04-17 00:37:56.772548 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-04-17 00:37:56.773621 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-04-17 00:37:56.774656 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-04-17 00:37:56.776148 | orchestrator |
2025-04-17 00:37:56.777088 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-04-17 00:37:56.777704 | orchestrator | Thursday 17 April 2025 00:37:56 +0000 (0:00:01.802) 0:00:16.702 ********
2025-04-17 00:37:58.230156 | orchestrator | ok: [testbed-manager]
2025-04-17 00:37:58.230390 | orchestrator | changed: [testbed-node-0]
2025-04-17 00:37:58.230995 | orchestrator | changed: [testbed-node-1]
2025-04-17 00:37:58.235117 | orchestrator | changed: [testbed-node-3]
2025-04-17 00:37:58.235492 | orchestrator | changed: [testbed-node-2]
2025-04-17 00:37:58.236518 | orchestrator | changed: [testbed-node-4]
2025-04-17 00:37:58.238079 | orchestrator | changed: [testbed-node-5]
2025-04-17 00:37:58.238969 | orchestrator |
2025-04-17 00:37:58.239710 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-04-17 00:37:58.241115 | orchestrator | Thursday 17 April 2025 00:37:58 +0000 (0:00:01.465) 0:00:18.167 ********
2025-04-17 00:37:59.642008 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-17 00:37:59.644507 | orchestrator |
2025-04-17 00:37:59.644559 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-04-17 00:37:59.646105 | orchestrator | Thursday 17 April 2025 00:37:59 +0000 (0:00:00.949) 0:00:19.574 ********
2025-04-17 00:38:00.163808 | orchestrator | ok: [testbed-manager]
2025-04-17 00:38:00.587503 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:38:00.588078 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:38:00.591089 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:38:00.592014 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:38:00.593159 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:38:00.593932 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:38:00.596106 | orchestrator |
2025-04-17 00:38:00.596392 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-04-17 00:38:00.598225 | orchestrator | Thursday 17 April 2025 00:38:00 +0000 (0:00:00.771) 0:00:20.524 ********
2025-04-17 00:38:00.738124 | orchestrator | ok: [testbed-manager]
2025-04-17 00:38:00.815182 | orchestrator | ok: [testbed-node-0]
2025-04-17 00:38:01.067041 | orchestrator | ok: [testbed-node-1]
2025-04-17 00:38:01.151791 | orchestrator | ok: [testbed-node-2]
2025-04-17 00:38:01.233293 | orchestrator | ok: [testbed-node-3]
2025-04-17 00:38:01.361830 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:38:01.362570 | orchestrator | ok: [testbed-node-5]
2025-04-17 00:38:01.363652 | orchestrator |
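The dispatcher scripts installed above hook into networkd-dispatcher: iptables.sh on the manager and vxlan.sh on every host, both taken from /opt/configuration/network. A sketch of that copy loop, reconstructed from the loop items visible in the log (the destination base path is an assumption, as only the relative dest values appear above):

```yaml
# Loop items mirror the (item={...}) entries in the log; base path and mode assumed.
- name: Copy dispatcher scripts
  become: true
  ansible.builtin.copy:
    src: "{{ item.src }}"
    dest: "/etc/networkd-dispatcher/{{ item.dest }}"   # assumed base path
    mode: "0755"
  loop:
    - {dest: 'routable.d/iptables.sh', src: '/opt/configuration/network/iptables.sh'}  # manager only
    - {dest: 'routable.d/vxlan.sh', src: '/opt/configuration/network/vxlan.sh'}
```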
2025-04-17 00:38:01.365089 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-04-17 00:38:01.365838 | orchestrator | Thursday 17 April 2025 00:38:01 +0000 (0:00:00.771) 0:00:21.295 ********
2025-04-17 00:38:01.774260 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-04-17 00:38:01.859819 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-04-17 00:38:01.860078 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-04-17 00:38:01.860483 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-04-17 00:38:02.347550 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-04-17 00:38:02.348580 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-04-17 00:38:02.351227 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-04-17 00:38:02.353058 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-04-17 00:38:02.353619 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-04-17 00:38:02.354591 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-04-17 00:38:02.355681 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-04-17 00:38:02.356306 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-04-17 00:38:02.358002 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-04-17 00:38:02.359128 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-04-17 00:38:02.360023 | orchestrator |
2025-04-17 00:38:02.360919 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-04-17 00:38:02.361632 | orchestrator | Thursday 17 April 2025 00:38:02 +0000 (0:00:00.988) 0:00:22.284 ********
2025-04-17 00:38:02.672445 | orchestrator | skipping: [testbed-manager]
2025-04-17 00:38:02.754512 | orchestrator | skipping: [testbed-node-0]
2025-04-17 00:38:02.834346 | orchestrator | skipping: [testbed-node-1]
2025-04-17 00:38:02.915709 | orchestrator | skipping: [testbed-node-2]
2025-04-17 00:38:02.998356 | orchestrator | skipping: [testbed-node-3]
2025-04-17 00:38:04.165815 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:38:04.168118 | orchestrator | skipping: [testbed-node-5]
2025-04-17 00:38:04.170924 | orchestrator |
2025-04-17 00:38:04.171992 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-04-17 00:38:04.172045 | orchestrator | Thursday 17 April 2025 00:38:04 +0000 (0:00:01.815) 0:00:24.100 ********
2025-04-17 00:38:04.326493 | orchestrator | skipping: [testbed-manager]
2025-04-17 00:38:04.411635 | orchestrator | skipping: [testbed-node-0]
2025-04-17 00:38:04.686175 | orchestrator | skipping: [testbed-node-1]
2025-04-17 00:38:04.767628 | orchestrator | skipping: [testbed-node-2]
2025-04-17 00:38:04.850985 | orchestrator | skipping: [testbed-node-3]
2025-04-17 00:38:04.894560 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:38:04.895056 | orchestrator | skipping: [testbed-node-5]
2025-04-17 00:38:04.896186 | orchestrator |
2025-04-17 00:38:04.897357 | orchestrator | PLAY RECAP *********************************************************************
2025-04-17 00:38:04.897452 | orchestrator | 2025-04-17 00:38:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-17 00:38:04.897930 | orchestrator | 2025-04-17 00:38:04 | INFO  | Please wait and do not abort execution.
2025-04-17 00:38:04.898005 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-17 00:38:04.898373 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-17 00:38:04.898686 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-17 00:38:04.899118 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-17 00:38:04.899538 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-17 00:38:04.899927 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-17 00:38:04.900299 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-17 00:38:04.900705 | orchestrator |
2025-04-17 00:38:04.901163 | orchestrator | Thursday 17 April 2025 00:38:04 +0000 (0:00:00.731) 0:00:24.831 ********
2025-04-17 00:38:04.901446 | orchestrator | ===============================================================================
2025-04-17 00:38:04.901826 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.93s
2025-04-17 00:38:04.902249 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.86s
2025-04-17 00:38:04.902535 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.84s
2025-04-17 00:38:04.903190 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.82s
2025-04-17 00:38:04.904067 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.80s
2025-04-17 00:38:04.904918 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.70s
2025-04-17 00:38:04.905284 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.60s
2025-04-17 00:38:04.905618 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.47s
2025-04-17 00:38:04.905988 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.41s
2025-04-17 00:38:04.906293 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.18s
2025-04-17 00:38:04.906631 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.15s
2025-04-17 00:38:04.906961 | orchestrator | osism.commons.network : Create required directories --------------------- 1.06s
2025-04-17 00:38:04.907333 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 0.99s
2025-04-17 00:38:04.907591 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 0.96s
2025-04-17 00:38:04.908053 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.95s
2025-04-17 00:38:04.908384 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.77s
2025-04-17 00:38:04.908983 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.74s
2025-04-17 00:38:04.909482 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.73s
2025-04-17 00:38:04.909999 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.69s
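The cleanup pass in the play above removes netplan files the role does not manage: 50-cloud-init.yaml is deleted on every host, while the managed 01-osism.yaml is skipped. A hedged sketch of such a task (the keep-list logic is an assumption, not taken from the role):

```yaml
# Illustrative cleanup; network_configured_files is the fact set earlier in this play.
- name: Remove unused configuration files
  become: true
  ansible.builtin.file:
    path: "{{ item }}"
    state: absent
  loop: "{{ network_configured_files }}"
  when: item | basename != '01-osism.yaml'   # keep the managed file (condition assumed)
```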
2025-04-17 00:38:05.429871 | orchestrator | + osism apply wireguard
2025-04-17 00:38:06.791454 | orchestrator | 2025-04-17 00:38:06 | INFO  | Task c860817c-d5cf-49ed-b15c-14d12c79126c (wireguard) was prepared for execution.
2025-04-17 00:38:09.797103 | orchestrator | 2025-04-17 00:38:06 | INFO  | It takes a moment until task c860817c-d5cf-49ed-b15c-14d12c79126c (wireguard) has been started and output is visible here.
2025-04-17 00:38:09.797283 | orchestrator |
2025-04-17 00:38:09.797557 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-04-17 00:38:09.797586 | orchestrator |
2025-04-17 00:38:09.797607 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-04-17 00:38:09.799041 | orchestrator | Thursday 17 April 2025 00:38:09 +0000 (0:00:00.157) 0:00:00.157 ********
2025-04-17 00:38:11.223955 | orchestrator | ok: [testbed-manager]
2025-04-17 00:38:11.224249 | orchestrator |
2025-04-17 00:38:11.224357 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-04-17 00:38:11.224400 | orchestrator | Thursday 17 April 2025 00:38:11 +0000 (0:00:01.429) 0:00:01.587 ********
2025-04-17 00:38:16.885964 | orchestrator | changed: [testbed-manager]
2025-04-17 00:38:16.886828 | orchestrator |
2025-04-17 00:38:17.410558 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-04-17 00:38:17.410689 | orchestrator | Thursday 17 April 2025 00:38:16 +0000 (0:00:05.662) 0:00:07.250 ********
2025-04-17 00:38:17.410727 | orchestrator | changed: [testbed-manager]
2025-04-17 00:38:17.411458 | orchestrator |
2025-04-17 00:38:17.411676 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-04-17 00:38:17.412015 | orchestrator | Thursday 17 April 2025 00:38:17 +0000 (0:00:00.526) 0:00:07.776 ********
2025-04-17 00:38:17.811118 | orchestrator | changed: [testbed-manager]
2025-04-17 00:38:17.812193 | orchestrator |
2025-04-17 00:38:17.813346 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-04-17 00:38:17.814414 | orchestrator | Thursday 17 April 2025 00:38:17 +0000 (0:00:00.398) 0:00:08.175 ********
2025-04-17 00:38:18.312374 | orchestrator | ok: [testbed-manager]
2025-04-17 00:38:18.314162 | orchestrator |
2025-04-17 00:38:18.314341 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-04-17 00:38:18.315119 | orchestrator | Thursday 17 April 2025 00:38:18 +0000 (0:00:00.503) 0:00:08.678 ********
2025-04-17 00:38:18.820619 | orchestrator | ok: [testbed-manager]
2025-04-17 00:38:18.822274 | orchestrator |
2025-04-17 00:38:18.823039 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-04-17 00:38:18.823955 | orchestrator | Thursday 17 April 2025 00:38:18 +0000 (0:00:00.506) 0:00:09.185 ********
2025-04-17 00:38:19.242119 | orchestrator | ok: [testbed-manager]
2025-04-17 00:38:19.242381 | orchestrator |
2025-04-17 00:38:19.244569 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-04-17 00:38:20.350793 | orchestrator | Thursday 17 April 2025 00:38:19 +0000 (0:00:00.422) 0:00:09.607 ********
2025-04-17 00:38:20.351012 | orchestrator | changed: [testbed-manager]
2025-04-17 00:38:20.351628 | orchestrator |
2025-04-17 00:38:20.352133 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-04-17 00:38:20.352878 | orchestrator | Thursday 17 April 2025 00:38:20 +0000 (0:00:01.107) 0:00:10.714 ********
2025-04-17 00:38:21.258791 | orchestrator | changed: [testbed-manager] => (item=None)
2025-04-17 00:38:21.258973 | orchestrator | changed: [testbed-manager]
2025-04-17 00:38:21.260653 | orchestrator |
2025-04-17 00:38:21.261297 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-04-17 00:38:21.261736 | orchestrator | Thursday 17 April 2025 00:38:21 +0000 (0:00:00.907) 0:00:11.622 ********
2025-04-17 00:38:22.964160 | orchestrator | changed: [testbed-manager]
2025-04-17 00:38:22.965401 | orchestrator |
2025-04-17 00:38:22.966236 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-04-17 00:38:22.967994 | orchestrator | Thursday 17 April 2025 00:38:22 +0000 (0:00:01.705) 0:00:13.328 ********
2025-04-17 00:38:23.898762 | orchestrator | changed: [testbed-manager]
2025-04-17 00:38:23.899679 | orchestrator |
2025-04-17 00:38:23.899728 | orchestrator | PLAY RECAP *********************************************************************
2025-04-17 00:38:23.899767 | orchestrator | 2025-04-17 00:38:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-17 00:38:23.900032 | orchestrator | 2025-04-17 00:38:23 | INFO  | Please wait and do not abort execution.
2025-04-17 00:38:23.900067 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-04-17 00:38:23.900745 | orchestrator |
2025-04-17 00:38:23.901510 | orchestrator | Thursday 17 April 2025 00:38:23 +0000 (0:00:00.934) 0:00:14.262 ********
2025-04-17 00:38:23.902089 | orchestrator | ===============================================================================
2025-04-17 00:38:23.902805 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.66s
2025-04-17 00:38:23.903259 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.71s
2025-04-17 00:38:23.904008 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.43s
2025-04-17 00:38:23.904538 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.11s
2025-04-17 00:38:23.904820 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s
2025-04-17 00:38:23.905551 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.91s
2025-04-17 00:38:23.905999 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.53s
2025-04-17 00:38:23.906081 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.51s
2025-04-17 00:38:23.906414 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.50s
2025-04-17 00:38:23.906615 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s
2025-04-17 00:38:23.906717 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.40s
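The wireguard play above generates server and preshared keys on the manager, templates /etc/wireguard/wg0.conf plus a client configuration, and enables wg-quick@wg0. The rendered configuration never appears in the log; a sketch of the copy step with placeholder keys, addresses, and port only (none of these values come from this deployment):

```yaml
# Illustrative only; the real osism.services.wireguard role templates this from its own variables.
- name: Copy wg0.conf configuration file (illustrative)
  become: true
  ansible.builtin.copy:
    dest: /etc/wireguard/wg0.conf
    mode: "0600"                       # keys must not be world-readable
    content: |
      [Interface]
      Address = 192.168.0.1/24
      ListenPort = 51820
      PrivateKey = <server-private-key>

      [Peer]
      PublicKey = <client-public-key>
      PresharedKey = <preshared-key>
      AllowedIPs = 192.168.0.2/32
  notify: Restart wg0 service
```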
Dload Upload Total Spent Left Speed 2025-04-17 00:38:24.514567 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 160 0 --:--:-- --:--:-- --:--:-- 162 2025-04-17 00:38:24.530258 | orchestrator | + osism apply --environment custom workarounds 2025-04-17 00:38:25.866948 | orchestrator | 2025-04-17 00:38:25 | INFO  | Trying to run play workarounds in environment custom 2025-04-17 00:38:25.913353 | orchestrator | 2025-04-17 00:38:25 | INFO  | Task 2269b49b-7b94-4e6c-a71d-574653542725 (workarounds) was prepared for execution. 2025-04-17 00:38:28.928509 | orchestrator | 2025-04-17 00:38:25 | INFO  | It takes a moment until task 2269b49b-7b94-4e6c-a71d-574653542725 (workarounds) has been started and output is visible here. 2025-04-17 00:38:28.928782 | orchestrator | 2025-04-17 00:38:28.929077 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-17 00:38:28.929114 | orchestrator | 2025-04-17 00:38:28.930243 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-04-17 00:38:28.930368 | orchestrator | Thursday 17 April 2025 00:38:28 +0000 (0:00:00.138) 0:00:00.138 ******** 2025-04-17 00:38:29.084709 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-04-17 00:38:29.164516 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-04-17 00:38:29.245565 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-04-17 00:38:29.327637 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-04-17 00:38:29.419134 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-04-17 00:38:29.660370 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-04-17 00:38:29.661006 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-04-17 00:38:29.662303 | orchestrator | 2025-04-17 00:38:29.663984 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-04-17 00:38:29.664087 | orchestrator | 2025-04-17 00:38:29.664544 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-04-17 00:38:29.664584 | orchestrator | Thursday 17 April 2025 00:38:29 +0000 (0:00:00.734) 0:00:00.873 ******** 2025-04-17 00:38:32.194358 | orchestrator | ok: [testbed-manager] 2025-04-17 00:38:32.196539 | orchestrator | 2025-04-17 00:38:32.196594 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-04-17 00:38:32.198119 | orchestrator | 2025-04-17 00:38:32.198922 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-04-17 00:38:32.198955 | orchestrator | Thursday 17 April 2025 00:38:32 +0000 (0:00:02.527) 0:00:03.401 ******** 2025-04-17 00:38:33.898635 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:38:33.898966 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:38:33.899967 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:38:33.900563 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:38:33.901443 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:38:33.901872 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:38:33.902679 | orchestrator | 2025-04-17 00:38:33.903190 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-04-17 00:38:33.903747 | orchestrator | 2025-04-17 
00:38:33.904248 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-04-17 00:38:33.904879 | orchestrator | Thursday 17 April 2025 00:38:33 +0000 (0:00:01.707) 0:00:05.108 ******** 2025-04-17 00:38:35.350868 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-17 00:38:35.351119 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-17 00:38:35.352754 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-17 00:38:35.353927 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-17 00:38:35.355727 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-17 00:38:35.356421 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-17 00:38:35.358408 | orchestrator | 2025-04-17 00:38:35.359481 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-04-17 00:38:35.360339 | orchestrator | Thursday 17 April 2025 00:38:35 +0000 (0:00:01.451) 0:00:06.560 ******** 2025-04-17 00:38:39.084114 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:38:39.084382 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:38:39.085076 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:38:39.087712 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:38:39.088078 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:38:39.088107 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:38:39.088128 | orchestrator | 2025-04-17 00:38:39.089033 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-04-17 00:38:39.089814 | orchestrator | Thursday 17 April 2025 00:38:39 +0000 (0:00:03.735) 0:00:10.296 ******** 2025-04-17 00:38:39.245835 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:38:39.318448 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:38:39.416261 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:38:39.631772 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:38:39.783595 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:38:39.784624 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:38:39.784677 | orchestrator | 2025-04-17 00:38:39.786478 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-04-17 00:38:39.787312 | orchestrator | 2025-04-17 00:38:39.788354 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-04-17 00:38:39.789666 | orchestrator | Thursday 17 April 2025 00:38:39 +0000 (0:00:00.697) 0:00:10.993 ******** 2025-04-17 00:38:41.366308 | orchestrator | changed: [testbed-manager] 2025-04-17 00:38:41.370322 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:38:41.372485 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:38:41.372547 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:38:41.372565 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:38:41.372591 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:38:41.373044 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:38:41.373601 | orchestrator | 2025-04-17 00:38:41.374311 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-04-17 00:38:41.374869 | orchestrator | Thursday 17 April 2025 00:38:41 +0000 (0:00:01.584) 0:00:12.578 ******** 2025-04-17 00:38:43.081625 | orchestrator | changed: [testbed-manager] 2025-04-17 00:38:43.081828 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:38:43.082012 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:38:43.082460 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:38:43.084820 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:38:43.085222 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:38:43.085328 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:38:43.085979 | orchestrator | 2025-04-17 00:38:43.086616 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-04-17 00:38:43.087012 | orchestrator | Thursday 17 April 2025 00:38:43 +0000 (0:00:01.713) 0:00:14.291 ******** 2025-04-17 00:38:44.498389 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:38:44.498679 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:38:44.499316 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:38:44.500543 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:38:44.501977 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:38:44.502434 | orchestrator | ok: [testbed-manager] 2025-04-17 00:38:44.504400 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:38:44.504882 | orchestrator | 2025-04-17 00:38:44.505669 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-04-17 00:38:44.506347 | orchestrator | Thursday 17 April 2025 00:38:44 +0000 (0:00:01.419) 0:00:15.711 ******** 2025-04-17 00:38:46.201480 | orchestrator | changed: [testbed-manager] 2025-04-17 00:38:46.203617 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:38:46.205062 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:38:46.205115 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:38:46.206109 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:38:46.207601 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:38:46.208314 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:38:46.209365 | orchestrator | 2025-04-17 00:38:46.210257 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-04-17 00:38:46.211204 | orchestrator | Thursday 17 April 2025 00:38:46 +0000 (0:00:01.701) 0:00:17.412 ******** 2025-04-17 00:38:46.349601 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:38:46.428814 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:38:46.499964 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:38:46.571640 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:38:46.780510 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:38:46.922985 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:38:46.927441 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:38:46.930158 | orchestrator | 2025-04-17 00:38:46.930221 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-04-17 00:38:46.930255 | orchestrator | 2025-04-17 00:38:49.345221 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-04-17 00:38:49.345404 | orchestrator | Thursday 17 April 2025 00:38:46 +0000 (0:00:00.720) 0:00:18.132 ******** 2025-04-17 00:38:49.345448 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:38:49.345532 
| orchestrator | ok: [testbed-manager] 2025-04-17 00:38:49.347055 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:38:49.347638 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:38:49.348754 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:38:49.350304 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:38:49.350714 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:38:49.351471 | orchestrator | 2025-04-17 00:38:49.352471 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 00:38:49.352571 | orchestrator | 2025-04-17 00:38:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-17 00:38:49.353587 | orchestrator | 2025-04-17 00:38:49 | INFO  | Please wait and do not abort execution. 2025-04-17 00:38:49.353647 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-17 00:38:49.353785 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:38:49.354304 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:38:49.354751 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:38:49.355338 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:38:49.355674 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:38:49.356220 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:38:49.357175 | orchestrator | 2025-04-17 00:38:49.357409 | orchestrator | Thursday 17 April 2025 00:38:49 +0000 (0:00:02.424) 0:00:20.557 ******** 2025-04-17 00:38:49.357822 | orchestrator | =============================================================================== 2025-04-17 00:38:49.358309 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.74s 2025-04-17 00:38:49.359680 | orchestrator | Apply netplan configuration --------------------------------------------- 2.53s 2025-04-17 00:38:49.360526 | orchestrator | Install python3-docker -------------------------------------------------- 2.42s 2025-04-17 00:38:49.361023 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.71s 2025-04-17 00:38:49.361971 | orchestrator | Apply netplan configuration --------------------------------------------- 1.71s 2025-04-17 00:38:49.362275 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.70s 2025-04-17 00:38:49.363097 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.58s 2025-04-17 00:38:49.363531 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.45s 2025-04-17 00:38:49.363561 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.42s 2025-04-17 00:38:49.364440 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.73s 2025-04-17 00:38:49.364885 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.72s 2025-04-17 00:38:49.365265 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.70s 2025-04-17 00:38:49.880766 | orchestrator | + osism 
apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-04-17 00:38:51.316066 | orchestrator | 2025-04-17 00:38:51 | INFO  | Task 6be31fe9-ede6-409b-a097-773bae89bff6 (reboot) was prepared for execution. 2025-04-17 00:38:54.309306 | orchestrator | 2025-04-17 00:38:51 | INFO  | It takes a moment until task 6be31fe9-ede6-409b-a097-773bae89bff6 (reboot) has been started and output is visible here. 2025-04-17 00:38:54.309551 | orchestrator | 2025-04-17 00:38:54.309793 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-17 00:38:54.311162 | orchestrator | 2025-04-17 00:38:54.313209 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-17 00:38:54.313634 | orchestrator | Thursday 17 April 2025 00:38:54 +0000 (0:00:00.141) 0:00:00.141 ******** 2025-04-17 00:38:54.403712 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:38:54.404073 | orchestrator | 2025-04-17 00:38:54.404122 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-17 00:38:54.404413 | orchestrator | Thursday 17 April 2025 00:38:54 +0000 (0:00:00.098) 0:00:00.240 ******** 2025-04-17 00:38:55.303344 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:38:55.303599 | orchestrator | 2025-04-17 00:38:55.304687 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-17 00:38:55.305349 | orchestrator | Thursday 17 April 2025 00:38:55 +0000 (0:00:00.898) 0:00:01.138 ******** 2025-04-17 00:38:55.421230 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:38:55.421381 | orchestrator | 2025-04-17 00:38:55.422181 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-17 00:38:55.422458 | orchestrator | 2025-04-17 00:38:55.423038 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-17 00:38:55.423381 | orchestrator | Thursday 17 April 2025 00:38:55 +0000 (0:00:00.114) 0:00:01.253 ******** 2025-04-17 00:38:55.532097 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:38:55.532361 | orchestrator | 2025-04-17 00:38:55.533462 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-17 00:38:55.534005 | orchestrator | Thursday 17 April 2025 00:38:55 +0000 (0:00:00.114) 0:00:01.367 ******** 2025-04-17 00:38:56.167211 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:38:56.167857 | orchestrator | 2025-04-17 00:38:56.168530 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-17 00:38:56.169729 | orchestrator | Thursday 17 April 2025 00:38:56 +0000 (0:00:00.634) 0:00:02.002 ******** 2025-04-17 00:38:56.280179 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:38:56.280413 | orchestrator | 2025-04-17 00:38:56.281700 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-17 00:38:56.282300 | orchestrator | 2025-04-17 00:38:56.284289 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-17 00:38:56.389856 | orchestrator | Thursday 17 April 2025 00:38:56 +0000 (0:00:00.111) 0:00:02.113 ******** 2025-04-17 00:38:56.390042 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:38:56.390778 | orchestrator | 2025-04-17 00:38:56.393159 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] 
****************** 2025-04-17 00:38:57.183657 | orchestrator | Thursday 17 April 2025 00:38:56 +0000 (0:00:00.111) 0:00:02.224 ******** 2025-04-17 00:38:57.183866 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:38:57.184480 | orchestrator | 2025-04-17 00:38:57.184529 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-17 00:38:57.185062 | orchestrator | Thursday 17 April 2025 00:38:57 +0000 (0:00:00.794) 0:00:03.019 ******** 2025-04-17 00:38:57.307488 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:38:57.308370 | orchestrator | 2025-04-17 00:38:57.308418 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-17 00:38:57.308641 | orchestrator | 2025-04-17 00:38:57.309464 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-17 00:38:57.311075 | orchestrator | Thursday 17 April 2025 00:38:57 +0000 (0:00:00.124) 0:00:03.143 ******** 2025-04-17 00:38:57.421878 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:38:57.422244 | orchestrator | 2025-04-17 00:38:57.423109 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-17 00:38:57.423816 | orchestrator | Thursday 17 April 2025 00:38:57 +0000 (0:00:00.112) 0:00:03.256 ******** 2025-04-17 00:38:58.091876 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:38:58.092116 | orchestrator | 2025-04-17 00:38:58.092563 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-17 00:38:58.093338 | orchestrator | Thursday 17 April 2025 00:38:58 +0000 (0:00:00.668) 0:00:03.925 ******** 2025-04-17 00:38:58.202292 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:38:58.205043 | orchestrator | 2025-04-17 00:38:58.205221 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-17 00:38:58.205990 | orchestrator | 2025-04-17 00:38:58.206307 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-17 00:38:58.207151 | orchestrator | Thursday 17 April 2025 00:38:58 +0000 (0:00:00.109) 0:00:04.034 ******** 2025-04-17 00:38:58.313484 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:38:58.313675 | orchestrator | 2025-04-17 00:38:58.314305 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-17 00:38:58.315771 | orchestrator | Thursday 17 April 2025 00:38:58 +0000 (0:00:00.113) 0:00:04.148 ******** 2025-04-17 00:38:58.954321 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:38:58.954998 | orchestrator | 2025-04-17 00:38:58.957458 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-17 00:38:59.062881 | orchestrator | Thursday 17 April 2025 00:38:58 +0000 (0:00:00.639) 0:00:04.787 ******** 2025-04-17 00:38:59.063118 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:38:59.063196 | orchestrator | 2025-04-17 00:38:59.064380 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-17 00:38:59.065363 | orchestrator | 2025-04-17 00:38:59.065672 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-17 00:38:59.067197 | orchestrator | Thursday 17 April 2025 00:38:59 +0000 (0:00:00.111) 0:00:04.899 ******** 2025-04-17 00:38:59.165484 | orchestrator | skipping: 
[testbed-node-5] 2025-04-17 00:38:59.166182 | orchestrator | 2025-04-17 00:38:59.166228 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-17 00:38:59.167259 | orchestrator | Thursday 17 April 2025 00:38:59 +0000 (0:00:00.101) 0:00:05.001 ******** 2025-04-17 00:38:59.784035 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:38:59.784437 | orchestrator | 2025-04-17 00:38:59.785839 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-17 00:38:59.786769 | orchestrator | Thursday 17 April 2025 00:38:59 +0000 (0:00:00.617) 0:00:05.618 ******** 2025-04-17 00:38:59.820668 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:38:59.824610 | orchestrator | 2025-04-17 00:38:59.825431 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 00:38:59.826967 | orchestrator | 2025-04-17 00:38:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-17 00:38:59.827044 | orchestrator | 2025-04-17 00:38:59 | INFO  | Please wait and do not abort execution. 2025-04-17 00:38:59.831284 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:38:59.832596 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:38:59.834098 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:38:59.834174 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:38:59.834258 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:38:59.834851 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:38:59.835386 | orchestrator | 2025-04-17 00:38:59.836018 | orchestrator | Thursday 17 April 2025 00:38:59 +0000 (0:00:00.037) 0:00:05.656 ******** 2025-04-17 00:38:59.836423 | orchestrator | =============================================================================== 2025-04-17 00:38:59.837880 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.25s 2025-04-17 00:39:00.317678 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.65s 2025-04-17 00:39:00.317794 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.61s 2025-04-17 00:39:00.317819 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-04-17 00:39:01.866376 | orchestrator | 2025-04-17 00:39:01 | INFO  | Task 539ae589-c661-4c4b-962e-41e6bc870fde (wait-for-connection) was prepared for execution. 2025-04-17 00:39:05.054306 | orchestrator | 2025-04-17 00:39:01 | INFO  | It takes a moment until task 539ae589-c661-4c4b-962e-41e6bc870fde (wait-for-connection) has been started and output is visible here. 
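The reboot play above runs once per node (consistent with serial: 1) and skips both guard tasks: the confirmation check because -e ireallymeanit=yes was passed, and the "wait for the reboot to complete" variant because a dedicated wait-for-connection play follows instead. A minimal sketch of this fire-and-forget reboot, assuming the classic async/poll trick; the actual OSISM play may differ in detail:

    # Sketch only: task names mirror the log, the implementation is assumed.
    - name: Reboot systems
      hosts: testbed-nodes
      gather_facts: false
      serial: 1
      tasks:
        - name: Exit playbook, if user did not mean to reboot systems
          ansible.builtin.fail:
            msg: "Pass -e ireallymeanit=yes to confirm the reboot."
          when: ireallymeanit | default('no') != 'yes'

        - name: Reboot system - do not wait for the reboot to complete
          ansible.builtin.shell: sleep 2 && systemctl reboot
          async: 1   # return immediately instead of waiting for the command
          poll: 0    # do not poll the async job; the node is going down
          become: true

Returning before the node actually goes down keeps the play from hanging on a dropped SSH connection; the separate play below handles the wait.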
2025-04-17 00:39:05.054490 | orchestrator | 2025-04-17 00:39:05.058881 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-04-17 00:39:05.058965 | orchestrator | 2025-04-17 00:39:05.058982 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-04-17 00:39:05.059009 | orchestrator | Thursday 17 April 2025 00:39:05 +0000 (0:00:00.184) 0:00:00.184 ******** 2025-04-17 00:39:17.606541 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:39:17.606740 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:39:17.606767 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:39:17.606782 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:39:17.606796 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:39:17.606810 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:39:17.606825 | orchestrator | 2025-04-17 00:39:17.606840 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 00:39:17.606890 | orchestrator | 2025-04-17 00:39:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-17 00:39:17.608986 | orchestrator | 2025-04-17 00:39:17 | INFO  | Please wait and do not abort execution. 2025-04-17 00:39:17.609028 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 00:39:17.611048 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 00:39:17.611080 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 00:39:17.611095 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 00:39:17.611139 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 00:39:17.611154 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 00:39:17.611168 | orchestrator | 2025-04-17 00:39:17.611182 | orchestrator | Thursday 17 April 2025 00:39:17 +0000 (0:00:12.546) 0:00:12.730 ******** 2025-04-17 00:39:17.611204 | orchestrator | =============================================================================== 2025-04-17 00:39:18.102683 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.55s 2025-04-17 00:39:18.102877 | orchestrator | + osism apply hddtemp 2025-04-17 00:39:19.480260 | orchestrator | 2025-04-17 00:39:19 | INFO  | Task 65d6e92c-0479-46e0-b1e1-097c7a082a40 (hddtemp) was prepared for execution. 2025-04-17 00:39:22.633825 | orchestrator | 2025-04-17 00:39:19 | INFO  | It takes a moment until task 65d6e92c-0479-46e0-b1e1-097c7a082a40 (hddtemp) has been started and output is visible here. 
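The wait-for-connection play is the second half of that pattern: after the async reboot returns immediately, this play polls each node until SSH is usable again (about 12.5 seconds here). A sketch with illustrative timeouts; the parameter values are assumptions, not taken from the OSISM play:

    - name: Wait until remote systems are reachable
      hosts: testbed-nodes
      gather_facts: false
      tasks:
        - name: Wait until remote system is reachable
          ansible.builtin.wait_for_connection:
            delay: 5       # give the nodes a moment to actually go down first
            timeout: 600   # fail if a node is not back within 10 minutes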
2025-04-17 00:39:22.634077 | orchestrator | 2025-04-17 00:39:22.635601 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-04-17 00:39:22.637886 | orchestrator | 2025-04-17 00:39:22.637969 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-04-17 00:39:22.639488 | orchestrator | Thursday 17 April 2025 00:39:22 +0000 (0:00:00.195) 0:00:00.195 ******** 2025-04-17 00:39:22.779431 | orchestrator | ok: [testbed-manager] 2025-04-17 00:39:22.865559 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:39:22.940506 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:39:23.015058 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:39:23.093024 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:39:23.319443 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:39:23.319677 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:39:23.320438 | orchestrator | 2025-04-17 00:39:23.321660 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-04-17 00:39:23.321893 | orchestrator | Thursday 17 April 2025 00:39:23 +0000 (0:00:00.684) 0:00:00.879 ******** 2025-04-17 00:39:24.452580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 00:39:24.452857 | orchestrator | 2025-04-17 00:39:24.453715 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-04-17 00:39:24.454611 | orchestrator | Thursday 17 April 2025 00:39:24 +0000 (0:00:01.132) 0:00:02.012 ******** 2025-04-17 00:39:26.362194 | orchestrator | ok: [testbed-manager] 2025-04-17 00:39:26.362568 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:39:26.363137 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:39:26.366343 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:39:26.368288 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:39:26.368318 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:39:26.368338 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:39:26.368657 | orchestrator | 2025-04-17 00:39:26.369538 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-04-17 00:39:26.369991 | orchestrator | Thursday 17 April 2025 00:39:26 +0000 (0:00:01.912) 0:00:03.924 ******** 2025-04-17 00:39:27.011245 | orchestrator | changed: [testbed-manager] 2025-04-17 00:39:27.099603 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:39:27.552410 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:39:27.552536 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:39:27.553045 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:39:27.553697 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:39:27.554235 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:39:27.555235 | orchestrator | 2025-04-17 00:39:27.555332 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-04-17 00:39:27.555709 | orchestrator | Thursday 17 April 2025 00:39:27 +0000 (0:00:01.184) 0:00:05.108 ******** 2025-04-17 00:39:29.121300 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:39:29.125049 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:39:29.125980 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:39:29.126990 | orchestrator | ok: [testbed-node-3] 2025-04-17 
00:39:29.128259 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:39:29.128894 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:39:29.130091 | orchestrator | ok: [testbed-manager] 2025-04-17 00:39:29.130797 | orchestrator | 2025-04-17 00:39:29.131335 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-04-17 00:39:29.131953 | orchestrator | Thursday 17 April 2025 00:39:29 +0000 (0:00:01.570) 0:00:06.679 ******** 2025-04-17 00:39:29.382225 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:39:29.469363 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:39:29.550354 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:39:29.626433 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:39:29.747108 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:39:29.747704 | orchestrator | changed: [testbed-manager] 2025-04-17 00:39:29.748338 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:39:29.749094 | orchestrator | 2025-04-17 00:39:29.749828 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-04-17 00:39:29.750878 | orchestrator | Thursday 17 April 2025 00:39:29 +0000 (0:00:00.632) 0:00:07.311 ******** 2025-04-17 00:39:41.684212 | orchestrator | changed: [testbed-manager] 2025-04-17 00:39:42.831594 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:39:42.831725 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:39:42.831745 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:39:42.831760 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:39:42.831775 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:39:42.831790 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:39:42.831805 | orchestrator | 2025-04-17 00:39:42.831822 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-04-17 00:39:42.831843 | orchestrator | Thursday 17 April 2025 00:39:41 +0000 (0:00:11.916) 0:00:19.228 ******** 2025-04-17 00:39:42.831892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 00:39:42.832186 | orchestrator | 2025-04-17 00:39:42.833301 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-04-17 00:39:42.834122 | orchestrator | Thursday 17 April 2025 00:39:42 +0000 (0:00:01.162) 0:00:20.390 ******** 2025-04-17 00:39:44.633622 | orchestrator | changed: [testbed-manager] 2025-04-17 00:39:44.634206 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:39:44.634568 | orchestrator | changed: [testbed-node-1] 2025-04-17 00:39:44.635978 | orchestrator | changed: [testbed-node-0] 2025-04-17 00:39:44.637047 | orchestrator | changed: [testbed-node-2] 2025-04-17 00:39:44.638112 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:39:44.638228 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:39:44.640853 | orchestrator | 2025-04-17 00:39:44.641850 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 00:39:44.641901 | orchestrator | 2025-04-17 00:39:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-17 00:39:44.643095 | orchestrator | 2025-04-17 00:39:44 | INFO  | Please wait and do not abort execution. 
2025-04-17 00:39:44.643262 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 00:39:44.643352 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-17 00:39:44.643695 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-17 00:39:44.644278 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-17 00:39:44.644783 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-17 00:39:44.645744 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-17 00:39:44.646307 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-17 00:39:44.646345 | orchestrator | 2025-04-17 00:39:44.646603 | orchestrator | Thursday 17 April 2025 00:39:44 +0000 (0:00:01.804) 0:00:22.195 ******** 2025-04-17 00:39:44.647131 | orchestrator | =============================================================================== 2025-04-17 00:39:44.647627 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.92s 2025-04-17 00:39:44.648067 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.91s 2025-04-17 00:39:44.648426 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.80s 2025-04-17 00:39:44.648968 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.57s 2025-04-17 00:39:44.649343 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.18s 2025-04-17 00:39:44.649725 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.16s 2025-04-17 00:39:44.650110 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.13s 2025-04-17 00:39:44.650417 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.68s 2025-04-17 00:39:44.650689 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.63s 2025-04-17 00:39:45.208858 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-04-17 00:39:46.651143 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-04-17 00:39:46.651412 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-04-17 00:39:46.651439 | orchestrator | + local max_attempts=60 2025-04-17 00:39:46.651456 | orchestrator | + local name=ceph-ansible 2025-04-17 00:39:46.651470 | orchestrator | + local attempt_num=1 2025-04-17 00:39:46.651492 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-04-17 00:39:46.683839 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-17 00:39:46.684723 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-04-17 00:39:46.684761 | orchestrator | + local max_attempts=60 2025-04-17 00:39:46.684778 | orchestrator | + local name=kolla-ansible 2025-04-17 00:39:46.684792 | orchestrator | + local attempt_num=1 2025-04-17 00:39:46.684815 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-04-17 00:39:46.722991 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-17 00:39:46.724492 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2025-04-17 00:39:46.724535 | orchestrator | + local max_attempts=60 2025-04-17 00:39:46.724551 | orchestrator | + local name=osism-ansible 2025-04-17 00:39:46.724565 | orchestrator | + local attempt_num=1 2025-04-17 00:39:46.724586 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-04-17 00:39:46.759789 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-17 00:39:46.928645 | orchestrator | + [[ true == \t\r\u\e ]] 2025-04-17 00:39:46.928826 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-04-17 00:39:46.928884 | orchestrator | ARA in ceph-ansible already disabled. 2025-04-17 00:39:47.109431 | orchestrator | ARA in kolla-ansible already disabled. 2025-04-17 00:39:47.281064 | orchestrator | ARA in osism-ansible already disabled. 2025-04-17 00:39:47.423711 | orchestrator | ARA in osism-kubernetes already disabled. 2025-04-17 00:39:47.424556 | orchestrator | + osism apply gather-facts 2025-04-17 00:39:48.854596 | orchestrator | 2025-04-17 00:39:48 | INFO  | Task bcc5e835-83b8-4354-9528-bdbade4ff445 (gather-facts) was prepared for execution. 2025-04-17 00:39:51.937111 | orchestrator | 2025-04-17 00:39:48 | INFO  | It takes a moment until task bcc5e835-83b8-4354-9528-bdbade4ff445 (gather-facts) has been started and output is visible here. 2025-04-17 00:39:51.938185 | orchestrator | 2025-04-17 00:39:51.939999 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-17 00:39:51.940079 | orchestrator | 2025-04-17 00:39:51.940101 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-17 00:39:51.940710 | orchestrator | Thursday 17 April 2025 00:39:51 +0000 (0:00:00.163) 0:00:00.163 ******** 2025-04-17 00:39:57.615464 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:39:57.616391 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:39:57.616801 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:39:57.617483 | orchestrator | ok: [testbed-manager] 2025-04-17 00:39:57.620788 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:39:57.621120 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:39:57.621734 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:39:57.622242 | orchestrator | 2025-04-17 00:39:57.622783 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-04-17 00:39:57.622863 | orchestrator | 2025-04-17 00:39:57.623406 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-17 00:39:57.623496 | orchestrator | Thursday 17 April 2025 00:39:57 +0000 (0:00:05.682) 0:00:05.846 ******** 2025-04-17 00:39:57.761321 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:39:57.830838 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:39:57.904802 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:39:57.979892 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:39:58.054298 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:39:58.082802 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:39:58.083107 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:39:58.084465 | orchestrator | 2025-04-17 00:39:58.085095 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 00:39:58.085280 | orchestrator | 2025-04-17 00:39:58 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-04-17 00:39:58.085466 | orchestrator | 2025-04-17 00:39:58 | INFO  | Please wait and do not abort execution. 2025-04-17 00:39:58.086183 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-17 00:39:58.086677 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-17 00:39:58.087164 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-17 00:39:58.088044 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-17 00:39:58.088638 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-17 00:39:58.089091 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-17 00:39:58.089437 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-17 00:39:58.089831 | orchestrator | 2025-04-17 00:39:58.090219 | orchestrator | Thursday 17 April 2025 00:39:58 +0000 (0:00:00.468) 0:00:06.315 ******** 2025-04-17 00:39:58.090518 | orchestrator | =============================================================================== 2025-04-17 00:39:58.090890 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.68s 2025-04-17 00:39:58.091552 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.47s 2025-04-17 00:39:58.575030 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-04-17 00:39:58.594345 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-04-17 00:39:58.608974 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-04-17 00:39:58.627414 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-04-17 00:39:58.639524 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-04-17 00:39:58.650851 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-04-17 00:39:58.666901 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-04-17 00:39:58.680992 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-04-17 00:39:58.692349 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-04-17 00:39:58.703700 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-04-17 00:39:58.715072 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-04-17 00:39:58.726406 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-04-17 00:39:58.737279 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh 
/usr/local/bin/upgrade-infrastructure 2025-04-17 00:39:58.748387 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-04-17 00:39:58.759497 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-04-17 00:39:58.770300 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-04-17 00:39:58.780114 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-04-17 00:39:58.790898 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-04-17 00:39:58.800237 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-04-17 00:39:58.809235 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-04-17 00:39:58.822134 | orchestrator | + [[ false == \t\r\u\e ]] 2025-04-17 00:39:59.240588 | orchestrator | changed 2025-04-17 00:39:59.311701 | 2025-04-17 00:39:59.311868 | TASK [Deploy services] 2025-04-17 00:39:59.484290 | orchestrator | skipping: Conditional result was False 2025-04-17 00:39:59.542194 | 2025-04-17 00:39:59.542335 | TASK [Deploy in a nutshell] 2025-04-17 00:40:00.225675 | orchestrator | + set -e 2025-04-17 00:40:00.227061 | orchestrator | 2025-04-17 00:40:00.227380 | orchestrator | # PULL IMAGES 2025-04-17 00:40:00.227399 | orchestrator | 2025-04-17 00:40:00.227443 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-17 00:40:00.227462 | orchestrator | ++ export INTERACTIVE=false 2025-04-17 00:40:00.227476 | orchestrator | ++ INTERACTIVE=false 2025-04-17 00:40:00.227497 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-17 00:40:00.227517 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-17 00:40:00.227530 | orchestrator | + source /opt/manager-vars.sh 2025-04-17 00:40:00.227541 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-17 00:40:00.227552 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-17 00:40:00.227563 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-17 00:40:00.227573 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-17 00:40:00.227584 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-17 00:40:00.227596 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-17 00:40:00.227607 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-04-17 00:40:00.227619 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-04-17 00:40:00.227630 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-17 00:40:00.227641 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-17 00:40:00.227652 | orchestrator | ++ export ARA=false 2025-04-17 00:40:00.227663 | orchestrator | ++ ARA=false 2025-04-17 00:40:00.227674 | orchestrator | ++ export TEMPEST=false 2025-04-17 00:40:00.227684 | orchestrator | ++ TEMPEST=false 2025-04-17 00:40:00.227696 | orchestrator | ++ export IS_ZUUL=true 2025-04-17 00:40:00.227707 | orchestrator | ++ IS_ZUUL=true 2025-04-17 00:40:00.227718 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.47 2025-04-17 00:40:00.227729 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.47 2025-04-17 00:40:00.227740 | orchestrator | ++ export EXTERNAL_API=false 2025-04-17 00:40:00.227751 | orchestrator | ++ EXTERNAL_API=false 2025-04-17 00:40:00.227762 | 
orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-17 00:40:00.227773 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-17 00:40:00.227790 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-17 00:40:00.227802 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-17 00:40:00.227812 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-17 00:40:00.227823 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-17 00:40:00.227834 | orchestrator | + echo 2025-04-17 00:40:00.227845 | orchestrator | + echo '# PULL IMAGES' 2025-04-17 00:40:00.227856 | orchestrator | + echo 2025-04-17 00:40:00.227873 | orchestrator | ++ semver 8.1.0 7.0.0 2025-04-17 00:40:00.276790 | orchestrator | + [[ 1 -ge 0 ]] 2025-04-17 00:40:01.718881 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-04-17 00:40:01.719098 | orchestrator | 2025-04-17 00:40:01 | INFO  | Trying to run play pull-images in environment custom 2025-04-17 00:40:01.768683 | orchestrator | 2025-04-17 00:40:01 | INFO  | Task 96417f70-431f-4e19-8514-2ac6720dfaf9 (pull-images) was prepared for execution. 2025-04-17 00:40:04.895282 | orchestrator | 2025-04-17 00:40:01 | INFO  | It takes a moment until task 96417f70-431f-4e19-8514-2ac6720dfaf9 (pull-images) has been started and output is visible here. 2025-04-17 00:40:04.895467 | orchestrator | 2025-04-17 00:40:04.898601 | orchestrator | PLAY [Pull images] ************************************************************* 2025-04-17 00:40:04.898728 | orchestrator | 2025-04-17 00:40:04.901170 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-04-17 00:40:04.901773 | orchestrator | Thursday 17 April 2025 00:40:04 +0000 (0:00:00.144) 0:00:00.144 ******** 2025-04-17 00:40:41.098664 | orchestrator | changed: [testbed-manager] 2025-04-17 00:41:27.945518 | orchestrator | 2025-04-17 00:41:27.945677 | orchestrator | TASK [Pull other images] ******************************************************* 2025-04-17 00:41:27.945698 | orchestrator | Thursday 17 April 2025 00:40:41 +0000 (0:00:36.210) 0:00:36.354 ******** 2025-04-17 00:41:27.945723 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-04-17 00:41:27.946213 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-04-17 00:41:27.946282 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-04-17 00:41:27.946295 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-04-17 00:41:27.946332 | orchestrator | changed: [testbed-manager] => (item=common) 2025-04-17 00:41:27.947366 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-04-17 00:41:27.947570 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-04-17 00:41:27.948648 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-04-17 00:41:27.949500 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-04-17 00:41:27.950229 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-04-17 00:41:27.950913 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-04-17 00:41:27.951557 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-04-17 00:41:27.952219 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-04-17 00:41:27.952841 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-04-17 00:41:27.953433 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-04-17 00:41:27.954005 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-04-17 00:41:27.954643 | orchestrator | 
changed: [testbed-manager] => (item=octavia) 2025-04-17 00:41:27.955615 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-04-17 00:41:27.956707 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-04-17 00:41:27.957087 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-04-17 00:41:27.957114 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-04-17 00:41:27.957127 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-04-17 00:41:27.957687 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-04-17 00:41:27.958139 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-04-17 00:41:27.958632 | orchestrator | 2025-04-17 00:41:27.959276 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 00:41:27.959821 | orchestrator | 2025-04-17 00:41:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-17 00:41:27.960140 | orchestrator | 2025-04-17 00:41:27 | INFO  | Please wait and do not abort execution. 2025-04-17 00:41:27.962606 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 00:41:29.943444 | orchestrator | 2025-04-17 00:41:29.943574 | orchestrator | Thursday 17 April 2025 00:41:27 +0000 (0:00:46.835) 0:01:23.190 ******** 2025-04-17 00:41:29.943596 | orchestrator | =============================================================================== 2025-04-17 00:41:29.943611 | orchestrator | Pull other images ------------------------------------------------------ 46.84s 2025-04-17 00:41:29.943626 | orchestrator | Pull keystone image ---------------------------------------------------- 36.21s 2025-04-17 00:41:29.943658 | orchestrator | 2025-04-17 00:41:29 | INFO  | Trying to run play wipe-partitions in environment custom 2025-04-17 00:41:29.987924 | orchestrator | 2025-04-17 00:41:29 | INFO  | Task e0a3f1c1-ecdb-40de-9e3d-7de7a84868ea (wipe-partitions) was prepared for execution. 2025-04-17 00:41:32.887169 | orchestrator | 2025-04-17 00:41:29 | INFO  | It takes a moment until task e0a3f1c1-ecdb-40de-9e3d-7de7a84868ea (wipe-partitions) has been started and output is visible here. 
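pull-images pre-fetches the Kolla container images on the manager so the later deployment steps do not stall on registry downloads; keystone is pulled first, then the remaining services in one loop. A hedged sketch of such a pull task; the image path and variable names are assumptions, not the real playbook from the custom environment:

    - name: Pull images
      hosts: testbed-manager
      tasks:
        - name: Pull other images
          community.docker.docker_image:
            # Hypothetical naming; the real play derives registry and tag
            # (OPENSTACK_VERSION=2024.1 above) from the configuration.
            name: "{{ docker_registry }}/kolla/{{ item }}:{{ openstack_version }}"
            source: pull
          loop:
            - aodh
            - barbican
            - cinder
            # ... and the rest of the services listed in the task output above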
2025-04-17 00:41:32.887292 | orchestrator | 2025-04-17 00:41:32.887372 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-04-17 00:41:32.887399 | orchestrator | 2025-04-17 00:41:32.887639 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-04-17 00:41:32.887924 | orchestrator | Thursday 17 April 2025 00:41:32 +0000 (0:00:00.088) 0:00:00.088 ******** 2025-04-17 00:41:33.371028 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:41:33.371907 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:41:33.372128 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:41:33.372356 | orchestrator | 2025-04-17 00:41:33.372551 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-04-17 00:41:33.373858 | orchestrator | Thursday 17 April 2025 00:41:33 +0000 (0:00:00.486) 0:00:00.575 ******** 2025-04-17 00:41:33.483158 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:41:33.574364 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:41:33.575456 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:41:33.576095 | orchestrator | 2025-04-17 00:41:33.576401 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-04-17 00:41:33.576911 | orchestrator | Thursday 17 April 2025 00:41:33 +0000 (0:00:00.202) 0:00:00.778 ******** 2025-04-17 00:41:34.180966 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:41:34.181167 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:41:34.181200 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:41:34.181325 | orchestrator | 2025-04-17 00:41:34.181429 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-04-17 00:41:34.183136 | orchestrator | Thursday 17 April 2025 00:41:34 +0000 (0:00:00.600) 0:00:01.378 ******** 2025-04-17 00:41:34.341904 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:41:34.422525 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:41:34.424092 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:41:34.424291 | orchestrator | 2025-04-17 00:41:34.424637 | orchestrator | TASK [Check device availability] *********************************************** 2025-04-17 00:41:34.424863 | orchestrator | Thursday 17 April 2025 00:41:34 +0000 (0:00:00.248) 0:00:01.626 ******** 2025-04-17 00:41:35.602169 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-04-17 00:41:35.602373 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-04-17 00:41:35.602682 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-04-17 00:41:35.602713 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-04-17 00:41:35.602999 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-04-17 00:41:35.603199 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-04-17 00:41:35.603467 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-04-17 00:41:35.603873 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-04-17 00:41:35.604141 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-04-17 00:41:35.604218 | orchestrator | 2025-04-17 00:41:35.604507 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-04-17 00:41:35.605038 | orchestrator | Thursday 17 April 2025 00:41:35 +0000 (0:00:01.176) 0:00:02.803 ******** 2025-04-17 00:41:36.851788 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-04-17 00:41:36.851937 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-04-17 00:41:36.851967 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-04-17 00:41:36.852167 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-04-17 00:41:36.852436 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-04-17 00:41:36.852734 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-04-17 00:41:36.853054 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-04-17 00:41:36.853361 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-04-17 00:41:36.853664 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-04-17 00:41:36.853905 | orchestrator | 2025-04-17 00:41:36.854754 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-04-17 00:41:36.855179 | orchestrator | Thursday 17 April 2025 00:41:36 +0000 (0:00:01.250) 0:00:04.054 ******** 2025-04-17 00:41:40.065888 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-04-17 00:41:40.066767 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-04-17 00:41:40.067503 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-04-17 00:41:40.068342 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-04-17 00:41:40.069777 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-04-17 00:41:40.070455 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-04-17 00:41:40.071220 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-04-17 00:41:40.071855 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-04-17 00:41:40.074458 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-04-17 00:41:40.678671 | orchestrator | 2025-04-17 00:41:40.678776 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-04-17 00:41:40.678796 | orchestrator | Thursday 17 April 2025 00:41:40 +0000 (0:00:03.211) 0:00:07.265 ******** 2025-04-17 00:41:40.678851 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:41:40.679239 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:41:40.680265 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:41:40.681256 | orchestrator | 2025-04-17 00:41:40.681600 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-04-17 00:41:40.682310 | orchestrator | Thursday 17 April 2025 00:41:40 +0000 (0:00:00.614) 0:00:07.880 ******** 2025-04-17 00:41:41.285597 | orchestrator | changed: [testbed-node-3] 2025-04-17 00:41:41.286113 | orchestrator | changed: [testbed-node-4] 2025-04-17 00:41:41.287427 | orchestrator | changed: [testbed-node-5] 2025-04-17 00:41:41.288731 | orchestrator | 2025-04-17 00:41:41.289155 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 00:41:41.290234 | orchestrator | 2025-04-17 00:41:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-17 00:41:41.290625 | orchestrator | 2025-04-17 00:41:41 | INFO  | Please wait and do not abort execution. 
2025-04-17 00:41:41.291579 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:41:41.292335 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:41:41.293015 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:41:41.293528 | orchestrator | 2025-04-17 00:41:41.294106 | orchestrator | Thursday 17 April 2025 00:41:41 +0000 (0:00:00.605) 0:00:08.486 ******** 2025-04-17 00:41:41.294583 | orchestrator | =============================================================================== 2025-04-17 00:41:41.295142 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.21s 2025-04-17 00:41:41.295636 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.25s 2025-04-17 00:41:41.296203 | orchestrator | Check device availability ----------------------------------------------- 1.18s 2025-04-17 00:41:41.296666 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s 2025-04-17 00:41:41.297202 | orchestrator | Request device events from the kernel ----------------------------------- 0.61s 2025-04-17 00:41:41.297925 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.60s 2025-04-17 00:41:41.298728 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.49s 2025-04-17 00:41:41.299317 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2025-04-17 00:41:41.299651 | orchestrator | Remove all rook related logical devices --------------------------------- 0.20s 2025-04-17 00:41:42.852150 | orchestrator | 2025-04-17 00:41:42 | INFO  | Task 85d716c8-7031-41a8-8c1f-022295b83d34 (facts) was prepared for execution. 2025-04-17 00:41:46.896923 | orchestrator | 2025-04-17 00:41:42 | INFO  | It takes a moment until task 85d716c8-7031-41a8-8c1f-022295b83d34 (facts) has been started and output is visible here. 
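Taken together, the wipe-partitions tasks above amount to the following per-device sequence on each storage node. This is a sketch reconstructed from the task names and loop items (/dev/sdb, /dev/sdc, /dev/sdd on testbed-node-3/4/5), not the play's actual source:

    # Repeated for each OSD candidate device on testbed-node-3/4/5.
    wipefs --all /dev/sdb                                     # 'Wipe partitions with wipefs'
    dd if=/dev/zero of=/dev/sdb bs=1M count=32 oflag=direct   # 'Overwrite first 32M with zeros'
    udevadm control --reload-rules                            # 'Reload udev rules'
    udevadm trigger                                           # 'Request device events from the kernel'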
2025-04-17 00:41:46.897221 | orchestrator | 2025-04-17 00:41:46.898197 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-04-17 00:41:46.898307 | orchestrator | 2025-04-17 00:41:46.899086 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-04-17 00:41:46.900581 | orchestrator | Thursday 17 April 2025 00:41:46 +0000 (0:00:00.264) 0:00:00.264 ******** 2025-04-17 00:41:47.905251 | orchestrator | ok: [testbed-manager] 2025-04-17 00:41:47.906772 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:41:47.906825 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:41:47.908235 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:41:47.908792 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:41:47.908822 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:41:47.912292 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:41:47.912428 | orchestrator | 2025-04-17 00:41:47.913957 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-04-17 00:41:47.914126 | orchestrator | Thursday 17 April 2025 00:41:47 +0000 (0:00:01.010) 0:00:01.274 ******** 2025-04-17 00:41:48.056542 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:41:48.125501 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:41:48.198398 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:41:48.268548 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:41:48.334070 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:41:48.948761 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:41:48.948901 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:41:48.949480 | orchestrator | 2025-04-17 00:41:48.950432 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-17 00:41:48.950863 | orchestrator | 2025-04-17 00:41:48.951088 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-17 00:41:48.951583 | orchestrator | Thursday 17 April 2025 00:41:48 +0000 (0:00:01.044) 0:00:02.319 ******** 2025-04-17 00:41:53.503365 | orchestrator | ok: [testbed-node-1] 2025-04-17 00:41:53.503843 | orchestrator | ok: [testbed-node-0] 2025-04-17 00:41:53.505093 | orchestrator | ok: [testbed-node-2] 2025-04-17 00:41:53.506548 | orchestrator | ok: [testbed-manager] 2025-04-17 00:41:53.510622 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:41:53.511825 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:41:53.513008 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:41:53.514218 | orchestrator | 2025-04-17 00:41:53.514686 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-04-17 00:41:53.517112 | orchestrator | 2025-04-17 00:41:53.518172 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-17 00:41:53.518837 | orchestrator | Thursday 17 April 2025 00:41:53 +0000 (0:00:04.556) 0:00:06.875 ******** 2025-04-17 00:41:53.970537 | orchestrator | skipping: [testbed-manager] 2025-04-17 00:41:54.058899 | orchestrator | skipping: [testbed-node-0] 2025-04-17 00:41:54.140643 | orchestrator | skipping: [testbed-node-1] 2025-04-17 00:41:54.217180 | orchestrator | skipping: [testbed-node-2] 2025-04-17 00:41:54.300586 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:41:54.334653 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:41:54.335260 | orchestrator | skipping: 
[testbed-node-5] 2025-04-17 00:41:54.335300 | orchestrator | 2025-04-17 00:41:54.336182 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 00:41:54.337310 | orchestrator | 2025-04-17 00:41:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-17 00:41:54.338177 | orchestrator | 2025-04-17 00:41:54 | INFO  | Please wait and do not abort execution. 2025-04-17 00:41:54.338230 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:41:54.339135 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:41:54.339661 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:41:54.340276 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:41:54.340750 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:41:54.341222 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:41:54.341684 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 00:41:54.342150 | orchestrator | 2025-04-17 00:41:54.342487 | orchestrator | Thursday 17 April 2025 00:41:54 +0000 (0:00:00.830) 0:00:07.706 ******** 2025-04-17 00:41:54.342845 | orchestrator | =============================================================================== 2025-04-17 00:41:54.343259 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.56s 2025-04-17 00:41:54.343588 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.04s 2025-04-17 00:41:54.343960 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.01s 2025-04-17 00:41:54.344463 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.83s 2025-04-17 00:41:56.390221 | orchestrator | 2025-04-17 00:41:56 | INFO  | Task 50a667b4-c58a-4d48-bded-7dee2582dee8 (ceph-configure-lvm-volumes) was prepared for execution. 2025-04-17 00:41:59.567048 | orchestrator | 2025-04-17 00:41:56 | INFO  | It takes a moment until task 50a667b4-c58a-4d48-bded-7dee2582dee8 (ceph-configure-lvm-volumes) has been started and output is visible here. 
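In the ceph-configure-lvm-volumes output that follows, each OSD candidate device is assigned a stable UUID, from which the volume group and logical volume names are derived (data_vg "ceph-<uuid>", data "osd-block-<uuid>", as printed in the lvm_volumes structures below). A sketch of what that naming corresponds to at the LVM level, assuming the volumes themselves are created by a later play rather than by this configuration step:

    # Hypothetical; the UUID is taken from the testbed-node-3 output below.
    UUID=567181ad-d304-5248-b248-9710ecf6a56a
    vgcreate "ceph-${UUID}" /dev/sdb                             # becomes data_vg in lvm_volumes
    lvcreate -l 100%FREE -n "osd-block-${UUID}" "ceph-${UUID}"   # becomes data in lvm_volumes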
2025-04-17 00:41:59.567188 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-17 00:42:00.198753 | orchestrator | 2025-04-17 00:42:00.199195 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-04-17 00:42:00.199875 | orchestrator | 2025-04-17 00:42:00.199913 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-17 00:42:00.200220 | orchestrator | Thursday 17 April 2025 00:42:00 +0000 (0:00:00.544) 0:00:00.544 ******** 2025-04-17 00:42:00.528387 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-17 00:42:00.530347 | orchestrator | 2025-04-17 00:42:00.530425 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-17 00:42:00.531018 | orchestrator | Thursday 17 April 2025 00:42:00 +0000 (0:00:00.329) 0:00:00.873 ******** 2025-04-17 00:42:00.750935 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:42:00.751947 | orchestrator | 2025-04-17 00:42:00.753705 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:00.753807 | orchestrator | Thursday 17 April 2025 00:42:00 +0000 (0:00:00.224) 0:00:01.098 ******** 2025-04-17 00:42:01.398615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-04-17 00:42:01.398828 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-04-17 00:42:01.399419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-04-17 00:42:01.400616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-04-17 00:42:01.401026 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-04-17 00:42:01.401061 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-04-17 00:42:01.401767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-04-17 00:42:01.402159 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-04-17 00:42:01.402648 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-04-17 00:42:01.403750 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-04-17 00:42:01.404083 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-04-17 00:42:01.404309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-04-17 00:42:01.404632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-04-17 00:42:01.405156 | orchestrator | 2025-04-17 00:42:01.405425 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:01.405455 | orchestrator | Thursday 17 April 2025 00:42:01 +0000 (0:00:00.646) 0:00:01.744 ******** 2025-04-17 00:42:01.652761 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:01.653431 | orchestrator | 2025-04-17 00:42:01.654252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:01.654299 | orchestrator | Thursday 17 April 2025 00:42:01 +0000 
(0:00:00.255) 0:00:02.000 ******** 2025-04-17 00:42:01.878115 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:01.878315 | orchestrator | 2025-04-17 00:42:01.878344 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:01.879423 | orchestrator | Thursday 17 April 2025 00:42:01 +0000 (0:00:00.224) 0:00:02.224 ******** 2025-04-17 00:42:02.103394 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:02.104217 | orchestrator | 2025-04-17 00:42:02.104627 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:02.106127 | orchestrator | Thursday 17 April 2025 00:42:02 +0000 (0:00:00.225) 0:00:02.450 ******** 2025-04-17 00:42:02.342360 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:02.342529 | orchestrator | 2025-04-17 00:42:02.342946 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:02.343454 | orchestrator | Thursday 17 April 2025 00:42:02 +0000 (0:00:00.239) 0:00:02.689 ******** 2025-04-17 00:42:02.527631 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:02.527827 | orchestrator | 2025-04-17 00:42:02.527857 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:02.528107 | orchestrator | Thursday 17 April 2025 00:42:02 +0000 (0:00:00.185) 0:00:02.875 ******** 2025-04-17 00:42:02.717057 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:02.720124 | orchestrator | 2025-04-17 00:42:02.720250 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:02.720535 | orchestrator | Thursday 17 April 2025 00:42:02 +0000 (0:00:00.190) 0:00:03.065 ******** 2025-04-17 00:42:02.862665 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:02.862802 | orchestrator | 2025-04-17 00:42:02.863201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:02.863387 | orchestrator | Thursday 17 April 2025 00:42:02 +0000 (0:00:00.146) 0:00:03.211 ******** 2025-04-17 00:42:03.008878 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:03.009431 | orchestrator | 2025-04-17 00:42:03.465692 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:03.465832 | orchestrator | Thursday 17 April 2025 00:42:03 +0000 (0:00:00.143) 0:00:03.355 ******** 2025-04-17 00:42:03.465863 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6) 2025-04-17 00:42:03.465958 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6) 2025-04-17 00:42:03.466593 | orchestrator | 2025-04-17 00:42:03.466669 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:03.467091 | orchestrator | Thursday 17 April 2025 00:42:03 +0000 (0:00:00.457) 0:00:03.812 ******** 2025-04-17 00:42:04.121635 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8bcc068e-17b6-4e9f-accd-8ac12579d6f0) 2025-04-17 00:42:04.124972 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8bcc068e-17b6-4e9f-accd-8ac12579d6f0) 2025-04-17 00:42:04.127295 | orchestrator | 2025-04-17 00:42:04.127648 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 
00:42:04.129092 | orchestrator | Thursday 17 April 2025 00:42:04 +0000 (0:00:00.657) 0:00:04.470 ******** 2025-04-17 00:42:04.510822 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e9224846-b1ba-4847-a73a-6715887089fb) 2025-04-17 00:42:04.514765 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e9224846-b1ba-4847-a73a-6715887089fb) 2025-04-17 00:42:04.514813 | orchestrator | 2025-04-17 00:42:04.514840 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:04.515489 | orchestrator | Thursday 17 April 2025 00:42:04 +0000 (0:00:00.384) 0:00:04.854 ******** 2025-04-17 00:42:04.938190 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0367f9b0-3a71-47a7-a8bd-9e2816c4d242) 2025-04-17 00:42:04.938363 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0367f9b0-3a71-47a7-a8bd-9e2816c4d242) 2025-04-17 00:42:04.939913 | orchestrator | 2025-04-17 00:42:04.941348 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:04.941902 | orchestrator | Thursday 17 April 2025 00:42:04 +0000 (0:00:00.429) 0:00:05.283 ******** 2025-04-17 00:42:05.239420 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-17 00:42:05.239876 | orchestrator | 2025-04-17 00:42:05.239908 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:05.239927 | orchestrator | Thursday 17 April 2025 00:42:05 +0000 (0:00:00.302) 0:00:05.586 ******** 2025-04-17 00:42:05.710722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-04-17 00:42:05.711207 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-04-17 00:42:05.711321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-04-17 00:42:05.712797 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-04-17 00:42:05.713156 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-04-17 00:42:05.715721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-04-17 00:42:05.716130 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-04-17 00:42:05.716939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-04-17 00:42:05.717512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-04-17 00:42:05.718112 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-04-17 00:42:05.718526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-04-17 00:42:05.719061 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-04-17 00:42:05.719722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-04-17 00:42:05.720490 | orchestrator | 2025-04-17 00:42:05.721062 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:05.721591 | orchestrator | Thursday 17 April 2025 00:42:05 
+0000 (0:00:00.467) 0:00:06.053 ******** 2025-04-17 00:42:05.945249 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:05.946127 | orchestrator | 2025-04-17 00:42:05.946914 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:05.947605 | orchestrator | Thursday 17 April 2025 00:42:05 +0000 (0:00:00.236) 0:00:06.290 ******** 2025-04-17 00:42:06.140796 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:06.143022 | orchestrator | 2025-04-17 00:42:06.143089 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:06.144718 | orchestrator | Thursday 17 April 2025 00:42:06 +0000 (0:00:00.198) 0:00:06.489 ******** 2025-04-17 00:42:06.355967 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:06.356861 | orchestrator | 2025-04-17 00:42:06.356921 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:06.357099 | orchestrator | Thursday 17 April 2025 00:42:06 +0000 (0:00:00.210) 0:00:06.700 ******** 2025-04-17 00:42:06.629733 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:06.630068 | orchestrator | 2025-04-17 00:42:06.630479 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:06.631591 | orchestrator | Thursday 17 April 2025 00:42:06 +0000 (0:00:00.274) 0:00:06.974 ******** 2025-04-17 00:42:06.874286 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:06.874477 | orchestrator | 2025-04-17 00:42:06.874537 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:06.876947 | orchestrator | Thursday 17 April 2025 00:42:06 +0000 (0:00:00.247) 0:00:07.222 ******** 2025-04-17 00:42:07.435445 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:07.436186 | orchestrator | 2025-04-17 00:42:07.436243 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:07.438361 | orchestrator | Thursday 17 April 2025 00:42:07 +0000 (0:00:00.559) 0:00:07.781 ******** 2025-04-17 00:42:07.723419 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:07.723660 | orchestrator | 2025-04-17 00:42:07.725406 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:07.726768 | orchestrator | Thursday 17 April 2025 00:42:07 +0000 (0:00:00.283) 0:00:08.065 ******** 2025-04-17 00:42:07.935518 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:07.935918 | orchestrator | 2025-04-17 00:42:07.936690 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:07.937565 | orchestrator | Thursday 17 April 2025 00:42:07 +0000 (0:00:00.216) 0:00:08.281 ******** 2025-04-17 00:42:08.635079 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-04-17 00:42:08.635661 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-04-17 00:42:08.638331 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-04-17 00:42:08.638422 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-04-17 00:42:08.638445 | orchestrator | 2025-04-17 00:42:08.639084 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:08.639658 | orchestrator | Thursday 17 April 2025 00:42:08 +0000 (0:00:00.698) 0:00:08.979 ******** 2025-04-17 00:42:08.853906 | orchestrator | 
skipping: [testbed-node-3] 2025-04-17 00:42:08.893449 | orchestrator | 2025-04-17 00:42:09.053683 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:09.053804 | orchestrator | Thursday 17 April 2025 00:42:08 +0000 (0:00:00.211) 0:00:09.191 ******** 2025-04-17 00:42:09.053839 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:09.055058 | orchestrator | 2025-04-17 00:42:09.055277 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:09.056603 | orchestrator | Thursday 17 April 2025 00:42:09 +0000 (0:00:00.210) 0:00:09.401 ******** 2025-04-17 00:42:09.273162 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:09.273364 | orchestrator | 2025-04-17 00:42:09.274232 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:09.274868 | orchestrator | Thursday 17 April 2025 00:42:09 +0000 (0:00:00.213) 0:00:09.614 ******** 2025-04-17 00:42:09.507601 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:09.508171 | orchestrator | 2025-04-17 00:42:09.508586 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-17 00:42:09.511666 | orchestrator | Thursday 17 April 2025 00:42:09 +0000 (0:00:00.237) 0:00:09.851 ******** 2025-04-17 00:42:09.727884 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-04-17 00:42:09.728495 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-04-17 00:42:09.728629 | orchestrator | 2025-04-17 00:42:09.730702 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-04-17 00:42:09.731174 | orchestrator | Thursday 17 April 2025 00:42:09 +0000 (0:00:00.218) 0:00:10.070 ******** 2025-04-17 00:42:09.917078 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:09.917887 | orchestrator | 2025-04-17 00:42:09.919421 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-17 00:42:09.920305 | orchestrator | Thursday 17 April 2025 00:42:09 +0000 (0:00:00.189) 0:00:10.260 ******** 2025-04-17 00:42:10.256180 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:10.378673 | orchestrator | 2025-04-17 00:42:10.378800 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-17 00:42:10.378819 | orchestrator | Thursday 17 April 2025 00:42:10 +0000 (0:00:00.341) 0:00:10.601 ******** 2025-04-17 00:42:10.378881 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:10.380089 | orchestrator | 2025-04-17 00:42:10.380321 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-17 00:42:10.385102 | orchestrator | Thursday 17 April 2025 00:42:10 +0000 (0:00:00.124) 0:00:10.726 ******** 2025-04-17 00:42:10.532325 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:42:10.539363 | orchestrator | 2025-04-17 00:42:10.541033 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-17 00:42:10.735454 | orchestrator | Thursday 17 April 2025 00:42:10 +0000 (0:00:00.151) 0:00:10.877 ******** 2025-04-17 00:42:10.735589 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '567181ad-d304-5248-b248-9710ecf6a56a'}}) 2025-04-17 00:42:10.736138 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 
'value': {'osd_lvm_uuid': '6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'}}) 2025-04-17 00:42:10.736168 | orchestrator | 2025-04-17 00:42:10.736193 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-04-17 00:42:10.736754 | orchestrator | Thursday 17 April 2025 00:42:10 +0000 (0:00:00.205) 0:00:11.082 ******** 2025-04-17 00:42:10.902839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '567181ad-d304-5248-b248-9710ecf6a56a'}})  2025-04-17 00:42:10.905276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'}})  2025-04-17 00:42:10.905730 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:10.906319 | orchestrator | 2025-04-17 00:42:10.906960 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-04-17 00:42:10.907309 | orchestrator | Thursday 17 April 2025 00:42:10 +0000 (0:00:00.164) 0:00:11.247 ******** 2025-04-17 00:42:11.071339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '567181ad-d304-5248-b248-9710ecf6a56a'}})  2025-04-17 00:42:11.072314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'}})  2025-04-17 00:42:11.074778 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:11.075306 | orchestrator | 2025-04-17 00:42:11.075338 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-04-17 00:42:11.075361 | orchestrator | Thursday 17 April 2025 00:42:11 +0000 (0:00:00.170) 0:00:11.418 ******** 2025-04-17 00:42:11.235618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '567181ad-d304-5248-b248-9710ecf6a56a'}})  2025-04-17 00:42:11.238642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'}})  2025-04-17 00:42:11.239732 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:11.241059 | orchestrator | 2025-04-17 00:42:11.241843 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-04-17 00:42:11.242550 | orchestrator | Thursday 17 April 2025 00:42:11 +0000 (0:00:00.158) 0:00:11.576 ******** 2025-04-17 00:42:11.366502 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:42:11.367651 | orchestrator | 2025-04-17 00:42:11.370194 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-04-17 00:42:11.512217 | orchestrator | Thursday 17 April 2025 00:42:11 +0000 (0:00:00.137) 0:00:11.713 ******** 2025-04-17 00:42:11.512351 | orchestrator | ok: [testbed-node-3] 2025-04-17 00:42:11.512627 | orchestrator | 2025-04-17 00:42:11.513747 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-04-17 00:42:11.514548 | orchestrator | Thursday 17 April 2025 00:42:11 +0000 (0:00:00.145) 0:00:11.858 ******** 2025-04-17 00:42:11.663813 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:11.664474 | orchestrator | 2025-04-17 00:42:11.668714 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-04-17 00:42:11.669809 | orchestrator | Thursday 17 April 2025 00:42:11 +0000 (0:00:00.152) 0:00:12.010 ******** 2025-04-17 00:42:11.819246 | orchestrator | skipping: [testbed-node-3] 2025-04-17 
00:42:11.820906 | orchestrator | 2025-04-17 00:42:11.824016 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-04-17 00:42:11.824192 | orchestrator | Thursday 17 April 2025 00:42:11 +0000 (0:00:00.155) 0:00:12.165 ******** 2025-04-17 00:42:11.950847 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:11.951146 | orchestrator | 2025-04-17 00:42:11.953173 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-04-17 00:42:11.956317 | orchestrator | Thursday 17 April 2025 00:42:11 +0000 (0:00:00.130) 0:00:12.295 ******** 2025-04-17 00:42:12.306293 | orchestrator | ok: [testbed-node-3] => { 2025-04-17 00:42:12.306528 | orchestrator |  "ceph_osd_devices": { 2025-04-17 00:42:12.306900 | orchestrator |  "sdb": { 2025-04-17 00:42:12.307512 | orchestrator |  "osd_lvm_uuid": "567181ad-d304-5248-b248-9710ecf6a56a" 2025-04-17 00:42:12.308744 | orchestrator |  }, 2025-04-17 00:42:12.312374 | orchestrator |  "sdc": { 2025-04-17 00:42:12.312593 | orchestrator |  "osd_lvm_uuid": "6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e" 2025-04-17 00:42:12.313026 | orchestrator |  } 2025-04-17 00:42:12.313570 | orchestrator |  } 2025-04-17 00:42:12.313845 | orchestrator | } 2025-04-17 00:42:12.314247 | orchestrator | 2025-04-17 00:42:12.314821 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-04-17 00:42:12.315183 | orchestrator | Thursday 17 April 2025 00:42:12 +0000 (0:00:00.356) 0:00:12.652 ******** 2025-04-17 00:42:12.455861 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:12.457434 | orchestrator | 2025-04-17 00:42:12.458455 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-04-17 00:42:12.458661 | orchestrator | Thursday 17 April 2025 00:42:12 +0000 (0:00:00.150) 0:00:12.803 ******** 2025-04-17 00:42:12.586484 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:12.587963 | orchestrator | 2025-04-17 00:42:12.589588 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-04-17 00:42:12.590945 | orchestrator | Thursday 17 April 2025 00:42:12 +0000 (0:00:00.130) 0:00:12.933 ******** 2025-04-17 00:42:12.721017 | orchestrator | skipping: [testbed-node-3] 2025-04-17 00:42:12.723305 | orchestrator | 2025-04-17 00:42:12.726945 | orchestrator | TASK [Print configuration data] ************************************************ 2025-04-17 00:42:12.727589 | orchestrator | Thursday 17 April 2025 00:42:12 +0000 (0:00:00.133) 0:00:13.067 ******** 2025-04-17 00:42:13.002322 | orchestrator | changed: [testbed-node-3] => { 2025-04-17 00:42:13.003494 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-04-17 00:42:13.003543 | orchestrator |  "ceph_osd_devices": { 2025-04-17 00:42:13.004523 | orchestrator |  "sdb": { 2025-04-17 00:42:13.007803 | orchestrator |  "osd_lvm_uuid": "567181ad-d304-5248-b248-9710ecf6a56a" 2025-04-17 00:42:13.008194 | orchestrator |  }, 2025-04-17 00:42:13.008836 | orchestrator |  "sdc": { 2025-04-17 00:42:13.009245 | orchestrator |  "osd_lvm_uuid": "6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e" 2025-04-17 00:42:13.011177 | orchestrator |  } 2025-04-17 00:42:13.011317 | orchestrator |  }, 2025-04-17 00:42:13.011787 | orchestrator |  "lvm_volumes": [ 2025-04-17 00:42:13.012795 | orchestrator |  { 2025-04-17 00:42:13.013032 | orchestrator |  "data": "osd-block-567181ad-d304-5248-b248-9710ecf6a56a", 2025-04-17 00:42:13.013402 | orchestrator |  
"data_vg": "ceph-567181ad-d304-5248-b248-9710ecf6a56a" 2025-04-17 00:42:13.013775 | orchestrator |  }, 2025-04-17 00:42:13.014276 | orchestrator |  { 2025-04-17 00:42:13.016124 | orchestrator |  "data": "osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e", 2025-04-17 00:42:13.019353 | orchestrator |  "data_vg": "ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e" 2025-04-17 00:42:13.019565 | orchestrator |  } 2025-04-17 00:42:13.021262 | orchestrator |  ] 2025-04-17 00:42:13.021595 | orchestrator |  } 2025-04-17 00:42:13.022102 | orchestrator | } 2025-04-17 00:42:13.022768 | orchestrator | 2025-04-17 00:42:13.023108 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-04-17 00:42:13.023603 | orchestrator | Thursday 17 April 2025 00:42:12 +0000 (0:00:00.280) 0:00:13.347 ******** 2025-04-17 00:42:15.121262 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-17 00:42:15.122180 | orchestrator | 2025-04-17 00:42:15.122233 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-04-17 00:42:15.125473 | orchestrator | 2025-04-17 00:42:15.125745 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-17 00:42:15.126295 | orchestrator | Thursday 17 April 2025 00:42:15 +0000 (0:00:02.119) 0:00:15.467 ******** 2025-04-17 00:42:15.373166 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-04-17 00:42:15.374673 | orchestrator | 2025-04-17 00:42:15.377129 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-17 00:42:15.377596 | orchestrator | Thursday 17 April 2025 00:42:15 +0000 (0:00:00.250) 0:00:15.717 ******** 2025-04-17 00:42:15.610909 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:42:15.611132 | orchestrator | 2025-04-17 00:42:15.611301 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:15.612811 | orchestrator | Thursday 17 April 2025 00:42:15 +0000 (0:00:00.240) 0:00:15.958 ******** 2025-04-17 00:42:16.013264 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-04-17 00:42:16.013806 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-04-17 00:42:16.016466 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-04-17 00:42:16.017136 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-04-17 00:42:16.017883 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-04-17 00:42:16.020304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-04-17 00:42:16.020709 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-04-17 00:42:16.021124 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-04-17 00:42:16.021632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-04-17 00:42:16.022167 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-04-17 00:42:16.023174 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-04-17 00:42:16.024022 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-04-17 00:42:16.024324 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-04-17 00:42:16.025318 | orchestrator | 2025-04-17 00:42:16.027630 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:16.028026 | orchestrator | Thursday 17 April 2025 00:42:16 +0000 (0:00:00.401) 0:00:16.359 ******** 2025-04-17 00:42:16.215854 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:16.216498 | orchestrator | 2025-04-17 00:42:16.216593 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:16.217266 | orchestrator | Thursday 17 April 2025 00:42:16 +0000 (0:00:00.200) 0:00:16.560 ******** 2025-04-17 00:42:16.413194 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:16.414488 | orchestrator | 2025-04-17 00:42:16.416677 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:16.421388 | orchestrator | Thursday 17 April 2025 00:42:16 +0000 (0:00:00.198) 0:00:16.759 ******** 2025-04-17 00:42:16.642340 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:16.643771 | orchestrator | 2025-04-17 00:42:16.644103 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:16.644831 | orchestrator | Thursday 17 April 2025 00:42:16 +0000 (0:00:00.225) 0:00:16.985 ******** 2025-04-17 00:42:16.835707 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:16.836215 | orchestrator | 2025-04-17 00:42:16.836931 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:16.840232 | orchestrator | Thursday 17 April 2025 00:42:16 +0000 (0:00:00.196) 0:00:17.181 ******** 2025-04-17 00:42:17.391887 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:17.392636 | orchestrator | 2025-04-17 00:42:17.393790 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:17.397224 | orchestrator | Thursday 17 April 2025 00:42:17 +0000 (0:00:00.556) 0:00:17.738 ******** 2025-04-17 00:42:17.656544 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:17.657160 | orchestrator | 2025-04-17 00:42:17.658232 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:17.660743 | orchestrator | Thursday 17 April 2025 00:42:17 +0000 (0:00:00.263) 0:00:18.002 ******** 2025-04-17 00:42:17.873952 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:17.874632 | orchestrator | 2025-04-17 00:42:17.874691 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:17.875718 | orchestrator | Thursday 17 April 2025 00:42:17 +0000 (0:00:00.215) 0:00:18.218 ******** 2025-04-17 00:42:18.132574 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:18.132861 | orchestrator | 2025-04-17 00:42:18.132896 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:18.565799 | orchestrator | Thursday 17 April 2025 00:42:18 +0000 (0:00:00.258) 0:00:18.477 ******** 2025-04-17 00:42:18.566089 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a) 2025-04-17 00:42:18.566193 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a) 2025-04-17 00:42:18.566677 | orchestrator | 2025-04-17 00:42:18.568366 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:18.569047 | orchestrator | Thursday 17 April 2025 00:42:18 +0000 (0:00:00.434) 0:00:18.912 ******** 2025-04-17 00:42:19.000515 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bef8d693-736b-4549-b698-ce9e87082908) 2025-04-17 00:42:19.001143 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bef8d693-736b-4549-b698-ce9e87082908) 2025-04-17 00:42:19.002153 | orchestrator | 2025-04-17 00:42:19.002517 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:19.006098 | orchestrator | Thursday 17 April 2025 00:42:18 +0000 (0:00:00.435) 0:00:19.347 ******** 2025-04-17 00:42:19.414236 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c189cae0-1e0d-4eb8-9970-e970e21b9a89) 2025-04-17 00:42:19.416727 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c189cae0-1e0d-4eb8-9970-e970e21b9a89) 2025-04-17 00:42:19.419450 | orchestrator | 2025-04-17 00:42:19.419490 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:19.420256 | orchestrator | Thursday 17 April 2025 00:42:19 +0000 (0:00:00.414) 0:00:19.761 ******** 2025-04-17 00:42:19.859456 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_95e37f14-95e8-4165-b353-fd53fdf52cdb) 2025-04-17 00:42:19.861912 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_95e37f14-95e8-4165-b353-fd53fdf52cdb) 2025-04-17 00:42:19.862537 | orchestrator | 2025-04-17 00:42:19.862768 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:19.864746 | orchestrator | Thursday 17 April 2025 00:42:19 +0000 (0:00:00.443) 0:00:20.205 ******** 2025-04-17 00:42:20.191352 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-17 00:42:20.191843 | orchestrator | 2025-04-17 00:42:20.192634 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:20.196271 | orchestrator | Thursday 17 April 2025 00:42:20 +0000 (0:00:00.332) 0:00:20.537 ******** 2025-04-17 00:42:20.795221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-04-17 00:42:20.796560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-04-17 00:42:20.796598 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-04-17 00:42:20.796621 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-04-17 00:42:20.796676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-04-17 00:42:20.797534 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-04-17 00:42:20.797914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-04-17 00:42:20.798167 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-04-17 00:42:20.798746 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-04-17 00:42:20.799106 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-04-17 00:42:20.799260 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-04-17 00:42:20.799602 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-04-17 00:42:20.799844 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-04-17 00:42:20.800455 | orchestrator | 2025-04-17 00:42:20.800786 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:20.801147 | orchestrator | Thursday 17 April 2025 00:42:20 +0000 (0:00:00.598) 0:00:21.136 ******** 2025-04-17 00:42:20.982611 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:20.982809 | orchestrator | 2025-04-17 00:42:20.983870 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:20.984720 | orchestrator | Thursday 17 April 2025 00:42:20 +0000 (0:00:00.194) 0:00:21.330 ******** 2025-04-17 00:42:21.204321 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:21.205137 | orchestrator | 2025-04-17 00:42:21.207814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:21.208748 | orchestrator | Thursday 17 April 2025 00:42:21 +0000 (0:00:00.220) 0:00:21.550 ******** 2025-04-17 00:42:21.424703 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:21.426310 | orchestrator | 2025-04-17 00:42:21.428494 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:21.432042 | orchestrator | Thursday 17 April 2025 00:42:21 +0000 (0:00:00.216) 0:00:21.767 ******** 2025-04-17 00:42:21.668582 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:21.668761 | orchestrator | 2025-04-17 00:42:21.669398 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:21.670306 | orchestrator | Thursday 17 April 2025 00:42:21 +0000 (0:00:00.247) 0:00:22.014 ******** 2025-04-17 00:42:21.885592 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:21.885772 | orchestrator | 2025-04-17 00:42:21.886784 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:21.887682 | orchestrator | Thursday 17 April 2025 00:42:21 +0000 (0:00:00.217) 0:00:22.232 ******** 2025-04-17 00:42:22.084983 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:22.085310 | orchestrator | 2025-04-17 00:42:22.086468 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:22.087725 | orchestrator | Thursday 17 April 2025 00:42:22 +0000 (0:00:00.199) 0:00:22.432 ******** 2025-04-17 00:42:22.286352 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:22.287405 | orchestrator | 2025-04-17 00:42:22.288321 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:22.289258 | orchestrator | Thursday 17 April 2025 00:42:22 +0000 (0:00:00.200) 0:00:22.632 ******** 2025-04-17 00:42:22.486389 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:22.486864 | orchestrator | 2025-04-17 00:42:22.487668 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-04-17 00:42:22.488493 | orchestrator | Thursday 17 April 2025 00:42:22 +0000 (0:00:00.198) 0:00:22.831 ******** 2025-04-17 00:42:23.301376 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-04-17 00:42:23.301891 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-04-17 00:42:23.303513 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-04-17 00:42:23.304919 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-04-17 00:42:23.305627 | orchestrator | 2025-04-17 00:42:23.306813 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:23.308361 | orchestrator | Thursday 17 April 2025 00:42:23 +0000 (0:00:00.814) 0:00:23.646 ******** 2025-04-17 00:42:23.887747 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:23.888189 | orchestrator | 2025-04-17 00:42:23.889262 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:23.889956 | orchestrator | Thursday 17 April 2025 00:42:23 +0000 (0:00:00.587) 0:00:24.234 ******** 2025-04-17 00:42:24.089488 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:24.090302 | orchestrator | 2025-04-17 00:42:24.091823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:24.093097 | orchestrator | Thursday 17 April 2025 00:42:24 +0000 (0:00:00.201) 0:00:24.436 ******** 2025-04-17 00:42:24.305898 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:24.306170 | orchestrator | 2025-04-17 00:42:24.307847 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:24.308700 | orchestrator | Thursday 17 April 2025 00:42:24 +0000 (0:00:00.217) 0:00:24.653 ******** 2025-04-17 00:42:24.507930 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:24.509034 | orchestrator | 2025-04-17 00:42:24.510442 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-17 00:42:24.510829 | orchestrator | Thursday 17 April 2025 00:42:24 +0000 (0:00:00.201) 0:00:24.854 ******** 2025-04-17 00:42:24.688515 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-04-17 00:42:24.688735 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-04-17 00:42:24.688768 | orchestrator | 2025-04-17 00:42:24.689308 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-04-17 00:42:24.691067 | orchestrator | Thursday 17 April 2025 00:42:24 +0000 (0:00:00.179) 0:00:25.034 ******** 2025-04-17 00:42:24.822770 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:24.823921 | orchestrator | 2025-04-17 00:42:24.824519 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-17 00:42:24.828230 | orchestrator | Thursday 17 April 2025 00:42:24 +0000 (0:00:00.135) 0:00:25.169 ******** 2025-04-17 00:42:24.966181 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:24.966845 | orchestrator | 2025-04-17 00:42:24.967384 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-17 00:42:24.971388 | orchestrator | Thursday 17 April 2025 00:42:24 +0000 (0:00:00.142) 0:00:25.312 ******** 2025-04-17 00:42:25.103752 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:25.104218 | orchestrator | 2025-04-17 
00:42:25.104913 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-17 00:42:25.107050 | orchestrator | Thursday 17 April 2025 00:42:25 +0000 (0:00:00.137) 0:00:25.450 ******** 2025-04-17 00:42:25.243804 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:42:25.244530 | orchestrator | 2025-04-17 00:42:25.245352 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-17 00:42:25.246176 | orchestrator | Thursday 17 April 2025 00:42:25 +0000 (0:00:00.141) 0:00:25.591 ******** 2025-04-17 00:42:25.417377 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ebc25b0-9278-5fc8-8be4-afb201f0a343'}}) 2025-04-17 00:42:25.418134 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b69f2859-f86c-57c9-a956-28222694e166'}}) 2025-04-17 00:42:25.418939 | orchestrator | 2025-04-17 00:42:25.419914 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-04-17 00:42:25.420616 | orchestrator | Thursday 17 April 2025 00:42:25 +0000 (0:00:00.173) 0:00:25.764 ******** 2025-04-17 00:42:25.568188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ebc25b0-9278-5fc8-8be4-afb201f0a343'}})  2025-04-17 00:42:25.568435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b69f2859-f86c-57c9-a956-28222694e166'}})  2025-04-17 00:42:25.569386 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:25.569590 | orchestrator | 2025-04-17 00:42:25.570614 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-04-17 00:42:25.570722 | orchestrator | Thursday 17 April 2025 00:42:25 +0000 (0:00:00.150) 0:00:25.915 ******** 2025-04-17 00:42:25.744090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ebc25b0-9278-5fc8-8be4-afb201f0a343'}})  2025-04-17 00:42:25.744263 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b69f2859-f86c-57c9-a956-28222694e166'}})  2025-04-17 00:42:25.744290 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:25.744971 | orchestrator | 2025-04-17 00:42:25.745424 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-04-17 00:42:25.747981 | orchestrator | Thursday 17 April 2025 00:42:25 +0000 (0:00:00.174) 0:00:26.090 ******** 2025-04-17 00:42:26.050112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ebc25b0-9278-5fc8-8be4-afb201f0a343'}})  2025-04-17 00:42:26.050349 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b69f2859-f86c-57c9-a956-28222694e166'}})  2025-04-17 00:42:26.052263 | orchestrator | skipping: [testbed-node-4] 2025-04-17 00:42:26.052654 | orchestrator | 2025-04-17 00:42:26.053837 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-04-17 00:42:26.054203 | orchestrator | Thursday 17 April 2025 00:42:26 +0000 (0:00:00.306) 0:00:26.396 ******** 2025-04-17 00:42:26.203821 | orchestrator | ok: [testbed-node-4] 2025-04-17 00:42:26.204192 | orchestrator | 2025-04-17 00:42:26.204605 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-04-17 00:42:26.205748 | orchestrator | Thursday 17 April 2025 00:42:26 +0000 
2025-04-17 00:42:26.204605 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-04-17 00:42:26.205748 | orchestrator | Thursday 17 April 2025 00:42:26 +0000 (0:00:00.154) 0:00:26.550 ********
2025-04-17 00:42:26.346597 | orchestrator | ok: [testbed-node-4]
2025-04-17 00:42:26.346878 | orchestrator |
2025-04-17 00:42:26.346926 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-04-17 00:42:26.348612 | orchestrator | Thursday 17 April 2025 00:42:26 +0000 (0:00:00.142) 0:00:26.693 ********
2025-04-17 00:42:26.497653 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:42:26.498340 | orchestrator |
2025-04-17 00:42:26.498378 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-04-17 00:42:26.499330 | orchestrator | Thursday 17 April 2025 00:42:26 +0000 (0:00:00.150) 0:00:26.843 ********
2025-04-17 00:42:26.635166 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:42:26.635975 | orchestrator |
2025-04-17 00:42:26.636656 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-04-17 00:42:26.637417 | orchestrator | Thursday 17 April 2025 00:42:26 +0000 (0:00:00.138) 0:00:26.982 ********
2025-04-17 00:42:26.773561 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:42:26.774356 | orchestrator |
2025-04-17 00:42:26.777479 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-04-17 00:42:26.914752 | orchestrator | Thursday 17 April 2025 00:42:26 +0000 (0:00:00.137) 0:00:27.119 ********
2025-04-17 00:42:26.914925 | orchestrator | ok: [testbed-node-4] => {
2025-04-17 00:42:26.915077 | orchestrator |     "ceph_osd_devices": {
2025-04-17 00:42:26.915676 | orchestrator |         "sdb": {
2025-04-17 00:42:26.916035 | orchestrator |             "osd_lvm_uuid": "7ebc25b0-9278-5fc8-8be4-afb201f0a343"
2025-04-17 00:42:26.916468 | orchestrator |         },
2025-04-17 00:42:26.916907 | orchestrator |         "sdc": {
2025-04-17 00:42:26.918309 | orchestrator |             "osd_lvm_uuid": "b69f2859-f86c-57c9-a956-28222694e166"
2025-04-17 00:42:26.920205 | orchestrator |         }
2025-04-17 00:42:26.920252 | orchestrator |     }
2025-04-17 00:42:27.053316 | orchestrator | }
2025-04-17 00:42:27.053463 | orchestrator |
2025-04-17 00:42:27.053481 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-04-17 00:42:27.053498 | orchestrator | Thursday 17 April 2025 00:42:26 +0000 (0:00:00.142) 0:00:27.262 ********
2025-04-17 00:42:27.053530 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:42:27.053881 | orchestrator |
2025-04-17 00:42:27.054907 | orchestrator | TASK [Print DB devices] ********************************************************
2025-04-17 00:42:27.055466 | orchestrator | Thursday 17 April 2025 00:42:27 +0000 (0:00:00.137) 0:00:27.400 ********
2025-04-17 00:42:27.196132 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:42:27.196700 | orchestrator |
2025-04-17 00:42:27.344443 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-04-17 00:42:27.344593 | orchestrator | Thursday 17 April 2025 00:42:27 +0000 (0:00:00.142) 0:00:27.542 ********
2025-04-17 00:42:27.344631 | orchestrator | skipping: [testbed-node-4]
2025-04-17 00:42:27.345649 | orchestrator |
2025-04-17 00:42:27.346888 | orchestrator | TASK [Print configuration data] ************************************************
2025-04-17 00:42:27.347454 | orchestrator | Thursday 17 April 2025 00:42:27 +0000 (0:00:00.145) 0:00:27.688 ********
2025-04-17 00:42:27.774636 | orchestrator | changed: [testbed-node-4] => {
2025-04-17 00:42:27.775224 |
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-04-17 00:42:27.775541 | orchestrator |  "ceph_osd_devices": { 2025-04-17 00:42:27.777153 | orchestrator |  "sdb": { 2025-04-17 00:42:27.779322 | orchestrator |  "osd_lvm_uuid": "7ebc25b0-9278-5fc8-8be4-afb201f0a343" 2025-04-17 00:42:27.779608 | orchestrator |  }, 2025-04-17 00:42:27.780824 | orchestrator |  "sdc": { 2025-04-17 00:42:27.781638 | orchestrator |  "osd_lvm_uuid": "b69f2859-f86c-57c9-a956-28222694e166" 2025-04-17 00:42:27.782564 | orchestrator |  } 2025-04-17 00:42:27.783259 | orchestrator |  }, 2025-04-17 00:42:27.783649 | orchestrator |  "lvm_volumes": [ 2025-04-17 00:42:27.784336 | orchestrator |  { 2025-04-17 00:42:27.784716 | orchestrator |  "data": "osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343", 2025-04-17 00:42:27.785229 | orchestrator |  "data_vg": "ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343" 2025-04-17 00:42:27.786171 | orchestrator |  }, 2025-04-17 00:42:27.786762 | orchestrator |  { 2025-04-17 00:42:27.787139 | orchestrator |  "data": "osd-block-b69f2859-f86c-57c9-a956-28222694e166", 2025-04-17 00:42:27.787873 | orchestrator |  "data_vg": "ceph-b69f2859-f86c-57c9-a956-28222694e166" 2025-04-17 00:42:27.788146 | orchestrator |  } 2025-04-17 00:42:27.788798 | orchestrator |  ] 2025-04-17 00:42:27.789018 | orchestrator |  } 2025-04-17 00:42:27.789439 | orchestrator | } 2025-04-17 00:42:27.789813 | orchestrator | 2025-04-17 00:42:27.790523 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-04-17 00:42:27.791077 | orchestrator | Thursday 17 April 2025 00:42:27 +0000 (0:00:00.431) 0:00:28.119 ******** 2025-04-17 00:42:29.122263 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-04-17 00:42:29.122966 | orchestrator | 2025-04-17 00:42:29.124098 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-04-17 00:42:29.124734 | orchestrator | 2025-04-17 00:42:29.125439 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-17 00:42:29.126102 | orchestrator | Thursday 17 April 2025 00:42:29 +0000 (0:00:01.346) 0:00:29.466 ******** 2025-04-17 00:42:29.353957 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-04-17 00:42:29.354923 | orchestrator | 2025-04-17 00:42:29.355793 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-17 00:42:29.356409 | orchestrator | Thursday 17 April 2025 00:42:29 +0000 (0:00:00.234) 0:00:29.701 ******** 2025-04-17 00:42:29.584820 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:42:29.586220 | orchestrator | 2025-04-17 00:42:29.586279 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:29.588816 | orchestrator | Thursday 17 April 2025 00:42:29 +0000 (0:00:00.229) 0:00:29.930 ******** 2025-04-17 00:42:30.280682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-04-17 00:42:30.281554 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-04-17 00:42:30.283035 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-04-17 00:42:30.286536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-04-17 00:42:30.287153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop4) 2025-04-17 00:42:30.287887 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-04-17 00:42:30.288524 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-04-17 00:42:30.289437 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-04-17 00:42:30.289574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-04-17 00:42:30.290233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-04-17 00:42:30.291361 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-04-17 00:42:30.291516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-04-17 00:42:30.291781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-04-17 00:42:30.292529 | orchestrator | 2025-04-17 00:42:30.292811 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:30.293199 | orchestrator | Thursday 17 April 2025 00:42:30 +0000 (0:00:00.696) 0:00:30.626 ******** 2025-04-17 00:42:30.478253 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:30.480421 | orchestrator | 2025-04-17 00:42:30.481235 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:30.482048 | orchestrator | Thursday 17 April 2025 00:42:30 +0000 (0:00:00.195) 0:00:30.822 ******** 2025-04-17 00:42:30.674571 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:30.676224 | orchestrator | 2025-04-17 00:42:30.676883 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:30.677720 | orchestrator | Thursday 17 April 2025 00:42:30 +0000 (0:00:00.199) 0:00:31.021 ******** 2025-04-17 00:42:30.873390 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:30.873944 | orchestrator | 2025-04-17 00:42:30.874798 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:30.875947 | orchestrator | Thursday 17 April 2025 00:42:30 +0000 (0:00:00.197) 0:00:31.219 ******** 2025-04-17 00:42:31.074560 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:31.076112 | orchestrator | 2025-04-17 00:42:31.077102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:31.077946 | orchestrator | Thursday 17 April 2025 00:42:31 +0000 (0:00:00.202) 0:00:31.421 ******** 2025-04-17 00:42:31.281747 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:31.282920 | orchestrator | 2025-04-17 00:42:31.283443 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:31.284579 | orchestrator | Thursday 17 April 2025 00:42:31 +0000 (0:00:00.206) 0:00:31.628 ******** 2025-04-17 00:42:31.481732 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:31.482327 | orchestrator | 2025-04-17 00:42:31.482760 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:31.483261 | orchestrator | Thursday 17 April 2025 00:42:31 +0000 (0:00:00.200) 0:00:31.829 ******** 2025-04-17 00:42:31.679099 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:31.679351 
| orchestrator | 2025-04-17 00:42:31.679810 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:31.681042 | orchestrator | Thursday 17 April 2025 00:42:31 +0000 (0:00:00.197) 0:00:32.026 ******** 2025-04-17 00:42:31.877343 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:31.877551 | orchestrator | 2025-04-17 00:42:31.878680 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:31.879610 | orchestrator | Thursday 17 April 2025 00:42:31 +0000 (0:00:00.196) 0:00:32.222 ******** 2025-04-17 00:42:32.485456 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96) 2025-04-17 00:42:32.485787 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96) 2025-04-17 00:42:32.485821 | orchestrator | 2025-04-17 00:42:32.486343 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:32.487043 | orchestrator | Thursday 17 April 2025 00:42:32 +0000 (0:00:00.610) 0:00:32.832 ******** 2025-04-17 00:42:33.246434 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c4c813ed-e09b-49ac-b96f-625695efceb2) 2025-04-17 00:42:33.246660 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c4c813ed-e09b-49ac-b96f-625695efceb2) 2025-04-17 00:42:33.246831 | orchestrator | 2025-04-17 00:42:33.247781 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:33.248325 | orchestrator | Thursday 17 April 2025 00:42:33 +0000 (0:00:00.757) 0:00:33.590 ******** 2025-04-17 00:42:33.681227 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6309ce49-a4ed-4da7-82b1-29aa79f26650) 2025-04-17 00:42:33.681624 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6309ce49-a4ed-4da7-82b1-29aa79f26650) 2025-04-17 00:42:33.681667 | orchestrator | 2025-04-17 00:42:33.682455 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:33.683067 | orchestrator | Thursday 17 April 2025 00:42:33 +0000 (0:00:00.437) 0:00:34.027 ******** 2025-04-17 00:42:34.140614 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_42d2e0a2-f124-4e98-b4f2-6b7948e65700) 2025-04-17 00:42:34.141306 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_42d2e0a2-f124-4e98-b4f2-6b7948e65700) 2025-04-17 00:42:34.142688 | orchestrator | 2025-04-17 00:42:34.143758 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 00:42:34.143791 | orchestrator | Thursday 17 April 2025 00:42:34 +0000 (0:00:00.458) 0:00:34.486 ******** 2025-04-17 00:42:34.493106 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-17 00:42:34.493769 | orchestrator | 2025-04-17 00:42:34.497664 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:34.885908 | orchestrator | Thursday 17 April 2025 00:42:34 +0000 (0:00:00.352) 0:00:34.839 ******** 2025-04-17 00:42:34.886220 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-04-17 00:42:34.886321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-04-17 00:42:34.888259 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-04-17 00:42:34.891347 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-04-17 00:42:34.896362 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-04-17 00:42:34.896424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-04-17 00:42:34.897219 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-04-17 00:42:34.897977 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-04-17 00:42:34.898812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-04-17 00:42:34.899242 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-04-17 00:42:34.899953 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-04-17 00:42:34.900368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-04-17 00:42:34.900831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-04-17 00:42:34.901400 | orchestrator | 2025-04-17 00:42:34.901829 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:34.902405 | orchestrator | Thursday 17 April 2025 00:42:34 +0000 (0:00:00.392) 0:00:35.232 ******** 2025-04-17 00:42:35.087434 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:35.088234 | orchestrator | 2025-04-17 00:42:35.088745 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:35.088873 | orchestrator | Thursday 17 April 2025 00:42:35 +0000 (0:00:00.202) 0:00:35.434 ******** 2025-04-17 00:42:35.284846 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:35.285821 | orchestrator | 2025-04-17 00:42:35.285924 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:35.286159 | orchestrator | Thursday 17 April 2025 00:42:35 +0000 (0:00:00.196) 0:00:35.631 ******** 2025-04-17 00:42:35.487791 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:35.488062 | orchestrator | 2025-04-17 00:42:35.488773 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:35.489348 | orchestrator | Thursday 17 April 2025 00:42:35 +0000 (0:00:00.203) 0:00:35.834 ******** 2025-04-17 00:42:35.686833 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:35.687856 | orchestrator | 2025-04-17 00:42:35.688385 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:35.689342 | orchestrator | Thursday 17 April 2025 00:42:35 +0000 (0:00:00.198) 0:00:36.033 ******** 2025-04-17 00:42:36.252239 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:36.252686 | orchestrator | 2025-04-17 00:42:36.253134 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:36.253529 | orchestrator | Thursday 17 April 2025 00:42:36 +0000 (0:00:00.565) 0:00:36.598 ******** 2025-04-17 00:42:36.451414 | orchestrator | skipping: [testbed-node-5] 2025-04-17 
00:42:36.452030 | orchestrator | 2025-04-17 00:42:36.452541 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:36.452883 | orchestrator | Thursday 17 April 2025 00:42:36 +0000 (0:00:00.199) 0:00:36.798 ******** 2025-04-17 00:42:36.653598 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:36.654262 | orchestrator | 2025-04-17 00:42:36.655225 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:36.655688 | orchestrator | Thursday 17 April 2025 00:42:36 +0000 (0:00:00.202) 0:00:37.000 ******** 2025-04-17 00:42:36.841942 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:36.842415 | orchestrator | 2025-04-17 00:42:36.843613 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:36.844378 | orchestrator | Thursday 17 April 2025 00:42:36 +0000 (0:00:00.188) 0:00:37.189 ******** 2025-04-17 00:42:37.466481 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-04-17 00:42:37.467949 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-04-17 00:42:37.470223 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-04-17 00:42:37.470954 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-04-17 00:42:37.470990 | orchestrator | 2025-04-17 00:42:37.471033 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:37.471055 | orchestrator | Thursday 17 April 2025 00:42:37 +0000 (0:00:00.623) 0:00:37.812 ******** 2025-04-17 00:42:37.662303 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:37.663042 | orchestrator | 2025-04-17 00:42:37.663091 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:37.663460 | orchestrator | Thursday 17 April 2025 00:42:37 +0000 (0:00:00.196) 0:00:38.008 ******** 2025-04-17 00:42:37.869478 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:37.869657 | orchestrator | 2025-04-17 00:42:37.870500 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:37.871569 | orchestrator | Thursday 17 April 2025 00:42:37 +0000 (0:00:00.207) 0:00:38.216 ******** 2025-04-17 00:42:38.080395 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:38.080574 | orchestrator | 2025-04-17 00:42:38.081179 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 00:42:38.084143 | orchestrator | Thursday 17 April 2025 00:42:38 +0000 (0:00:00.210) 0:00:38.426 ******** 2025-04-17 00:42:38.270061 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:38.270699 | orchestrator | 2025-04-17 00:42:38.271497 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-17 00:42:38.272543 | orchestrator | Thursday 17 April 2025 00:42:38 +0000 (0:00:00.190) 0:00:38.617 ******** 2025-04-17 00:42:38.445174 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-04-17 00:42:38.446399 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-04-17 00:42:38.446525 | orchestrator | 2025-04-17 00:42:38.447299 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-04-17 00:42:38.449725 | orchestrator | Thursday 17 April 2025 00:42:38 +0000 (0:00:00.174) 0:00:38.791 ******** 2025-04-17 00:42:38.639866 | 
orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:38.640084 | orchestrator | 2025-04-17 00:42:38.640611 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-17 00:42:38.641341 | orchestrator | Thursday 17 April 2025 00:42:38 +0000 (0:00:00.194) 0:00:38.986 ******** 2025-04-17 00:42:38.949656 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:38.949830 | orchestrator | 2025-04-17 00:42:38.950167 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-17 00:42:38.950951 | orchestrator | Thursday 17 April 2025 00:42:38 +0000 (0:00:00.309) 0:00:39.295 ******** 2025-04-17 00:42:39.094962 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:39.095680 | orchestrator | 2025-04-17 00:42:39.095737 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-17 00:42:39.096665 | orchestrator | Thursday 17 April 2025 00:42:39 +0000 (0:00:00.142) 0:00:39.438 ******** 2025-04-17 00:42:39.231395 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:42:39.231690 | orchestrator | 2025-04-17 00:42:39.232190 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-17 00:42:39.232820 | orchestrator | Thursday 17 April 2025 00:42:39 +0000 (0:00:00.138) 0:00:39.577 ******** 2025-04-17 00:42:39.417111 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'}}) 2025-04-17 00:42:39.417288 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'af980f31-aa48-52cf-851d-a23b8b791ab9'}}) 2025-04-17 00:42:39.417588 | orchestrator | 2025-04-17 00:42:39.417942 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-04-17 00:42:39.418930 | orchestrator | Thursday 17 April 2025 00:42:39 +0000 (0:00:00.185) 0:00:39.763 ******** 2025-04-17 00:42:39.578852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'}})  2025-04-17 00:42:39.579672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'af980f31-aa48-52cf-851d-a23b8b791ab9'}})  2025-04-17 00:42:39.580983 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:39.581205 | orchestrator | 2025-04-17 00:42:39.581678 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-04-17 00:42:39.582410 | orchestrator | Thursday 17 April 2025 00:42:39 +0000 (0:00:00.162) 0:00:39.925 ******** 2025-04-17 00:42:39.753887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'}})  2025-04-17 00:42:39.755021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'af980f31-aa48-52cf-851d-a23b8b791ab9'}})  2025-04-17 00:42:39.756375 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:39.756850 | orchestrator | 2025-04-17 00:42:39.757872 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-04-17 00:42:39.759182 | orchestrator | Thursday 17 April 2025 00:42:39 +0000 (0:00:00.174) 0:00:40.099 ******** 2025-04-17 00:42:39.916179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'}})  2025-04-17 00:42:39.917154 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'af980f31-aa48-52cf-851d-a23b8b791ab9'}})  2025-04-17 00:42:39.917201 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:39.917988 | orchestrator | 2025-04-17 00:42:39.919595 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-04-17 00:42:39.920638 | orchestrator | Thursday 17 April 2025 00:42:39 +0000 (0:00:00.162) 0:00:40.262 ******** 2025-04-17 00:42:40.067928 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:42:40.070138 | orchestrator | 2025-04-17 00:42:40.071926 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-04-17 00:42:40.213595 | orchestrator | Thursday 17 April 2025 00:42:40 +0000 (0:00:00.149) 0:00:40.412 ******** 2025-04-17 00:42:40.213762 | orchestrator | ok: [testbed-node-5] 2025-04-17 00:42:40.213878 | orchestrator | 2025-04-17 00:42:40.214661 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-04-17 00:42:40.215692 | orchestrator | Thursday 17 April 2025 00:42:40 +0000 (0:00:00.145) 0:00:40.558 ******** 2025-04-17 00:42:40.348808 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:40.349904 | orchestrator | 2025-04-17 00:42:40.350125 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-04-17 00:42:40.350405 | orchestrator | Thursday 17 April 2025 00:42:40 +0000 (0:00:00.137) 0:00:40.695 ******** 2025-04-17 00:42:40.491965 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:40.492747 | orchestrator | 2025-04-17 00:42:40.493877 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-04-17 00:42:40.494687 | orchestrator | Thursday 17 April 2025 00:42:40 +0000 (0:00:00.141) 0:00:40.837 ******** 2025-04-17 00:42:40.815759 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:40.816577 | orchestrator | 2025-04-17 00:42:40.817855 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-04-17 00:42:40.819164 | orchestrator | Thursday 17 April 2025 00:42:40 +0000 (0:00:00.324) 0:00:41.161 ******** 2025-04-17 00:42:40.981486 | orchestrator | ok: [testbed-node-5] => { 2025-04-17 00:42:40.982350 | orchestrator |  "ceph_osd_devices": { 2025-04-17 00:42:40.983508 | orchestrator |  "sdb": { 2025-04-17 00:42:40.984770 | orchestrator |  "osd_lvm_uuid": "a9d35e4b-2444-59e0-b6b9-5664c21b8a9c" 2025-04-17 00:42:40.986433 | orchestrator |  }, 2025-04-17 00:42:40.986845 | orchestrator |  "sdc": { 2025-04-17 00:42:40.987920 | orchestrator |  "osd_lvm_uuid": "af980f31-aa48-52cf-851d-a23b8b791ab9" 2025-04-17 00:42:40.988601 | orchestrator |  } 2025-04-17 00:42:40.989383 | orchestrator |  } 2025-04-17 00:42:40.989829 | orchestrator | } 2025-04-17 00:42:40.990343 | orchestrator | 2025-04-17 00:42:40.990976 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-04-17 00:42:40.992300 | orchestrator | Thursday 17 April 2025 00:42:40 +0000 (0:00:00.166) 0:00:41.327 ******** 2025-04-17 00:42:41.119135 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:41.119596 | orchestrator | 2025-04-17 00:42:41.120954 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-04-17 00:42:41.121748 | orchestrator | Thursday 17 April 2025 00:42:41 +0000 (0:00:00.137) 0:00:41.465 ******** 2025-04-17 
00:42:41.271557 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:41.272379 | orchestrator | 2025-04-17 00:42:41.273085 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-04-17 00:42:41.273926 | orchestrator | Thursday 17 April 2025 00:42:41 +0000 (0:00:00.152) 0:00:41.617 ******** 2025-04-17 00:42:41.413761 | orchestrator | skipping: [testbed-node-5] 2025-04-17 00:42:41.414147 | orchestrator | 2025-04-17 00:42:41.415266 | orchestrator | TASK [Print configuration data] ************************************************ 2025-04-17 00:42:41.417262 | orchestrator | Thursday 17 April 2025 00:42:41 +0000 (0:00:00.142) 0:00:41.760 ******** 2025-04-17 00:42:41.702549 | orchestrator | changed: [testbed-node-5] => { 2025-04-17 00:42:41.704051 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-04-17 00:42:41.704164 | orchestrator |  "ceph_osd_devices": { 2025-04-17 00:42:41.705410 | orchestrator |  "sdb": { 2025-04-17 00:42:41.706645 | orchestrator |  "osd_lvm_uuid": "a9d35e4b-2444-59e0-b6b9-5664c21b8a9c" 2025-04-17 00:42:41.707259 | orchestrator |  }, 2025-04-17 00:42:41.708490 | orchestrator |  "sdc": { 2025-04-17 00:42:41.709477 | orchestrator |  "osd_lvm_uuid": "af980f31-aa48-52cf-851d-a23b8b791ab9" 2025-04-17 00:42:41.710629 | orchestrator |  } 2025-04-17 00:42:41.711356 | orchestrator |  }, 2025-04-17 00:42:41.712263 | orchestrator |  "lvm_volumes": [ 2025-04-17 00:42:41.712844 | orchestrator |  { 2025-04-17 00:42:41.714181 | orchestrator |  "data": "osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c", 2025-04-17 00:42:41.714264 | orchestrator |  "data_vg": "ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c" 2025-04-17 00:42:41.715149 | orchestrator |  }, 2025-04-17 00:42:41.715709 | orchestrator |  { 2025-04-17 00:42:41.716291 | orchestrator |  "data": "osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9", 2025-04-17 00:42:41.717128 | orchestrator |  "data_vg": "ceph-af980f31-aa48-52cf-851d-a23b8b791ab9" 2025-04-17 00:42:41.717691 | orchestrator |  } 2025-04-17 00:42:41.718566 | orchestrator |  ] 2025-04-17 00:42:41.719113 | orchestrator |  } 2025-04-17 00:42:41.719939 | orchestrator | } 2025-04-17 00:42:41.720320 | orchestrator | 2025-04-17 00:42:41.720982 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-04-17 00:42:41.721543 | orchestrator | Thursday 17 April 2025 00:42:41 +0000 (0:00:00.288) 0:00:42.049 ******** 2025-04-17 00:42:42.843596 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-04-17 00:42:42.843802 | orchestrator | 2025-04-17 00:42:42.844346 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 00:42:42.845957 | orchestrator | 2025-04-17 00:42:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-17 00:42:42.846180 | orchestrator | 2025-04-17 00:42:42 | INFO  | Please wait and do not abort execution. 
2025-04-17 00:42:42.846247 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-04-17 00:42:42.846313 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-04-17 00:42:42.847264 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-04-17 00:42:42.848338 | orchestrator | 2025-04-17 00:42:42.848878 | orchestrator | 2025-04-17 00:42:42.850841 | orchestrator | 2025-04-17 00:42:42.851031 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-17 00:42:42.851181 | orchestrator | Thursday 17 April 2025 00:42:42 +0000 (0:00:01.137) 0:00:43.187 ******** 2025-04-17 00:42:42.851712 | orchestrator | =============================================================================== 2025-04-17 00:42:42.853123 | orchestrator | Write configuration file ------------------------------------------------ 4.60s 2025-04-17 00:42:42.853946 | orchestrator | Add known links to the list of available block devices ------------------ 1.74s 2025-04-17 00:42:42.854151 | orchestrator | Add known partitions to the list of available block devices ------------- 1.46s 2025-04-17 00:42:42.854369 | orchestrator | Print configuration data ------------------------------------------------ 1.00s 2025-04-17 00:42:42.855085 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.81s 2025-04-17 00:42:42.855787 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s 2025-04-17 00:42:42.856468 | orchestrator | Generate DB VG names ---------------------------------------------------- 0.79s 2025-04-17 00:42:42.856660 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2025-04-17 00:42:42.856853 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2025-04-17 00:42:42.857354 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2025-04-17 00:42:42.857592 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.67s 2025-04-17 00:42:42.857990 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2025-04-17 00:42:42.858202 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.63s 2025-04-17 00:42:42.858620 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s 2025-04-17 00:42:42.858650 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2025-04-17 00:42:42.859769 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.59s 2025-04-17 00:42:42.859963 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s 2025-04-17 00:42:42.860426 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.57s 2025-04-17 00:42:42.860548 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s 2025-04-17 00:42:42.861514 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.56s 2025-04-17 00:42:54.967584 | orchestrator | 2025-04-17 00:42:54 | INFO  | Task 921ab7bb-a9d0-4143-88db-b903d79f1e0e is running in background. Output coming soon. 
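The job then waits on a background task for roughly an hour before ceph-create-lvm-devices is prepared. What the "Write configuration file" handler persisted on the manager for each node is the pair of structures dumped by "Print configuration data" above; rendered as YAML with the testbed-node-5 values (the target file path is not shown in the log):

    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: a9d35e4b-2444-59e0-b6b9-5664c21b8a9c
      sdc:
        osd_lvm_uuid: af980f31-aa48-52cf-851d-a23b8b791ab9
    lvm_volumes:
      - data: osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c
        data_vg: ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c
      - data: osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9
        data_vg: ceph-af980f31-aa48-52cf-851d-a23b8b791ab9

lvm_volumes in this shape is what ceph-ansible expects for ceph-volume based OSDs: data names the logical volume and data_vg the volume group it lives in, and both must exist before the OSDs are deployed. Creating them is the job of the ceph-create-lvm-devices play that follows.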
2025-04-17 01:42:57.175773 | orchestrator | 2025-04-17 01:42:57 | INFO  | Task 2c021a98-b0e6-40fe-8056-c32bc0b256cd (ceph-create-lvm-devices) was prepared for execution.
2025-04-17 01:43:00.096413 | orchestrator | 2025-04-17 01:42:57 | INFO  | It takes a moment until task 2c021a98-b0e6-40fe-8056-c32bc0b256cd (ceph-create-lvm-devices) has been started and output is visible here.
2025-04-17 01:43:00.096656 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-04-17 01:43:00.580342 | orchestrator |
2025-04-17 01:43:00.581107 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-04-17 01:43:00.581695 | orchestrator |
2025-04-17 01:43:00.584180 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-04-17 01:43:00.584700 | orchestrator | Thursday 17 April 2025 01:43:00 +0000 (0:00:00.418) 0:00:00.418 ********
2025-04-17 01:43:00.819984 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-04-17 01:43:00.820335 | orchestrator |
2025-04-17 01:43:00.821598 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-04-17 01:43:00.822367 | orchestrator | Thursday 17 April 2025 01:43:00 +0000 (0:00:00.241) 0:00:00.660 ********
2025-04-17 01:43:01.039730 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:43:01.039964 | orchestrator |
2025-04-17 01:43:01.040170 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:01.041094 | orchestrator | Thursday 17 April 2025 01:43:01 +0000 (0:00:00.219) 0:00:00.879 ********
2025-04-17 01:43:01.730981 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-04-17 01:43:01.731568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-04-17 01:43:01.732347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-04-17 01:43:01.733231 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-04-17 01:43:01.733682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-04-17 01:43:01.734296 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-04-17 01:43:01.734838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-04-17 01:43:01.736300 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-04-17 01:43:01.738572 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-04-17 01:43:01.739052 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-04-17 01:43:01.740031 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-04-17 01:43:01.740798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-04-17 01:43:01.741291 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-04-17 01:43:01.741897 | orchestrator |
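The include pattern above runs /ansible/tasks/_add-device-links.yml once per device reported in the hardware facts; the included file itself is not part of this log. A minimal sketch of what each inclusion plausibly does, with the list variable name assumed (the by-id link names come from ansible_devices.<device>.links.ids, which is also where the scsi-*/ata-* items printed below originate):

    - name: Add known links to the list of available block devices
      ansible.builtin.set_fact:
        # "item" is the device name (loop0, sda, ...) handed down by the
        # loop that includes this file; links.ids holds the names of the
        # /dev/disk/by-id symlinks, so devices can be matched by stable names.
        _available_block_devices: >-
          {{ (_available_block_devices | default([]))
             + ansible_devices[item].links.ids }}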
2025-04-17 01:43:01.742575 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:01.742960 | orchestrator | Thursday 17 April 2025 01:43:01 +0000 (0:00:00.688) 0:00:01.568 ********
2025-04-17 01:43:01.925040 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:01.927211 | orchestrator |
2025-04-17 01:43:01.927274 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:01.927308 | orchestrator | Thursday 17 April 2025 01:43:01 +0000 (0:00:00.193) 0:00:01.761 ********
2025-04-17 01:43:02.147134 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:02.147408 | orchestrator |
2025-04-17 01:43:02.148526 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:02.149127 | orchestrator | Thursday 17 April 2025 01:43:02 +0000 (0:00:00.224) 0:00:01.986 ********
2025-04-17 01:43:02.338574 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:02.340259 | orchestrator |
2025-04-17 01:43:02.534330 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:02.534558 | orchestrator | Thursday 17 April 2025 01:43:02 +0000 (0:00:00.190) 0:00:02.177 ********
2025-04-17 01:43:02.534604 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:02.535065 | orchestrator |
2025-04-17 01:43:02.537526 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:02.538742 | orchestrator | Thursday 17 April 2025 01:43:02 +0000 (0:00:00.197) 0:00:02.374 ********
2025-04-17 01:43:02.728837 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:02.729281 | orchestrator |
2025-04-17 01:43:02.730356 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:02.731261 | orchestrator | Thursday 17 April 2025 01:43:02 +0000 (0:00:00.194) 0:00:02.569 ********
2025-04-17 01:43:02.930811 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:02.931134 | orchestrator |
2025-04-17 01:43:02.931995 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:02.933079 | orchestrator | Thursday 17 April 2025 01:43:02 +0000 (0:00:00.201) 0:00:02.770 ********
2025-04-17 01:43:03.118244 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:03.118532 | orchestrator |
2025-04-17 01:43:03.119512 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:03.122168 | orchestrator | Thursday 17 April 2025 01:43:03 +0000 (0:00:00.186) 0:00:02.957 ********
2025-04-17 01:43:03.316650 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:03.317130 | orchestrator |
2025-04-17 01:43:03.320135 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:03.903697 | orchestrator | Thursday 17 April 2025 01:43:03 +0000 (0:00:00.198) 0:00:03.155 ********
2025-04-17 01:43:03.903873 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6)
2025-04-17 01:43:03.904340 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6)
2025-04-17 01:43:03.905046 | orchestrator |
2025-04-17 01:43:03.908087 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:04.794784 | orchestrator | Thursday 17 April 2025 01:43:03 +0000 (0:00:00.587) 0:00:03.743 ********
2025-04-17 01:43:04.794956 | orchestrator | ok: [testbed-node-3] =>
(item=scsi-0QEMU_QEMU_HARDDISK_8bcc068e-17b6-4e9f-accd-8ac12579d6f0) 2025-04-17 01:43:04.795154 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8bcc068e-17b6-4e9f-accd-8ac12579d6f0) 2025-04-17 01:43:04.796135 | orchestrator | 2025-04-17 01:43:04.797073 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 01:43:04.798407 | orchestrator | Thursday 17 April 2025 01:43:04 +0000 (0:00:00.888) 0:00:04.632 ******** 2025-04-17 01:43:05.201643 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e9224846-b1ba-4847-a73a-6715887089fb) 2025-04-17 01:43:05.203018 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e9224846-b1ba-4847-a73a-6715887089fb) 2025-04-17 01:43:05.203600 | orchestrator | 2025-04-17 01:43:05.205200 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 01:43:05.640417 | orchestrator | Thursday 17 April 2025 01:43:05 +0000 (0:00:00.410) 0:00:05.042 ******** 2025-04-17 01:43:05.640673 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0367f9b0-3a71-47a7-a8bd-9e2816c4d242) 2025-04-17 01:43:05.640791 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0367f9b0-3a71-47a7-a8bd-9e2816c4d242) 2025-04-17 01:43:05.641822 | orchestrator | 2025-04-17 01:43:05.645563 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 01:43:05.968251 | orchestrator | Thursday 17 April 2025 01:43:05 +0000 (0:00:00.437) 0:00:05.480 ******** 2025-04-17 01:43:05.968424 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-17 01:43:05.969124 | orchestrator | 2025-04-17 01:43:05.969628 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:05.970371 | orchestrator | Thursday 17 April 2025 01:43:05 +0000 (0:00:00.328) 0:00:05.808 ******** 2025-04-17 01:43:06.436992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-04-17 01:43:06.437865 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-04-17 01:43:06.437923 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-04-17 01:43:06.438536 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-04-17 01:43:06.438791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-04-17 01:43:06.442110 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-04-17 01:43:06.442191 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-04-17 01:43:06.442210 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-04-17 01:43:06.442226 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-04-17 01:43:06.442241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-04-17 01:43:06.442257 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-04-17 01:43:06.442277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=sdd) 2025-04-17 01:43:06.442366 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-04-17 01:43:06.443029 | orchestrator | 2025-04-17 01:43:06.443502 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:06.443895 | orchestrator | Thursday 17 April 2025 01:43:06 +0000 (0:00:00.468) 0:00:06.277 ******** 2025-04-17 01:43:06.637818 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:43:06.638241 | orchestrator | 2025-04-17 01:43:06.638284 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:06.638423 | orchestrator | Thursday 17 April 2025 01:43:06 +0000 (0:00:00.201) 0:00:06.478 ******** 2025-04-17 01:43:06.831679 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:43:06.831942 | orchestrator | 2025-04-17 01:43:06.832256 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:06.832930 | orchestrator | Thursday 17 April 2025 01:43:06 +0000 (0:00:00.194) 0:00:06.672 ******** 2025-04-17 01:43:07.015195 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:43:07.015982 | orchestrator | 2025-04-17 01:43:07.016789 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:07.017484 | orchestrator | Thursday 17 April 2025 01:43:07 +0000 (0:00:00.183) 0:00:06.855 ******** 2025-04-17 01:43:07.210812 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:43:07.211044 | orchestrator | 2025-04-17 01:43:07.211423 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:07.211894 | orchestrator | Thursday 17 April 2025 01:43:07 +0000 (0:00:00.193) 0:00:07.049 ******** 2025-04-17 01:43:07.753486 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:43:07.754355 | orchestrator | 2025-04-17 01:43:07.756120 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:07.756158 | orchestrator | Thursday 17 April 2025 01:43:07 +0000 (0:00:00.544) 0:00:07.593 ******** 2025-04-17 01:43:07.951614 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:43:07.951991 | orchestrator | 2025-04-17 01:43:07.953292 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:07.953825 | orchestrator | Thursday 17 April 2025 01:43:07 +0000 (0:00:00.198) 0:00:07.792 ******** 2025-04-17 01:43:08.145000 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:43:08.145900 | orchestrator | 2025-04-17 01:43:08.150704 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:08.151345 | orchestrator | Thursday 17 April 2025 01:43:08 +0000 (0:00:00.191) 0:00:07.984 ******** 2025-04-17 01:43:08.360617 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:43:08.361358 | orchestrator | 2025-04-17 01:43:08.361403 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:08.361886 | orchestrator | Thursday 17 April 2025 01:43:08 +0000 (0:00:00.208) 0:00:08.193 ******** 2025-04-17 01:43:08.998068 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-04-17 01:43:08.998302 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-04-17 01:43:08.999407 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-04-17 01:43:09.000102 | 
orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-04-17 01:43:09.003981 | orchestrator | 2025-04-17 01:43:09.004606 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:09.005512 | orchestrator | Thursday 17 April 2025 01:43:08 +0000 (0:00:00.645) 0:00:08.838 ******** 2025-04-17 01:43:09.202344 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:43:09.203113 | orchestrator | 2025-04-17 01:43:09.204352 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:09.208155 | orchestrator | Thursday 17 April 2025 01:43:09 +0000 (0:00:00.204) 0:00:09.042 ******** 2025-04-17 01:43:09.403199 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:43:09.404642 | orchestrator | 2025-04-17 01:43:09.407576 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:09.408728 | orchestrator | Thursday 17 April 2025 01:43:09 +0000 (0:00:00.199) 0:00:09.242 ******** 2025-04-17 01:43:09.616325 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:43:09.616576 | orchestrator | 2025-04-17 01:43:09.618213 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:09.619894 | orchestrator | Thursday 17 April 2025 01:43:09 +0000 (0:00:00.212) 0:00:09.455 ******** 2025-04-17 01:43:09.798744 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:43:09.799001 | orchestrator | 2025-04-17 01:43:09.799695 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-04-17 01:43:09.800873 | orchestrator | Thursday 17 April 2025 01:43:09 +0000 (0:00:00.183) 0:00:09.638 ******** 2025-04-17 01:43:09.937712 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:43:09.938493 | orchestrator | 2025-04-17 01:43:09.939976 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-04-17 01:43:09.944245 | orchestrator | Thursday 17 April 2025 01:43:09 +0000 (0:00:00.139) 0:00:09.778 ******** 2025-04-17 01:43:10.143917 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '567181ad-d304-5248-b248-9710ecf6a56a'}}) 2025-04-17 01:43:10.144179 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'}}) 2025-04-17 01:43:10.145073 | orchestrator | 2025-04-17 01:43:10.146173 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-04-17 01:43:10.152125 | orchestrator | Thursday 17 April 2025 01:43:10 +0000 (0:00:00.205) 0:00:09.983 ******** 2025-04-17 01:43:12.470540 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'}) 2025-04-17 01:43:12.470934 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'}) 2025-04-17 01:43:12.471752 | orchestrator | 2025-04-17 01:43:12.472564 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-04-17 01:43:12.473275 | orchestrator | Thursday 17 April 2025 01:43:12 +0000 (0:00:02.327) 0:00:12.311 ******** 2025-04-17 01:43:12.619703 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 
'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:12.620054 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:12.620541 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:12.621746 | orchestrator |
2025-04-17 01:43:12.622795 | orchestrator | TASK [Create block LVs] ********************************************************
2025-04-17 01:43:12.622999 | orchestrator | Thursday 17 April 2025 01:43:12 +0000 (0:00:00.147) 0:00:12.458 ********
2025-04-17 01:43:13.983296 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:13.983822 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:13.984624 | orchestrator |
2025-04-17 01:43:13.985738 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-04-17 01:43:13.985884 | orchestrator | Thursday 17 April 2025 01:43:13 +0000 (0:00:01.364) 0:00:13.823 ********
2025-04-17 01:43:14.127397 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:14.128218 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:14.128302 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:14.128893 | orchestrator |
2025-04-17 01:43:14.129413 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-04-17 01:43:14.129481 | orchestrator | Thursday 17 April 2025 01:43:14 +0000 (0:00:00.145) 0:00:13.968 ********
2025-04-17 01:43:14.248168 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:14.248325 | orchestrator |
2025-04-17 01:43:14.249786 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-04-17 01:43:14.250243 | orchestrator | Thursday 17 April 2025 01:43:14 +0000 (0:00:00.120) 0:00:14.089 ********
2025-04-17 01:43:14.375357 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:14.375762 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:14.375804 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:14.376040 | orchestrator |
2025-04-17 01:43:14.376072 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-04-17 01:43:14.376460 | orchestrator | Thursday 17 April 2025 01:43:14 +0000 (0:00:00.127) 0:00:14.216 ********
2025-04-17 01:43:14.503819 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:14.505654 | orchestrator |
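"Create block VGs" and "Create block LVs" above are where the generated configuration finally touches the disks: per lvm_volumes entry, one volume group on the backing device and one logical volume spanning it. A minimal sketch with the community.general LVM modules, using the first testbed-node-3 entry and assuming /dev/sdb as the physical volume (the play resolves the PV from ceph_osd_devices, and whether it uses these modules or plain vgcreate/lvcreate is not visible here):

    - name: Create block VGs
      community.general.lvg:
        # VG named after the deterministic OSD UUID, as logged above.
        vg: ceph-567181ad-d304-5248-b248-9710ecf6a56a
        pvs: /dev/sdb

    - name: Create block LVs
      community.general.lvol:
        # A single LV taking all free space in the VG.
        vg: ceph-567181ad-d304-5248-b248-9710ecf6a56a
        lv: osd-block-567181ad-d304-5248-b248-9710ecf6a56a
        size: 100%FREE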
2025-04-17 01:43:14.506596 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-04-17 01:43:14.506610 | orchestrator | Thursday 17 April 2025 01:43:14 +0000 (0:00:00.127) 0:00:14.344 ********
2025-04-17 01:43:14.652145 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:14.652344 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:14.652967 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:14.653295 | orchestrator |
2025-04-17 01:43:14.653787 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-04-17 01:43:14.654219 | orchestrator | Thursday 17 April 2025 01:43:14 +0000 (0:00:00.149) 0:00:14.493 ********
2025-04-17 01:43:14.895144 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:14.895609 | orchestrator |
2025-04-17 01:43:14.895626 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-04-17 01:43:14.896042 | orchestrator | Thursday 17 April 2025 01:43:14 +0000 (0:00:00.237) 0:00:14.731 ********
2025-04-17 01:43:15.036726 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:15.036926 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:15.036957 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:15.037619 | orchestrator |
2025-04-17 01:43:15.039071 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-04-17 01:43:15.039197 | orchestrator | Thursday 17 April 2025 01:43:15 +0000 (0:00:00.113) 0:00:14.877 ********
2025-04-17 01:43:15.150485 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:43:15.150686 | orchestrator |
2025-04-17 01:43:15.151461 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-04-17 01:43:15.152508 | orchestrator | Thursday 17 April 2025 01:43:15 +0000 (0:00:00.113) 0:00:14.991 ********
2025-04-17 01:43:15.323735 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:15.323949 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:15.324513 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:15.324547 | orchestrator |
2025-04-17 01:43:15.325970 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-04-17 01:43:15.502976 | orchestrator | Thursday 17 April 2025 01:43:15 +0000 (0:00:00.172) 0:00:15.164 ********
2025-04-17 01:43:15.503146 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:15.503676 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:15.504323 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:15.505004 | orchestrator |
2025-04-17 01:43:15.506620 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-04-17
2025-04-17 01:43:15.679489 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:15.679739 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:15.680398 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:15.681165 | orchestrator |
2025-04-17 01:43:15.681926 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-04-17 01:43:15.682620 | orchestrator | Thursday 17 April 2025 01:43:15 +0000 (0:00:00.174) 0:00:15.518 ********
2025-04-17 01:43:15.812527 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:15.812894 | orchestrator |
2025-04-17 01:43:15.814101 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-04-17 01:43:15.817341 | orchestrator | Thursday 17 April 2025 01:43:15 +0000 (0:00:00.134) 0:00:15.652 ********
2025-04-17 01:43:15.960629 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:15.961631 | orchestrator |
2025-04-17 01:43:15.961675 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-04-17 01:43:15.962320 | orchestrator | Thursday 17 April 2025 01:43:15 +0000 (0:00:00.148) 0:00:15.800 ********
2025-04-17 01:43:16.100350 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:16.100686 | orchestrator |
2025-04-17 01:43:16.101622 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-04-17 01:43:16.102664 | orchestrator | Thursday 17 April 2025 01:43:16 +0000 (0:00:00.140) 0:00:15.940 ********
2025-04-17 01:43:16.242596 | orchestrator | ok: [testbed-node-3] => {
2025-04-17 01:43:16.242808 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-04-17 01:43:16.243991 | orchestrator | }
2025-04-17 01:43:16.246960 | orchestrator |
2025-04-17 01:43:16.247061 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-04-17 01:43:16.247116 | orchestrator | Thursday 17 April 2025 01:43:16 +0000 (0:00:00.141) 0:00:16.082 ********
2025-04-17 01:43:16.392842 | orchestrator | ok: [testbed-node-3] => {
2025-04-17 01:43:16.395400 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-04-17 01:43:16.396146 | orchestrator | }
2025-04-17 01:43:16.396830 | orchestrator |
2025-04-17 01:43:16.397403 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-04-17 01:43:16.398172 | orchestrator | Thursday 17 April 2025 01:43:16 +0000 (0:00:00.148) 0:00:16.231 ********
2025-04-17 01:43:16.548564 | orchestrator | ok: [testbed-node-3] => {
2025-04-17 01:43:16.549230 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-04-17 01:43:16.550653 | orchestrator | }
2025-04-17 01:43:16.550939 | orchestrator |
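Each "Count OSDs put on ..." task tallies how many lvm_volumes entries land on a given DB/WAL volume group, and the "Fail if number of OSDs exceeds num_osds ..." tasks abort the play before any LV is created if a device would be oversubscribed. All three counters stay empty above because this testbed defines no ceph_db_devices, ceph_wal_devices, or ceph_db_wal_devices. A hedged sketch of such a guard; the variable names stand in for the real bookkeeping built up by the counting tasks:

    # Illustrative guard only, not the verbatim task.
    - name: Fail if number of OSDs exceeds num_osds for a DB VG
      ansible.builtin.fail:
        msg: "{{ item.key }} would host {{ item.value }} OSDs, but num_osds allows only {{ num_osds }}"
      loop: "{{ _num_osds_wanted_per_db_vg | dict2items }}"
      when: item.value | int > num_osds | int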
2025-04-17 01:43:16.551706 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-04-17 01:43:16.552474 | orchestrator | Thursday 17 April 2025 01:43:16 +0000 (0:00:00.155) 0:00:16.386 ********
2025-04-17 01:43:17.393380 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:43:17.393694 | orchestrator |
2025-04-17 01:43:17.393727 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-04-17 01:43:17.394235 | orchestrator | Thursday 17 April 2025 01:43:17 +0000 (0:00:00.846) 0:00:17.233 ********
2025-04-17 01:43:17.867974 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:43:17.868567 | orchestrator |
2025-04-17 01:43:17.869631 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-04-17 01:43:17.870724 | orchestrator | Thursday 17 April 2025 01:43:17 +0000 (0:00:00.473) 0:00:17.707 ********
2025-04-17 01:43:18.391705 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:43:18.393916 | orchestrator |
2025-04-17 01:43:18.394390 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-04-17 01:43:18.395132 | orchestrator | Thursday 17 April 2025 01:43:18 +0000 (0:00:00.521) 0:00:18.229 ********
2025-04-17 01:43:18.533919 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:43:18.535498 | orchestrator |
2025-04-17 01:43:18.536241 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-04-17 01:43:18.537244 | orchestrator | Thursday 17 April 2025 01:43:18 +0000 (0:00:00.143) 0:00:18.373 ********
2025-04-17 01:43:18.638601 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:18.640063 | orchestrator |
2025-04-17 01:43:18.640868 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-04-17 01:43:18.642119 | orchestrator | Thursday 17 April 2025 01:43:18 +0000 (0:00:00.103) 0:00:18.476 ********
2025-04-17 01:43:18.746656 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:18.747741 | orchestrator |
2025-04-17 01:43:18.748676 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-04-17 01:43:18.749659 | orchestrator | Thursday 17 April 2025 01:43:18 +0000 (0:00:00.110) 0:00:18.586 ********
2025-04-17 01:43:18.889833 | orchestrator | ok: [testbed-node-3] => {
2025-04-17 01:43:18.890350 | orchestrator |     "vgs_report": {
2025-04-17 01:43:18.890385 | orchestrator |         "vg": []
2025-04-17 01:43:18.890408 | orchestrator |     }
2025-04-17 01:43:18.892249 | orchestrator | }
2025-04-17 01:43:18.893029 | orchestrator |
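The three "Gather ... VGs with total and available size in bytes" tasks feed the size checks that follow; with no DB/WAL devices configured, the combined vgs_report printed above is empty. The data itself comes from LVM's JSON reporting, roughly along these lines; the column selection and the ceph_db_vgs variable are assumptions, while --reportformat json, --units b and --nosuffix are standard LVM flags:

    # Sketch: query VG capacity as JSON and keep the "report" payload,
    # whose shape ({"vg": [...]}) matches the vgs_report printed above.
    - name: Gather DB VGs with total and available size in bytes
      ansible.builtin.command: >
        vgs --reportformat json --units b --nosuffix
        -o vg_name,vg_size,vg_free {{ ceph_db_vgs | join(' ') }}
      register: _db_vgs_cmd_output
      changed_when: false

    - name: Combine JSON from _db/wal/db_wal_vgs_cmd_output
      ansible.builtin.set_fact:
        vgs_report: "{{ (_db_vgs_cmd_output.stdout | from_json).report[0] }}"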
2025-04-17 01:43:18.894116 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-04-17 01:43:18.894416 | orchestrator | Thursday 17 April 2025 01:43:18 +0000 (0:00:00.141) 0:00:18.728 ********
2025-04-17 01:43:19.024511 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:19.024696 | orchestrator |
2025-04-17 01:43:19.026068 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-04-17 01:43:19.027377 | orchestrator | Thursday 17 April 2025 01:43:19 +0000 (0:00:00.136) 0:00:18.864 ********
2025-04-17 01:43:19.161893 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:19.162392 | orchestrator |
2025-04-17 01:43:19.163919 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-04-17 01:43:19.164966 | orchestrator | Thursday 17 April 2025 01:43:19 +0000 (0:00:00.137) 0:00:19.002 ********
2025-04-17 01:43:19.306058 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:19.306275 | orchestrator |
2025-04-17 01:43:19.307670 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-04-17 01:43:19.310235 | orchestrator | Thursday 17 April 2025 01:43:19 +0000 (0:00:00.143) 0:00:19.145 ********
2025-04-17 01:43:19.448077 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:19.449294 | orchestrator |
2025-04-17 01:43:19.449937 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-04-17 01:43:19.451112 | orchestrator | Thursday 17 April 2025 01:43:19 +0000 (0:00:00.142) 0:00:19.288 ********
2025-04-17 01:43:19.728211 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:19.728875 | orchestrator |
2025-04-17 01:43:19.729715 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-04-17 01:43:19.731068 | orchestrator | Thursday 17 April 2025 01:43:19 +0000 (0:00:00.279) 0:00:19.567 ********
2025-04-17 01:43:19.864042 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:19.865916 | orchestrator |
2025-04-17 01:43:19.997757 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-04-17 01:43:19.997872 | orchestrator | Thursday 17 April 2025 01:43:19 +0000 (0:00:00.135) 0:00:19.702 ********
2025-04-17 01:43:19.997905 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:19.997977 | orchestrator |
2025-04-17 01:43:19.998636 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-04-17 01:43:19.999505 | orchestrator | Thursday 17 April 2025 01:43:19 +0000 (0:00:00.135) 0:00:19.837 ********
2025-04-17 01:43:20.128253 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:20.132186 | orchestrator |
2025-04-17 01:43:20.132267 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-04-17 01:43:20.132325 | orchestrator | Thursday 17 April 2025 01:43:20 +0000 (0:00:00.130) 0:00:19.968 ********
2025-04-17 01:43:20.255644 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:20.255794 | orchestrator |
2025-04-17 01:43:20.255807 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-04-17 01:43:20.255819 | orchestrator | Thursday 17 April 2025 01:43:20 +0000 (0:00:00.126) 0:00:20.095 ********
2025-04-17 01:43:20.388711 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:20.389723 | orchestrator |
2025-04-17 01:43:20.390867 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-04-17 01:43:20.393116 | orchestrator | Thursday 17 April 2025 01:43:20 +0000 (0:00:00.133) 0:00:20.228 ********
2025-04-17 01:43:20.531030 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:20.532265 | orchestrator |
2025-04-17 01:43:20.533070 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-04-17 01:43:20.533775 | orchestrator | Thursday 17 April 2025 01:43:20 +0000 (0:00:00.142) 0:00:20.370 ********
2025-04-17 01:43:20.662713 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:20.663510 | orchestrator |
2025-04-17 01:43:20.664398 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-04-17 01:43:20.666709 | orchestrator | Thursday 17 April 2025 01:43:20 +0000 (0:00:00.132) 0:00:20.503 ********
2025-04-17 01:43:20.789013 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:20.789248 | orchestrator |
2025-04-17 01:43:20.790127 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-04-17 01:43:20.790972 | orchestrator | Thursday 17 April 2025 01:43:20 +0000 (0:00:00.126) 0:00:20.629 ********
2025-04-17 01:43:20.919529 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:20.920233 | orchestrator |
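The sizing pass ends with two hard floors: a BlueStore DB LV smaller than 30 GiB is rejected, both for ceph_db_devices and for ceph_db_wal_devices. Everything is skipped here because neither list is set. Reduced to a sketch, such an assertion could look like this; the _db_lv_size_bytes variable is illustrative, only the 30 GiB threshold is taken from the task titles:

    # Illustrative: reject DB LVs below the 30 GiB floor named in the task title.
    - name: Fail if DB LV size < 30 GiB for ceph_db_devices
      ansible.builtin.assert:
        that: _db_lv_size_bytes | int >= 30 * 1024 * 1024 * 1024
        fail_msg: "DB LV would be {{ _db_lv_size_bytes | human_readable }}, below the 30 GiB minimum"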
2025-04-17 01:43:20.921423 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-04-17 01:43:20.921625 | orchestrator | Thursday 17 April 2025 01:43:20 +0000 (0:00:00.130) 0:00:20.760 ********
2025-04-17 01:43:21.084718 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:21.086674 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:21.087907 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:21.088936 | orchestrator |
2025-04-17 01:43:21.089636 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-04-17 01:43:21.090244 | orchestrator | Thursday 17 April 2025 01:43:21 +0000 (0:00:00.162) 0:00:20.922 ********
2025-04-17 01:43:21.252671 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:21.252861 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:21.253419 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:21.254089 | orchestrator |
2025-04-17 01:43:21.254847 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-04-17 01:43:21.255340 | orchestrator | Thursday 17 April 2025 01:43:21 +0000 (0:00:00.169) 0:00:21.092 ********
2025-04-17 01:43:21.574389 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:21.575476 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:21.575914 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:21.576829 | orchestrator |
2025-04-17 01:43:21.577350 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-04-17 01:43:21.578152 | orchestrator | Thursday 17 April 2025 01:43:21 +0000 (0:00:00.322) 0:00:21.414 ********
2025-04-17 01:43:21.732115 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:21.732637 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:21.734007 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:21.734416 | orchestrator |
2025-04-17 01:43:21.736105 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-04-17 01:43:21.907928 | orchestrator | Thursday 17 April 2025 01:43:21 +0000 (0:00:00.156) 0:00:21.571 ********
2025-04-17 01:43:21.908089 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:21.908231 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:21.909054 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:21.909568 | orchestrator |
2025-04-17 01:43:21.911712 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-04-17 01:43:22.069580 | orchestrator | Thursday 17 April 2025 01:43:21 +0000 (0:00:00.176) 0:00:21.748 ********
2025-04-17 01:43:22.069745 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:22.069883 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:22.070656 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:22.071391 | orchestrator |
2025-04-17 01:43:22.071968 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-04-17 01:43:22.072549 | orchestrator | Thursday 17 April 2025 01:43:22 +0000 (0:00:00.161) 0:00:21.910 ********
2025-04-17 01:43:22.231376 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:22.231677 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:22.232704 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:22.233947 | orchestrator |
2025-04-17 01:43:22.234647 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-04-17 01:43:22.235861 | orchestrator | Thursday 17 April 2025 01:43:22 +0000 (0:00:00.160) 0:00:22.071 ********
2025-04-17 01:43:22.394286 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:22.395726 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:22.396459 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:22.397794 | orchestrator |
2025-04-17 01:43:22.399256 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-04-17 01:43:22.400054 | orchestrator | Thursday 17 April 2025 01:43:22 +0000 (0:00:00.163) 0:00:22.234 ********
2025-04-17 01:43:22.894645 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:43:22.894840 | orchestrator |
2025-04-17 01:43:22.895729 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-04-17 01:43:22.896639 | orchestrator | Thursday 17 April 2025 01:43:22 +0000 (0:00:00.499) 0:00:22.733 ********
2025-04-17 01:43:23.395757 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:43:23.396676 | orchestrator |
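The two "Get list of Ceph LVs/PVs with associated VGs" lookups read back what LVM actually created, again via JSON reports, so that the play can verify its own work. Roughly like this; the column lists and the ceph- selection filter are assumptions, the flags themselves are standard lvs/pvs options:

    # Sketch of the read-back; -S restricts output to ceph-prefixed VGs.
    - name: Get list of Ceph LVs with associated VGs
      ansible.builtin.command: >
        lvs --reportformat json -o lv_name,vg_name -S 'vg_name=~^ceph-'
      register: _lvs_cmd_output
      changed_when: false

    - name: Get list of Ceph PVs with associated VGs
      ansible.builtin.command: >
        pvs --reportformat json -o pv_name,vg_name -S 'vg_name=~^ceph-'
      register: _pvs_cmd_output
      changed_when: false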
2025-04-17 01:43:23.397286 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-04-17 01:43:23.397996 | orchestrator | Thursday 17 April 2025 01:43:23 +0000 (0:00:00.502) 0:00:23.236 ********
2025-04-17 01:43:23.548271 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:43:23.548860 | orchestrator |
2025-04-17 01:43:23.549746 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-04-17 01:43:23.550264 | orchestrator | Thursday 17 April 2025 01:43:23 +0000 (0:00:00.151) 0:00:23.388 ********
2025-04-17 01:43:23.739059 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'vg_name': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:23.739646 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'vg_name': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:23.739686 | orchestrator |
2025-04-17 01:43:23.739711 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-04-17 01:43:24.094377 | orchestrator | Thursday 17 April 2025 01:43:23 +0000 (0:00:00.189) 0:00:23.578 ********
2025-04-17 01:43:24.094593 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:24.095844 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:24.096421 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:24.097344 | orchestrator |
2025-04-17 01:43:24.098695 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-04-17 01:43:24.099525 | orchestrator | Thursday 17 April 2025 01:43:24 +0000 (0:00:00.355) 0:00:23.933 ********
2025-04-17 01:43:24.265744 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:24.265925 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:24.266215 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:24.266334 | orchestrator |
2025-04-17 01:43:24.267318 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-04-17 01:43:24.267378 | orchestrator | Thursday 17 April 2025 01:43:24 +0000 (0:00:00.173) 0:00:24.106 ********
2025-04-17 01:43:24.435064 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'})
2025-04-17 01:43:24.436052 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'})
2025-04-17 01:43:24.436091 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:43:24.436659 | orchestrator |
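The combined read-back then serves as a safety net: any block/DB/WAL LV named in lvm_volumes that did not materialize aborts the play, and the resulting lvm_report is printed below for the log. A hedged sketch of such a guard, assuming lvm_report has the lv/pv structure printed below:

    # Illustrative guard: every lvm_volumes entry must appear in the lvs read-back.
    - name: Fail if block LV defined in lvm_volumes is missing
      ansible.builtin.fail:
        msg: "LV {{ item.data }} not found in VG {{ item.data_vg }}"
      loop: "{{ lvm_volumes }}"
      when: >-
        lvm_report.lv | selectattr('lv_name', 'equalto', item.data)
                      | selectattr('vg_name', 'equalto', item.data_vg)
                      | list | length == 0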
2025-04-17 01:43:24.438122 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-04-17 01:43:24.440958 | orchestrator | Thursday 17 April 2025 01:43:24 +0000 (0:00:00.167) 0:00:24.274 ********
2025-04-17 01:43:25.085890 | orchestrator | ok: [testbed-node-3] => {
2025-04-17 01:43:25.089148 | orchestrator |     "lvm_report": {
2025-04-17 01:43:25.090092 | orchestrator |         "lv": [
2025-04-17 01:43:25.090123 | orchestrator |             {
2025-04-17 01:43:25.090146 | orchestrator |                 "lv_name": "osd-block-567181ad-d304-5248-b248-9710ecf6a56a",
2025-04-17 01:43:25.090384 | orchestrator |                 "vg_name": "ceph-567181ad-d304-5248-b248-9710ecf6a56a"
2025-04-17 01:43:25.090874 | orchestrator |             },
2025-04-17 01:43:25.091415 | orchestrator |             {
2025-04-17 01:43:25.092207 | orchestrator |                 "lv_name": "osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e",
2025-04-17 01:43:25.092569 | orchestrator |                 "vg_name": "ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e"
2025-04-17 01:43:25.093284 | orchestrator |             }
2025-04-17 01:43:25.093836 | orchestrator |         ],
2025-04-17 01:43:25.094506 | orchestrator |         "pv": [
2025-04-17 01:43:25.095693 | orchestrator |             {
2025-04-17 01:43:25.095920 | orchestrator |                 "pv_name": "/dev/sdb",
2025-04-17 01:43:25.095982 | orchestrator |                 "vg_name": "ceph-567181ad-d304-5248-b248-9710ecf6a56a"
2025-04-17 01:43:25.096537 | orchestrator |             },
2025-04-17 01:43:25.096865 | orchestrator |             {
2025-04-17 01:43:25.097298 | orchestrator |                 "pv_name": "/dev/sdc",
2025-04-17 01:43:25.097635 | orchestrator |                 "vg_name": "ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e"
2025-04-17 01:43:25.098126 | orchestrator |             }
2025-04-17 01:43:25.098482 | orchestrator |         ]
2025-04-17 01:43:25.098880 | orchestrator |     }
2025-04-17 01:43:25.099240 | orchestrator | }
2025-04-17 01:43:25.099602 | orchestrator |
2025-04-17 01:43:25.099934 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-04-17 01:43:25.100840 | orchestrator |
2025-04-17 01:43:25.101029 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-04-17 01:43:25.101161 | orchestrator | Thursday 17 April 2025 01:43:25 +0000 (0:00:00.650) 0:00:24.924 ********
2025-04-17 01:43:25.657915 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-04-17 01:43:25.658672 | orchestrator |
2025-04-17 01:43:25.659287 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-04-17 01:43:25.659854 | orchestrator | Thursday 17 April 2025 01:43:25 +0000 (0:00:00.573) 0:00:25.497 ********
2025-04-17 01:43:25.900032 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:43:25.900285 | orchestrator |
2025-04-17 01:43:25.901062 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:25.903020 | orchestrator | Thursday 17 April 2025 01:43:25 +0000 (0:00:00.241) 0:00:25.739 ********
2025-04-17 01:43:26.349357 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-04-17 01:43:26.350143 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-04-17 01:43:26.350193 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-04-17 01:43:26.350971 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-04-17 01:43:26.351644 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-04-17 01:43:26.352168 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-04-17 01:43:26.354676 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-04-17 01:43:26.354926 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-04-17 01:43:26.354955 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-04-17 01:43:26.354972 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-04-17 01:43:26.354993 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-04-17 01:43:26.355573 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-04-17 01:43:26.356059 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-04-17 01:43:26.356677 | orchestrator |
2025-04-17 01:43:26.357318 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:26.357677 | orchestrator | Thursday 17 April 2025 01:43:26 +0000 (0:00:00.449) 0:00:26.189 ********
2025-04-17 01:43:26.558003 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:26.558412 | orchestrator |
2025-04-17 01:43:26.558489 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:26.558515 | orchestrator | Thursday 17 April 2025 01:43:26 +0000 (0:00:00.208) 0:00:26.397 ********
2025-04-17 01:43:26.757089 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:26.757547 | orchestrator |
2025-04-17 01:43:26.758120 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:26.758774 | orchestrator | Thursday 17 April 2025 01:43:26 +0000 (0:00:00.200) 0:00:26.597 ********
2025-04-17 01:43:26.967607 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:26.967893 | orchestrator |
2025-04-17 01:43:26.968510 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:26.970783 | orchestrator | Thursday 17 April 2025 01:43:26 +0000 (0:00:00.209) 0:00:26.806 ********
2025-04-17 01:43:27.159302 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:27.159600 | orchestrator |
2025-04-17 01:43:27.159638 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:27.160167 | orchestrator | Thursday 17 April 2025 01:43:27 +0000 (0:00:00.192) 0:00:26.999 ********
2025-04-17 01:43:27.342489 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:27.342769 | orchestrator |
2025-04-17 01:43:27.344622 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:27.345992 | orchestrator | Thursday 17 April 2025 01:43:27 +0000 (0:00:00.183) 0:00:27.183 ********
2025-04-17 01:43:27.560194 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:27.560492 | orchestrator |
2025-04-17 01:43:27.561287 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:27.561864 | orchestrator | Thursday 17 April 2025 01:43:27 +0000 (0:00:00.218) 0:00:27.401 ********
2025-04-17 01:43:27.760168 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:27.760369 | orchestrator |
2025-04-17 01:43:27.760913 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:27.761567 | orchestrator | Thursday 17 April 2025 01:43:27 +0000 (0:00:00.199) 0:00:27.600 ********
2025-04-17 01:43:28.301256 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:28.301600 | orchestrator |
2025-04-17 01:43:28.301642 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:28.303829 | orchestrator | Thursday 17 April 2025 01:43:28 +0000 (0:00:00.539) 0:00:28.139 ********
2025-04-17 01:43:28.708947 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a)
2025-04-17 01:43:28.709375 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a)
2025-04-17 01:43:28.710391 | orchestrator |
2025-04-17 01:43:28.711066 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:28.713267 | orchestrator | Thursday 17 April 2025 01:43:28 +0000 (0:00:00.408) 0:00:28.548 ********
2025-04-17 01:43:29.173540 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bef8d693-736b-4549-b698-ce9e87082908)
2025-04-17 01:43:29.635847 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bef8d693-736b-4549-b698-ce9e87082908)
2025-04-17 01:43:29.636061 | orchestrator |
2025-04-17 01:43:29.636084 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:29.636100 | orchestrator | Thursday 17 April 2025 01:43:29 +0000 (0:00:00.459) 0:00:29.008 ********
2025-04-17 01:43:29.636131 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c189cae0-1e0d-4eb8-9970-e970e21b9a89)
2025-04-17 01:43:29.636213 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c189cae0-1e0d-4eb8-9970-e970e21b9a89)
2025-04-17 01:43:29.637342 | orchestrator |
2025-04-17 01:43:29.640669 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:30.058513 | orchestrator | Thursday 17 April 2025 01:43:29 +0000 (0:00:00.467) 0:00:29.476 ********
2025-04-17 01:43:30.058713 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_95e37f14-95e8-4165-b353-fd53fdf52cdb)
2025-04-17 01:43:30.059380 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_95e37f14-95e8-4165-b353-fd53fdf52cdb)
2025-04-17 01:43:30.060310 | orchestrator |
2025-04-17 01:43:30.060983 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:30.061573 | orchestrator | Thursday 17 April 2025 01:43:30 +0000 (0:00:00.420) 0:00:29.896 ********
2025-04-17 01:43:30.404989 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-04-17 01:43:30.405610 | orchestrator |
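Before any LVM work on testbed-node-4, the play builds an inventory of candidate disks: the kernel device names plus every stable /dev/disk/by-id symlink (the scsi-*QEMU* and ata-* items above), so that ceph_osd_devices may reference either form. The task file /ansible/tasks/_add-device-links.yml itself is not shown in the log; an assumed sketch of its shape, with link_name and device_name as illustrative loop variables:

    # Assumed shape of _add-device-links.yml: resolve a by-id symlink and
    # keep it as an alias when it points at the device being processed.
    - name: Resolve by-id link
      ansible.builtin.stat:
        path: "/dev/disk/by-id/{{ link_name }}"
      register: _link

    - name: Add known links to the list of available block devices
      ansible.builtin.set_fact:
        block_devices: "{{ block_devices + [link_name] }}"
      when: _link.stat.lnk_source | default('') | basename == device_name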
2025-04-17 01:43:30.405658 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-17 01:43:30.406862 | orchestrator | Thursday 17 April 2025 01:43:30 +0000 (0:00:00.348) 0:00:30.245 ********
2025-04-17 01:43:30.852834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-04-17 01:43:30.854001 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-04-17 01:43:30.854123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-04-17 01:43:30.854794 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-04-17 01:43:30.855789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-04-17 01:43:30.856135 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-04-17 01:43:30.856779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-04-17 01:43:30.858553 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-04-17 01:43:30.858994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-04-17 01:43:30.859496 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-04-17 01:43:30.859927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-04-17 01:43:30.860264 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-04-17 01:43:30.860799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-04-17 01:43:30.861249 | orchestrator |
2025-04-17 01:43:30.861381 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-17 01:43:30.861762 | orchestrator | Thursday 17 April 2025 01:43:30 +0000 (0:00:00.445) 0:00:30.690 ********
2025-04-17 01:43:31.043075 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:31.043572 | orchestrator |
2025-04-17 01:43:31.044121 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-17 01:43:31.044620 | orchestrator | Thursday 17 April 2025 01:43:31 +0000 (0:00:00.192) 0:00:30.883 ********
2025-04-17 01:43:31.236962 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:31.237185 | orchestrator |
2025-04-17 01:43:31.802495 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-17 01:43:31.802628 | orchestrator | Thursday 17 April 2025 01:43:31 +0000 (0:00:00.194) 0:00:31.077 ********
2025-04-17 01:43:31.802664 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:32.004769 | orchestrator |
2025-04-17 01:43:32.004918 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-17 01:43:32.004939 | orchestrator | Thursday 17 April 2025 01:43:31 +0000 (0:00:00.562) 0:00:31.640 ********
2025-04-17 01:43:32.004974 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:32.005832 | orchestrator |
2025-04-17 01:43:32.005866 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-17 01:43:32.005889 | orchestrator | Thursday 17 April 2025 01:43:31 +0000 (0:00:00.199) 0:00:31.840 ********
2025-04-17 01:43:32.232053 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:32.233985 | orchestrator |
2025-04-17 01:43:32.234083 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-17 01:43:32.234319 | orchestrator | Thursday 17 April 2025 01:43:32 +0000 (0:00:00.231) 0:00:32.072 ********
2025-04-17 01:43:32.423039 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:32.423408 | orchestrator |
2025-04-17 01:43:32.424124 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-17 01:43:32.425111 | orchestrator | Thursday 17 April 2025 01:43:32 +0000 (0:00:00.190) 0:00:32.263 ********
2025-04-17 01:43:32.629788 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:32.629994 | orchestrator |
2025-04-17 01:43:32.630810 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-17 01:43:32.631407 | orchestrator | Thursday 17 April 2025 01:43:32 +0000 (0:00:00.206) 0:00:32.469 ********
2025-04-17 01:43:32.821903 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:32.822248 | orchestrator |
2025-04-17 01:43:32.822288 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-17 01:43:32.823038 | orchestrator | Thursday 17 April 2025 01:43:32 +0000 (0:00:00.192) 0:00:32.662 ********
2025-04-17 01:43:33.459210 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-04-17 01:43:33.460718 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-04-17 01:43:33.461403 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-04-17 01:43:33.461473 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-04-17 01:43:33.462122 | orchestrator |
2025-04-17 01:43:33.462954 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-17 01:43:33.463379 | orchestrator | Thursday 17 April 2025 01:43:33 +0000 (0:00:00.636) 0:00:33.298 ********
2025-04-17 01:43:33.657305 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:33.657596 | orchestrator |
2025-04-17 01:43:33.657632 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-17 01:43:33.657959 | orchestrator | Thursday 17 April 2025 01:43:33 +0000 (0:00:00.199) 0:00:33.498 ********
2025-04-17 01:43:33.852549 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:33.852793 | orchestrator |
2025-04-17 01:43:33.853927 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-17 01:43:33.855974 | orchestrator | Thursday 17 April 2025 01:43:33 +0000 (0:00:00.194) 0:00:33.692 ********
2025-04-17 01:43:34.053994 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:34.054686 | orchestrator |
2025-04-17 01:43:34.056980 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-04-17 01:43:34.651539 | orchestrator | Thursday 17 April 2025 01:43:34 +0000 (0:00:00.200) 0:00:33.893 ********
2025-04-17 01:43:34.651693 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:34.652161 | orchestrator |
2025-04-17 01:43:34.652295 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-04-17 01:43:34.652640 | orchestrator | Thursday 17 April 2025 01:43:34 +0000 (0:00:00.595) 0:00:34.488 ********
2025-04-17 01:43:34.787737 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:34.788209 | orchestrator |
2025-04-17 01:43:34.788663 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-04-17 01:43:34.789527 | orchestrator | Thursday 17 April 2025 01:43:34 +0000 (0:00:00.139) 0:00:34.627 ********
2025-04-17 01:43:35.020025 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ebc25b0-9278-5fc8-8be4-afb201f0a343'}})
2025-04-17 01:43:35.020583 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b69f2859-f86c-57c9-a956-28222694e166'}})
2025-04-17 01:43:35.021209 | orchestrator |
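"Create dict of block VGs -> PVs from ceph_osd_devices" is where node-4's per-disk osd_lvm_uuid values (the sdb/sdc items above) become VG/PV pairs; the "Create block VGs" and "Create block LVs" tasks that follow then repeat the work already seen on testbed-node-3. A sketch of the dict construction; the _vgs_to_pvs name is illustrative:

    # Illustrative: build { "ceph-<uuid>": "/dev/<disk>" } from ceph_osd_devices.
    - name: Create dict of block VGs -> PVs from ceph_osd_devices
      ansible.builtin.set_fact:
        _vgs_to_pvs: >-
          {{ _vgs_to_pvs | default({})
             | combine({'ceph-' ~ item.value.osd_lvm_uuid: '/dev/' ~ item.key}) }}
      loop: "{{ ceph_osd_devices | dict2items }}"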
2025-04-17 01:43:35.022114 | orchestrator | TASK [Create block VGs] ********************************************************
2025-04-17 01:43:35.022487 | orchestrator | Thursday 17 April 2025 01:43:35 +0000 (0:00:00.230) 0:00:34.858 ********
2025-04-17 01:43:37.034505 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})
2025-04-17 01:43:37.035657 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})
2025-04-17 01:43:37.036146 | orchestrator |
2025-04-17 01:43:37.038471 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-04-17 01:43:37.039243 | orchestrator | Thursday 17 April 2025 01:43:37 +0000 (0:00:02.015) 0:00:36.873 ********
2025-04-17 01:43:37.197133 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})
2025-04-17 01:43:37.199063 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})
2025-04-17 01:43:37.199173 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:37.199194 | orchestrator |
2025-04-17 01:43:37.199215 | orchestrator | TASK [Create block LVs] ********************************************************
2025-04-17 01:43:37.199460 | orchestrator | Thursday 17 April 2025 01:43:37 +0000 (0:00:00.164) 0:00:37.038 ********
2025-04-17 01:43:38.512899 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})
2025-04-17 01:43:38.513818 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})
2025-04-17 01:43:38.514551 | orchestrator |
2025-04-17 01:43:38.514591 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-04-17 01:43:38.515277 | orchestrator | Thursday 17 April 2025 01:43:38 +0000 (0:00:01.312) 0:00:38.350 ********
2025-04-17 01:43:38.673715 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})
2025-04-17 01:43:38.674587 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})
2025-04-17 01:43:38.675767 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:38.676559 | orchestrator |
2025-04-17 01:43:38.677366 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-04-17 01:43:38.678416 | orchestrator | Thursday 17 April 2025 01:43:38 +0000 (0:00:00.162) 0:00:38.513 ********
2025-04-17 01:43:38.805590 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:38.806688 | orchestrator |
2025-04-17 01:43:38.807853 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-04-17 01:43:38.808927 | orchestrator | Thursday 17 April 2025 01:43:38 +0000 (0:00:00.131) 0:00:38.645 ********
2025-04-17 01:43:38.967561 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})
2025-04-17 01:43:38.968312 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})
2025-04-17 01:43:38.969403 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:38.970523 | orchestrator |
2025-04-17 01:43:38.972044 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-04-17 01:43:38.972727 | orchestrator | Thursday 17 April 2025 01:43:38 +0000 (0:00:00.161) 0:00:38.807 ********
2025-04-17 01:43:39.257552 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:39.258698 | orchestrator |
2025-04-17 01:43:39.259407 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-04-17 01:43:39.260350 | orchestrator | Thursday 17 April 2025 01:43:39 +0000 (0:00:00.289) 0:00:39.096 ********
2025-04-17 01:43:39.446820 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})
2025-04-17 01:43:39.447548 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})
2025-04-17 01:43:39.448490 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:39.450475 | orchestrator |
2025-04-17 01:43:39.451231 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-04-17 01:43:39.451877 | orchestrator | Thursday 17 April 2025 01:43:39 +0000 (0:00:00.190) 0:00:39.287 ********
2025-04-17 01:43:39.568720 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:39.569903 | orchestrator |
2025-04-17 01:43:39.569944 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-04-17 01:43:39.571004 | orchestrator | Thursday 17 April 2025 01:43:39 +0000 (0:00:00.122) 0:00:39.409 ********
2025-04-17 01:43:39.751631 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})
2025-04-17 01:43:39.751807 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})
2025-04-17 01:43:39.752596 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:39.753197 | orchestrator |
2025-04-17 01:43:39.753673 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-04-17 01:43:39.754289 | orchestrator | Thursday 17 April 2025 01:43:39 +0000 (0:00:00.181) 0:00:39.590 ********
2025-04-17 01:43:39.893250 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:43:39.894070 | orchestrator |
2025-04-17 01:43:39.894109 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-04-17 01:43:39.895806 | orchestrator | Thursday 17 April 2025 01:43:39 +0000 (0:00:00.138) 0:00:39.729 ********
2025-04-17 01:43:40.053154 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})
2025-04-17 01:43:40.053401 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})
2025-04-17 01:43:40.054578 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:40.056102 | orchestrator |
2025-04-17 01:43:40.056331 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-04-17 01:43:40.057177 | orchestrator | Thursday 17 April 2025 01:43:40 +0000 (0:00:00.164) 0:00:39.894 ********
2025-04-17 01:43:40.223689 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})
2025-04-17 01:43:40.223992 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})
2025-04-17 01:43:40.225141 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:40.226416 | orchestrator |
2025-04-17 01:43:40.227223 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-04-17 01:43:40.227919 | orchestrator | Thursday 17 April 2025 01:43:40 +0000 (0:00:00.166) 0:00:40.061 ********
2025-04-17 01:43:40.390817 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})
2025-04-17 01:43:40.391570 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})
2025-04-17 01:43:40.392106 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:40.392583 | orchestrator |
2025-04-17 01:43:40.393065 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-04-17 01:43:40.393884 | orchestrator | Thursday 17 April 2025 01:43:40 +0000 (0:00:00.169) 0:00:40.230 ********
2025-04-17 01:43:40.525250 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:40.526236 | orchestrator |
2025-04-17 01:43:40.527238 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-04-17 01:43:40.527275 | orchestrator | Thursday 17 April 2025 01:43:40 +0000 (0:00:00.135) 0:00:40.365 ********
2025-04-17 01:43:40.660288 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:40.660533 | orchestrator |
2025-04-17 01:43:40.661115 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-04-17 01:43:40.661492 | orchestrator | Thursday 17 April 2025 01:43:40 +0000 (0:00:00.136) 0:00:40.501 ********
2025-04-17 01:43:40.784579 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:40.784963 | orchestrator |
2025-04-17 01:43:40.785575 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-04-17 01:43:40.788176 | orchestrator | Thursday 17 April 2025 01:43:40 +0000 (0:00:00.123) 0:00:40.625 ********
2025-04-17 01:43:40.921628 | orchestrator | ok: [testbed-node-4] => {
2025-04-17 01:43:40.922211 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-04-17 01:43:40.923073 | orchestrator | }
2025-04-17 01:43:40.923886 | orchestrator |
2025-04-17 01:43:40.925723 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-04-17 01:43:41.241000 | orchestrator | Thursday 17 April 2025 01:43:40 +0000 (0:00:00.137) 0:00:40.762 ********
2025-04-17 01:43:41.241178 | orchestrator | ok: [testbed-node-4] => {
2025-04-17 01:43:41.243884 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-04-17 01:43:41.244077 | orchestrator | }
2025-04-17 01:43:41.244106 | orchestrator |
2025-04-17 01:43:41.244123 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-04-17 01:43:41.244146 | orchestrator | Thursday 17 April 2025 01:43:41 +0000 (0:00:00.317) 0:00:41.079 ********
2025-04-17 01:43:41.384962 | orchestrator | ok: [testbed-node-4] => {
2025-04-17 01:43:41.385517 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-04-17 01:43:41.387894 | orchestrator | }
2025-04-17 01:43:41.389866 | orchestrator |
2025-04-17 01:43:41.391106 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-04-17 01:43:41.391170 | orchestrator | Thursday 17 April 2025 01:43:41 +0000 (0:00:00.145) 0:00:41.225 ********
2025-04-17 01:43:41.920711 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:43:41.921302 | orchestrator |
2025-04-17 01:43:41.922247 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-04-17 01:43:41.922885 | orchestrator | Thursday 17 April 2025 01:43:41 +0000 (0:00:00.535) 0:00:41.761 ********
2025-04-17 01:43:42.430827 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:43:42.431000 | orchestrator |
2025-04-17 01:43:42.431717 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-04-17 01:43:42.432300 | orchestrator | Thursday 17 April 2025 01:43:42 +0000 (0:00:00.508) 0:00:42.269 ********
2025-04-17 01:43:42.924868 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:43:42.925207 | orchestrator |
2025-04-17 01:43:42.925231 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-04-17 01:43:42.925954 | orchestrator | Thursday 17 April 2025 01:43:42 +0000 (0:00:00.495) 0:00:42.765 ********
2025-04-17 01:43:43.071535 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:43:43.071725 | orchestrator |
2025-04-17 01:43:43.071745 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-04-17 01:43:43.072119 | orchestrator | Thursday 17 April 2025 01:43:43 +0000 (0:00:00.146) 0:00:42.911 ********
2025-04-17 01:43:43.168102 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:43.168292 | orchestrator |
2025-04-17 01:43:43.169412 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-04-17 01:43:43.169913 | orchestrator | Thursday 17 April 2025 01:43:43 +0000 (0:00:00.097) 0:00:43.008 ********
2025-04-17 01:43:43.284004 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:43.284306 | orchestrator |
2025-04-17 01:43:43.286502 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-04-17 01:43:43.425763 | orchestrator | Thursday 17 April 2025 01:43:43 +0000 (0:00:00.114) 0:00:43.122 ********
2025-04-17 01:43:43.425931 | orchestrator | ok: [testbed-node-4] => {
2025-04-17 01:43:43.426401 | orchestrator |     "vgs_report": {
2025-04-17 01:43:43.426473 | orchestrator |         "vg": []
2025-04-17 01:43:43.426600 | orchestrator |     }
2025-04-17 01:43:43.427555 | orchestrator | }
2025-04-17 01:43:43.428112 | orchestrator |
2025-04-17 01:43:43.428927 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-04-17 01:43:43.429418 | orchestrator | Thursday 17 April 2025 01:43:43 +0000 (0:00:00.142) 0:00:43.265 ********
2025-04-17 01:43:43.562146 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:43.562859 | orchestrator |
2025-04-17 01:43:43.563481 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-04-17 01:43:43.564137 | orchestrator | Thursday 17 April 2025 01:43:43 +0000 (0:00:00.137) 0:00:43.402 ********
2025-04-17 01:43:43.857005 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:43.858282 | orchestrator |
2025-04-17 01:43:43.859201 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-04-17 01:43:43.860759 | orchestrator | Thursday 17 April 2025 01:43:43 +0000 (0:00:00.293) 0:00:43.696 ********
2025-04-17 01:43:44.001626 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:44.001836 | orchestrator |
2025-04-17 01:43:44.002734 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-04-17 01:43:44.002838 | orchestrator | Thursday 17 April 2025 01:43:43 +0000 (0:00:00.144) 0:00:43.841 ********
2025-04-17 01:43:44.143316 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:44.144078 | orchestrator |
2025-04-17 01:43:44.145229 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-04-17 01:43:44.146413 | orchestrator | Thursday 17 April 2025 01:43:44 +0000 (0:00:00.141) 0:00:43.983 ********
2025-04-17 01:43:44.278879 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:44.279410 | orchestrator |
2025-04-17 01:43:44.280301 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-04-17 01:43:44.281235 | orchestrator | Thursday 17 April 2025 01:43:44 +0000 (0:00:00.133) 0:00:44.117 ********
2025-04-17 01:43:44.410168 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:44.411113 | orchestrator |
2025-04-17 01:43:44.411943 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-04-17 01:43:44.412698 | orchestrator | Thursday 17 April 2025 01:43:44 +0000 (0:00:00.132) 0:00:44.249 ********
2025-04-17 01:43:44.543950 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:44.544463 | orchestrator |
2025-04-17 01:43:44.545209 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-04-17 01:43:44.546152 | orchestrator | Thursday 17 April 2025 01:43:44 +0000 (0:00:00.133) 0:00:44.383 ********
2025-04-17 01:43:44.685518 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:44.685782 | orchestrator |
2025-04-17 01:43:44.685871 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-04-17 01:43:44.686173 | orchestrator | Thursday 17 April 2025 01:43:44 +0000 (0:00:00.141) 0:00:44.525 ********
2025-04-17 01:43:44.831763 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:44.832008 | orchestrator |
2025-04-17 01:43:44.832929 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-04-17 01:43:44.835016 | orchestrator | Thursday 17 April 2025 01:43:44 +0000 (0:00:00.146) 0:00:44.671 ********
2025-04-17 01:43:44.965311 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:44.965988 | orchestrator |
2025-04-17 01:43:44.966618 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-04-17 01:43:44.969315 | orchestrator | Thursday 17 April 2025 01:43:44 +0000 (0:00:00.133) 0:00:44.805 ********
2025-04-17 01:43:45.101069 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:45.101472 | orchestrator |
2025-04-17 01:43:45.102546 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-04-17 01:43:45.103214 | orchestrator | Thursday 17 April 2025 01:43:45 +0000 (0:00:00.135) 0:00:44.941 ********
2025-04-17 01:43:45.239537 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:45.239833 | orchestrator |
2025-04-17 01:43:45.241147 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-04-17 01:43:45.244153 | orchestrator | Thursday 17 April 2025 01:43:45 +0000 (0:00:00.137) 0:00:45.079 ********
2025-04-17 01:43:45.378992 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:45.379291 | orchestrator |
2025-04-17 01:43:45.380583 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-04-17 01:43:45.381248 | orchestrator | Thursday 17 April 2025 01:43:45 +0000 (0:00:00.139) 0:00:45.218 ********
2025-04-17 01:43:45.531989 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:45.533630 | orchestrator |
2025-04-17 01:43:45.534879 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-04-17 01:43:45.534953 | orchestrator | Thursday 17 April 2025 01:43:45 +0000 (0:00:00.147) 0:00:45.366 ********
2025-04-17 01:43:45.888834 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})
2025-04-17 01:43:45.889344 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})
2025-04-17 01:43:45.890123 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:45.890999 | orchestrator |
2025-04-17 01:43:45.891859 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-04-17 01:43:45.892629 | orchestrator | Thursday 17 April 2025 01:43:45 +0000 (0:00:00.362) 0:00:45.729 ********
2025-04-17 01:43:46.045792 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})
2025-04-17 01:43:46.047521 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})
2025-04-17 01:43:46.049355 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:46.050177 | orchestrator |
2025-04-17 01:43:46.051136 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-04-17 01:43:46.051371 | orchestrator | Thursday 17 April 2025 01:43:46 +0000 (0:00:00.156) 0:00:45.885 ********
2025-04-17 01:43:46.211892 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})
2025-04-17 01:43:46.212509 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})
2025-04-17 01:43:46.213174 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:46.213957 | orchestrator |
2025-04-17 01:43:46.214704 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-04-17 01:43:46.215295 | orchestrator | Thursday 17 April 2025 01:43:46 +0000 (0:00:00.166) 0:00:46.052 ********
2025-04-17 01:43:46.377029 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})
[testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})  2025-04-17 01:43:46.377277 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})  2025-04-17 01:43:46.378105 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:43:46.378725 | orchestrator | 2025-04-17 01:43:46.379746 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-04-17 01:43:46.380761 | orchestrator | Thursday 17 April 2025 01:43:46 +0000 (0:00:00.163) 0:00:46.215 ******** 2025-04-17 01:43:46.541618 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})  2025-04-17 01:43:46.542085 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})  2025-04-17 01:43:46.542117 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:43:46.542139 | orchestrator | 2025-04-17 01:43:46.542537 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-04-17 01:43:46.542914 | orchestrator | Thursday 17 April 2025 01:43:46 +0000 (0:00:00.165) 0:00:46.381 ******** 2025-04-17 01:43:46.706002 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})  2025-04-17 01:43:46.706622 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})  2025-04-17 01:43:46.709572 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:43:46.710287 | orchestrator | 2025-04-17 01:43:46.710326 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-04-17 01:43:46.710353 | orchestrator | Thursday 17 April 2025 01:43:46 +0000 (0:00:00.164) 0:00:46.545 ******** 2025-04-17 01:43:46.866581 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})  2025-04-17 01:43:46.867181 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})  2025-04-17 01:43:46.868050 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:43:46.870332 | orchestrator | 2025-04-17 01:43:47.033799 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-04-17 01:43:47.033967 | orchestrator | Thursday 17 April 2025 01:43:46 +0000 (0:00:00.160) 0:00:46.706 ******** 2025-04-17 01:43:47.034007 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})  2025-04-17 01:43:47.034166 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})  2025-04-17 01:43:47.034923 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:43:47.035856 | orchestrator | 2025-04-17 01:43:47.038468 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] 
******************************** 2025-04-17 01:43:47.555335 | orchestrator | Thursday 17 April 2025 01:43:47 +0000 (0:00:00.167) 0:00:46.874 ******** 2025-04-17 01:43:47.555551 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:43:47.555645 | orchestrator | 2025-04-17 01:43:47.555915 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-04-17 01:43:47.556911 | orchestrator | Thursday 17 April 2025 01:43:47 +0000 (0:00:00.518) 0:00:47.393 ******** 2025-04-17 01:43:48.050165 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:43:48.050361 | orchestrator | 2025-04-17 01:43:48.050667 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-04-17 01:43:48.051705 | orchestrator | Thursday 17 April 2025 01:43:48 +0000 (0:00:00.496) 0:00:47.889 ******** 2025-04-17 01:43:48.364983 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:43:48.365640 | orchestrator | 2025-04-17 01:43:48.366915 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-04-17 01:43:48.368036 | orchestrator | Thursday 17 April 2025 01:43:48 +0000 (0:00:00.314) 0:00:48.203 ******** 2025-04-17 01:43:48.551292 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'vg_name': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'}) 2025-04-17 01:43:48.552471 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'vg_name': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'}) 2025-04-17 01:43:48.555031 | orchestrator | 2025-04-17 01:43:48.556042 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-04-17 01:43:48.556083 | orchestrator | Thursday 17 April 2025 01:43:48 +0000 (0:00:00.188) 0:00:48.391 ******** 2025-04-17 01:43:48.718730 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})  2025-04-17 01:43:48.719921 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})  2025-04-17 01:43:48.721248 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:43:48.722983 | orchestrator | 2025-04-17 01:43:48.723737 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-04-17 01:43:48.724868 | orchestrator | Thursday 17 April 2025 01:43:48 +0000 (0:00:00.167) 0:00:48.559 ******** 2025-04-17 01:43:48.893494 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})  2025-04-17 01:43:48.894068 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})  2025-04-17 01:43:48.894462 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:43:48.894484 | orchestrator | 2025-04-17 01:43:48.895066 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-04-17 01:43:48.895468 | orchestrator | Thursday 17 April 2025 01:43:48 +0000 (0:00:00.175) 0:00:48.734 ******** 2025-04-17 01:43:49.064238 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 
'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'})
2025-04-17 01:43:49.065167 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'})
2025-04-17 01:43:49.065201 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:43:49.065853 | orchestrator |
2025-04-17 01:43:49.066730 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-04-17 01:43:49.067090 | orchestrator | Thursday 17 April 2025 01:43:49 +0000 (0:00:00.167) 0:00:48.901 ********
2025-04-17 01:43:49.886941 | orchestrator | ok: [testbed-node-4] => {
2025-04-17 01:43:49.887142 | orchestrator |     "lvm_report": {
2025-04-17 01:43:49.888751 | orchestrator |         "lv": [
2025-04-17 01:43:49.892138 | orchestrator |             {
2025-04-17 01:43:49.894185 | orchestrator |                 "lv_name": "osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343",
2025-04-17 01:43:49.894249 | orchestrator |                 "vg_name": "ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343"
2025-04-17 01:43:49.894687 | orchestrator |             },
2025-04-17 01:43:49.895055 | orchestrator |             {
2025-04-17 01:43:49.895893 | orchestrator |                 "lv_name": "osd-block-b69f2859-f86c-57c9-a956-28222694e166",
2025-04-17 01:43:49.896458 | orchestrator |                 "vg_name": "ceph-b69f2859-f86c-57c9-a956-28222694e166"
2025-04-17 01:43:49.897151 | orchestrator |             }
2025-04-17 01:43:49.897274 | orchestrator |         ],
2025-04-17 01:43:49.897781 | orchestrator |         "pv": [
2025-04-17 01:43:49.898278 | orchestrator |             {
2025-04-17 01:43:49.898683 | orchestrator |                 "pv_name": "/dev/sdb",
2025-04-17 01:43:49.899474 | orchestrator |                 "vg_name": "ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343"
2025-04-17 01:43:49.899652 | orchestrator |             },
2025-04-17 01:43:49.899928 | orchestrator |             {
2025-04-17 01:43:49.900189 | orchestrator |                 "pv_name": "/dev/sdc",
2025-04-17 01:43:49.900745 | orchestrator |                 "vg_name": "ceph-b69f2859-f86c-57c9-a956-28222694e166"
2025-04-17 01:43:49.900858 | orchestrator |             }
2025-04-17 01:43:49.901147 | orchestrator |         ]
2025-04-17 01:43:49.901375 | orchestrator |     }
2025-04-17 01:43:49.901706 | orchestrator | }
2025-04-17 01:43:49.902077 | orchestrator |
2025-04-17 01:43:49.902201 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-04-17 01:43:49.902546 | orchestrator |
2025-04-17 01:43:49.902749 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-04-17 01:43:49.903162 | orchestrator | Thursday 17 April 2025 01:43:49 +0000 (0:00:00.825) 0:00:49.726 ********
2025-04-17 01:43:50.132640 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-04-17 01:43:50.134474 | orchestrator |
2025-04-17 01:43:50.135213 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-04-17 01:43:50.135602 | orchestrator | Thursday 17 April 2025 01:43:50 +0000 (0:00:00.246) 0:00:49.973 ********
2025-04-17 01:43:50.379696 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:43:50.380029 | orchestrator |
2025-04-17 01:43:50.380321 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-04-17 01:43:50.381076 | orchestrator | Thursday 17 April 2025 01:43:50 +0000 (0:00:00.246) 0:00:50.219 ********
2025-04-17 01:43:50.823669 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-04-17 01:43:50.824688 | orchestrator | included:
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-04-17 01:43:50.826078 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-04-17 01:43:50.827196 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-04-17 01:43:50.828574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-04-17 01:43:50.829372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-04-17 01:43:50.830299 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-04-17 01:43:50.831327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-04-17 01:43:50.832398 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-04-17 01:43:50.832916 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-04-17 01:43:50.833727 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-04-17 01:43:50.834291 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-04-17 01:43:50.834755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-04-17 01:43:50.835168 | orchestrator | 2025-04-17 01:43:50.835687 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 01:43:50.835962 | orchestrator | Thursday 17 April 2025 01:43:50 +0000 (0:00:00.444) 0:00:50.664 ******** 2025-04-17 01:43:51.022981 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:51.024014 | orchestrator | 2025-04-17 01:43:51.025017 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 01:43:51.026333 | orchestrator | Thursday 17 April 2025 01:43:51 +0000 (0:00:00.198) 0:00:50.862 ******** 2025-04-17 01:43:51.218205 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:51.219262 | orchestrator | 2025-04-17 01:43:51.220632 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 01:43:51.221737 | orchestrator | Thursday 17 April 2025 01:43:51 +0000 (0:00:00.195) 0:00:51.058 ******** 2025-04-17 01:43:51.441192 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:51.443539 | orchestrator | 2025-04-17 01:43:51.443635 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 01:43:51.640375 | orchestrator | Thursday 17 April 2025 01:43:51 +0000 (0:00:00.221) 0:00:51.279 ******** 2025-04-17 01:43:51.640582 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:51.641919 | orchestrator | 2025-04-17 01:43:51.643186 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 01:43:51.643809 | orchestrator | Thursday 17 April 2025 01:43:51 +0000 (0:00:00.200) 0:00:51.480 ******** 2025-04-17 01:43:51.841650 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:51.841850 | orchestrator | 2025-04-17 01:43:51.841878 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 01:43:51.842971 | orchestrator | Thursday 17 April 2025 01:43:51 +0000 (0:00:00.200) 0:00:51.680 ******** 2025-04-17 01:43:52.410350 | 
orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:52.411070 | orchestrator | 2025-04-17 01:43:52.412261 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 01:43:52.413414 | orchestrator | Thursday 17 April 2025 01:43:52 +0000 (0:00:00.568) 0:00:52.249 ******** 2025-04-17 01:43:52.608300 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:52.609043 | orchestrator | 2025-04-17 01:43:52.609089 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 01:43:52.609989 | orchestrator | Thursday 17 April 2025 01:43:52 +0000 (0:00:00.198) 0:00:52.448 ******** 2025-04-17 01:43:52.805387 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:52.805653 | orchestrator | 2025-04-17 01:43:52.805716 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 01:43:52.806149 | orchestrator | Thursday 17 April 2025 01:43:52 +0000 (0:00:00.197) 0:00:52.645 ******** 2025-04-17 01:43:53.219263 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96) 2025-04-17 01:43:53.219404 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96) 2025-04-17 01:43:53.220703 | orchestrator | 2025-04-17 01:43:53.221502 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 01:43:53.223452 | orchestrator | Thursday 17 April 2025 01:43:53 +0000 (0:00:00.413) 0:00:53.059 ******** 2025-04-17 01:43:53.637548 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c4c813ed-e09b-49ac-b96f-625695efceb2) 2025-04-17 01:43:53.637966 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c4c813ed-e09b-49ac-b96f-625695efceb2) 2025-04-17 01:43:53.638913 | orchestrator | 2025-04-17 01:43:53.639327 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 01:43:53.643276 | orchestrator | Thursday 17 April 2025 01:43:53 +0000 (0:00:00.418) 0:00:53.477 ******** 2025-04-17 01:43:54.074353 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6309ce49-a4ed-4da7-82b1-29aa79f26650) 2025-04-17 01:43:54.075580 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6309ce49-a4ed-4da7-82b1-29aa79f26650) 2025-04-17 01:43:54.075713 | orchestrator | 2025-04-17 01:43:54.076587 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 01:43:54.076632 | orchestrator | Thursday 17 April 2025 01:43:54 +0000 (0:00:00.436) 0:00:53.914 ******** 2025-04-17 01:43:54.532787 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_42d2e0a2-f124-4e98-b4f2-6b7948e65700) 2025-04-17 01:43:54.533498 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_42d2e0a2-f124-4e98-b4f2-6b7948e65700) 2025-04-17 01:43:54.534829 | orchestrator | 2025-04-17 01:43:54.535980 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-17 01:43:54.536919 | orchestrator | Thursday 17 April 2025 01:43:54 +0000 (0:00:00.455) 0:00:54.369 ******** 2025-04-17 01:43:54.863180 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-17 01:43:54.863757 | orchestrator | 2025-04-17 01:43:54.864643 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 
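For reference: the loop above includes /ansible/tasks/_add-device-links.yml once per discovered device (loop0..loop7, sda..sdd, sr0) and, judging by the items it reports, appends each device's stable /dev/disk/by-id aliases (scsi-0QEMU_..., scsi-SQEMU_..., ata-...) to the candidate list, so that later Ceph configuration can match devices by persistent name. A minimal sketch of that pattern, assuming gathered hardware facts; the _available_devices fact name is illustrative and not necessarily what the testbed role uses:

    # _add-device-links.yml (sketch); "item" is a kernel device name such as sdb
    - name: Add known links to the list of available block devices
      ansible.builtin.set_fact:
        _available_devices: "{{ _available_devices | default([]) + ansible_facts['devices'][item]['links']['ids'] }}"
      when: ansible_facts['devices'][item]['links']['ids'] | default([]) | length > 0

The skipped iterations in the log (loop devices and the DVD drive aside, any device without by-id links) would correspond to the when: guard evaluating false.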
2025-04-17 01:43:54.865395 | orchestrator | Thursday 17 April 2025 01:43:54 +0000 (0:00:00.334) 0:00:54.703 ******** 2025-04-17 01:43:55.325322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-04-17 01:43:55.325562 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-04-17 01:43:55.326543 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-04-17 01:43:55.328567 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-04-17 01:43:55.330186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-04-17 01:43:55.330707 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-04-17 01:43:55.331708 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-04-17 01:43:55.332521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-04-17 01:43:55.333189 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-04-17 01:43:55.333830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-04-17 01:43:55.334526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-04-17 01:43:55.335298 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-04-17 01:43:55.335514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-04-17 01:43:55.336013 | orchestrator | 2025-04-17 01:43:55.336406 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:55.336880 | orchestrator | Thursday 17 April 2025 01:43:55 +0000 (0:00:00.460) 0:00:55.164 ******** 2025-04-17 01:43:55.907768 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:55.908273 | orchestrator | 2025-04-17 01:43:55.908878 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:55.909574 | orchestrator | Thursday 17 April 2025 01:43:55 +0000 (0:00:00.581) 0:00:55.746 ******** 2025-04-17 01:43:56.108913 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:56.109584 | orchestrator | 2025-04-17 01:43:56.110850 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:56.111309 | orchestrator | Thursday 17 April 2025 01:43:56 +0000 (0:00:00.202) 0:00:55.949 ******** 2025-04-17 01:43:56.313560 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:56.314220 | orchestrator | 2025-04-17 01:43:56.315274 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:56.316211 | orchestrator | Thursday 17 April 2025 01:43:56 +0000 (0:00:00.203) 0:00:56.153 ******** 2025-04-17 01:43:56.511030 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:56.511947 | orchestrator | 2025-04-17 01:43:56.512681 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:56.513332 | orchestrator | Thursday 17 April 2025 01:43:56 +0000 (0:00:00.196) 0:00:56.350 ******** 2025-04-17 01:43:56.734990 | 
orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:56.735195 | orchestrator | 2025-04-17 01:43:56.735267 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:56.735539 | orchestrator | Thursday 17 April 2025 01:43:56 +0000 (0:00:00.226) 0:00:56.576 ******** 2025-04-17 01:43:56.925537 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:56.925722 | orchestrator | 2025-04-17 01:43:56.925751 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:56.926651 | orchestrator | Thursday 17 April 2025 01:43:56 +0000 (0:00:00.188) 0:00:56.764 ******** 2025-04-17 01:43:57.121643 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:57.122091 | orchestrator | 2025-04-17 01:43:57.122130 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:57.122879 | orchestrator | Thursday 17 April 2025 01:43:57 +0000 (0:00:00.196) 0:00:56.961 ******** 2025-04-17 01:43:57.312939 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:57.313136 | orchestrator | 2025-04-17 01:43:57.314003 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:57.316410 | orchestrator | Thursday 17 April 2025 01:43:57 +0000 (0:00:00.190) 0:00:57.152 ******** 2025-04-17 01:43:58.206607 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-04-17 01:43:58.206806 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-04-17 01:43:58.210276 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-04-17 01:43:58.402746 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-04-17 01:43:58.402886 | orchestrator | 2025-04-17 01:43:58.402902 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:58.402914 | orchestrator | Thursday 17 April 2025 01:43:58 +0000 (0:00:00.891) 0:00:58.043 ******** 2025-04-17 01:43:58.402941 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:58.403120 | orchestrator | 2025-04-17 01:43:58.405993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:59.000585 | orchestrator | Thursday 17 April 2025 01:43:58 +0000 (0:00:00.198) 0:00:58.241 ******** 2025-04-17 01:43:59.000770 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:59.003610 | orchestrator | 2025-04-17 01:43:59.003677 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:59.004047 | orchestrator | Thursday 17 April 2025 01:43:58 +0000 (0:00:00.596) 0:00:58.838 ******** 2025-04-17 01:43:59.224707 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:59.224883 | orchestrator | 2025-04-17 01:43:59.225052 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-17 01:43:59.225628 | orchestrator | Thursday 17 April 2025 01:43:59 +0000 (0:00:00.224) 0:00:59.063 ******** 2025-04-17 01:43:59.462287 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:59.462538 | orchestrator | 2025-04-17 01:43:59.463494 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-04-17 01:43:59.463889 | orchestrator | Thursday 17 April 2025 01:43:59 +0000 (0:00:00.238) 0:00:59.302 ******** 2025-04-17 01:43:59.597652 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:43:59.598168 | orchestrator | 
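The tasks that follow are where the only real changes of this run happen: each entry of ceph_osd_devices (here sdb and sdc, each carrying an osd_lvm_uuid) becomes one LVM volume group named ceph-<osd_lvm_uuid> on the raw device and one logical volume named osd-block-<osd_lvm_uuid> inside it, which is the layout the lvm_report at the end of the play confirms. A hedged sketch of equivalent tasks using the community.general LVM modules; the variable layout is inferred from the item output below, not copied from the actual OSISM role:

    # Inventory data as suggested by the loop items (sketch):
    # ceph_osd_devices:
    #   sdb:
    #     osd_lvm_uuid: a9d35e4b-2444-59e0-b6b9-5664c21b8a9c
    #   sdc:
    #     osd_lvm_uuid: af980f31-aa48-52cf-851d-a23b8b791ab9

    - name: Create block VGs  # one VG per OSD device, PV is the raw disk
      community.general.lvg:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        pvs: "/dev/{{ item.key }}"
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Create block LVs  # one LV filling the VG, used as the OSD block device
      community.general.lvol:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
        size: 100%VG
      loop: "{{ ceph_osd_devices | dict2items }}"

With no separate ceph_db_devices, ceph_wal_devices, or ceph_db_wal_devices configured, all of the DB/WAL sizing and creation tasks stay skipped, and the OSDs are collocated on their data devices.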
2025-04-17 01:43:59.600697 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-04-17 01:43:59.797573 | orchestrator | Thursday 17 April 2025 01:43:59 +0000 (0:00:00.134) 0:00:59.436 ********
2025-04-17 01:43:59.797756 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'}})
2025-04-17 01:43:59.797839 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'af980f31-aa48-52cf-851d-a23b8b791ab9'}})
2025-04-17 01:43:59.798480 | orchestrator |
2025-04-17 01:43:59.799172 | orchestrator | TASK [Create block VGs] ********************************************************
2025-04-17 01:43:59.799943 | orchestrator | Thursday 17 April 2025 01:43:59 +0000 (0:00:00.200) 0:00:59.637 ********
2025-04-17 01:44:01.840774 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})
2025-04-17 01:44:01.841223 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})
2025-04-17 01:44:01.841609 | orchestrator |
2025-04-17 01:44:01.843045 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-04-17 01:44:01.844823 | orchestrator | Thursday 17 April 2025 01:44:01 +0000 (0:00:02.041) 0:01:01.679 ********
2025-04-17 01:44:02.011048 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})
2025-04-17 01:44:02.011302 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})
2025-04-17 01:44:02.011370 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:44:02.011723 | orchestrator |
2025-04-17 01:44:02.012038 | orchestrator | TASK [Create block LVs] ********************************************************
2025-04-17 01:44:02.012393 | orchestrator | Thursday 17 April 2025 01:44:02 +0000 (0:00:00.171) 0:01:01.851 ********
2025-04-17 01:44:03.254735 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})
2025-04-17 01:44:03.254938 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})
2025-04-17 01:44:03.255347 | orchestrator |
2025-04-17 01:44:03.256048 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-04-17 01:44:03.257703 | orchestrator | Thursday 17 April 2025 01:44:03 +0000 (0:00:01.242) 0:01:03.093 ********
2025-04-17 01:44:03.430409 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})
2025-04-17 01:44:03.430694 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})
2025-04-17 01:44:03.431975 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:44:03.433514 | orchestrator |
2025-04-17 01:44:03.434139 | orchestrator | TASK [Create DB VGs]
*********************************************************** 2025-04-17 01:44:03.434221 | orchestrator | Thursday 17 April 2025 01:44:03 +0000 (0:00:00.175) 0:01:03.269 ******** 2025-04-17 01:44:03.754337 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:03.755133 | orchestrator | 2025-04-17 01:44:03.756268 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-04-17 01:44:03.757779 | orchestrator | Thursday 17 April 2025 01:44:03 +0000 (0:00:00.325) 0:01:03.594 ******** 2025-04-17 01:44:03.914012 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})  2025-04-17 01:44:03.915625 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})  2025-04-17 01:44:03.916960 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:03.918568 | orchestrator | 2025-04-17 01:44:03.919689 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-04-17 01:44:03.920569 | orchestrator | Thursday 17 April 2025 01:44:03 +0000 (0:00:00.158) 0:01:03.752 ******** 2025-04-17 01:44:04.067916 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:04.068313 | orchestrator | 2025-04-17 01:44:04.068337 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-04-17 01:44:04.069025 | orchestrator | Thursday 17 April 2025 01:44:04 +0000 (0:00:00.152) 0:01:03.905 ******** 2025-04-17 01:44:04.240026 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})  2025-04-17 01:44:04.240407 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})  2025-04-17 01:44:04.241121 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:04.241797 | orchestrator | 2025-04-17 01:44:04.242759 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-04-17 01:44:04.243398 | orchestrator | Thursday 17 April 2025 01:44:04 +0000 (0:00:00.174) 0:01:04.079 ******** 2025-04-17 01:44:04.384373 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:04.384660 | orchestrator | 2025-04-17 01:44:04.384806 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-04-17 01:44:04.386138 | orchestrator | Thursday 17 April 2025 01:44:04 +0000 (0:00:00.145) 0:01:04.224 ******** 2025-04-17 01:44:04.557677 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})  2025-04-17 01:44:04.558150 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})  2025-04-17 01:44:04.558454 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:04.559353 | orchestrator | 2025-04-17 01:44:04.559616 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-04-17 01:44:04.560257 | orchestrator | Thursday 17 April 2025 01:44:04 +0000 (0:00:00.174) 0:01:04.399 ******** 2025-04-17 01:44:04.706788 | orchestrator | ok: 
[testbed-node-5] 2025-04-17 01:44:04.707196 | orchestrator | 2025-04-17 01:44:04.708557 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-04-17 01:44:04.709626 | orchestrator | Thursday 17 April 2025 01:44:04 +0000 (0:00:00.147) 0:01:04.546 ******** 2025-04-17 01:44:04.871384 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})  2025-04-17 01:44:04.871789 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})  2025-04-17 01:44:04.873059 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:04.874542 | orchestrator | 2025-04-17 01:44:04.874968 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-04-17 01:44:04.875991 | orchestrator | Thursday 17 April 2025 01:44:04 +0000 (0:00:00.164) 0:01:04.711 ******** 2025-04-17 01:44:05.051878 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})  2025-04-17 01:44:05.055345 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})  2025-04-17 01:44:05.057319 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:05.057614 | orchestrator | 2025-04-17 01:44:05.057651 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-04-17 01:44:05.059986 | orchestrator | Thursday 17 April 2025 01:44:05 +0000 (0:00:00.177) 0:01:04.889 ******** 2025-04-17 01:44:05.215195 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})  2025-04-17 01:44:05.215936 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})  2025-04-17 01:44:05.216839 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:05.217488 | orchestrator | 2025-04-17 01:44:05.218316 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-04-17 01:44:05.219122 | orchestrator | Thursday 17 April 2025 01:44:05 +0000 (0:00:00.166) 0:01:05.055 ******** 2025-04-17 01:44:05.354219 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:05.354402 | orchestrator | 2025-04-17 01:44:05.354777 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-04-17 01:44:05.354813 | orchestrator | Thursday 17 April 2025 01:44:05 +0000 (0:00:00.139) 0:01:05.194 ******** 2025-04-17 01:44:05.659406 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:05.659664 | orchestrator | 2025-04-17 01:44:05.660228 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-04-17 01:44:05.661448 | orchestrator | Thursday 17 April 2025 01:44:05 +0000 (0:00:00.305) 0:01:05.500 ******** 2025-04-17 01:44:05.804030 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:05.804237 | orchestrator | 2025-04-17 01:44:05.804732 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-04-17 01:44:05.805157 | orchestrator | 
Thursday 17 April 2025 01:44:05 +0000 (0:00:00.144) 0:01:05.644 ******** 2025-04-17 01:44:05.949377 | orchestrator | ok: [testbed-node-5] => { 2025-04-17 01:44:05.951827 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-04-17 01:44:05.952139 | orchestrator | } 2025-04-17 01:44:05.953691 | orchestrator | 2025-04-17 01:44:05.953773 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-04-17 01:44:05.953908 | orchestrator | Thursday 17 April 2025 01:44:05 +0000 (0:00:00.144) 0:01:05.788 ******** 2025-04-17 01:44:06.095548 | orchestrator | ok: [testbed-node-5] => { 2025-04-17 01:44:06.095802 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-04-17 01:44:06.095913 | orchestrator | } 2025-04-17 01:44:06.096490 | orchestrator | 2025-04-17 01:44:06.096818 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-04-17 01:44:06.097890 | orchestrator | Thursday 17 April 2025 01:44:06 +0000 (0:00:00.146) 0:01:05.935 ******** 2025-04-17 01:44:06.237788 | orchestrator | ok: [testbed-node-5] => { 2025-04-17 01:44:06.238119 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-04-17 01:44:06.239582 | orchestrator | } 2025-04-17 01:44:06.240893 | orchestrator | 2025-04-17 01:44:06.241820 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-04-17 01:44:06.242249 | orchestrator | Thursday 17 April 2025 01:44:06 +0000 (0:00:00.142) 0:01:06.077 ******** 2025-04-17 01:44:06.748128 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:44:06.748595 | orchestrator | 2025-04-17 01:44:06.748961 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-04-17 01:44:06.749479 | orchestrator | Thursday 17 April 2025 01:44:06 +0000 (0:00:00.511) 0:01:06.588 ******** 2025-04-17 01:44:07.254826 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:44:07.255022 | orchestrator | 2025-04-17 01:44:07.255053 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-04-17 01:44:07.255352 | orchestrator | Thursday 17 April 2025 01:44:07 +0000 (0:00:00.502) 0:01:07.091 ******** 2025-04-17 01:44:07.730096 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:44:07.731109 | orchestrator | 2025-04-17 01:44:07.732016 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-04-17 01:44:07.733522 | orchestrator | Thursday 17 April 2025 01:44:07 +0000 (0:00:00.478) 0:01:07.569 ******** 2025-04-17 01:44:07.873706 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:44:07.875500 | orchestrator | 2025-04-17 01:44:07.875684 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-04-17 01:44:07.875778 | orchestrator | Thursday 17 April 2025 01:44:07 +0000 (0:00:00.143) 0:01:07.713 ******** 2025-04-17 01:44:07.984014 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:07.985338 | orchestrator | 2025-04-17 01:44:07.985386 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-04-17 01:44:07.985740 | orchestrator | Thursday 17 April 2025 01:44:07 +0000 (0:00:00.109) 0:01:07.823 ******** 2025-04-17 01:44:08.099638 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:08.100552 | orchestrator | 2025-04-17 01:44:08.100851 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-04-17 01:44:08.102090 | 
orchestrator | Thursday 17 April 2025 01:44:08 +0000 (0:00:00.116) 0:01:07.940 ******** 2025-04-17 01:44:08.432010 | orchestrator | ok: [testbed-node-5] => { 2025-04-17 01:44:08.432216 | orchestrator |  "vgs_report": { 2025-04-17 01:44:08.433495 | orchestrator |  "vg": [] 2025-04-17 01:44:08.434770 | orchestrator |  } 2025-04-17 01:44:08.435253 | orchestrator | } 2025-04-17 01:44:08.436372 | orchestrator | 2025-04-17 01:44:08.437592 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-04-17 01:44:08.438103 | orchestrator | Thursday 17 April 2025 01:44:08 +0000 (0:00:00.331) 0:01:08.271 ******** 2025-04-17 01:44:08.561662 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:08.562137 | orchestrator | 2025-04-17 01:44:08.562877 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-04-17 01:44:08.563544 | orchestrator | Thursday 17 April 2025 01:44:08 +0000 (0:00:00.130) 0:01:08.402 ******** 2025-04-17 01:44:08.713033 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:08.713253 | orchestrator | 2025-04-17 01:44:08.716173 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-04-17 01:44:08.848689 | orchestrator | Thursday 17 April 2025 01:44:08 +0000 (0:00:00.150) 0:01:08.552 ******** 2025-04-17 01:44:08.848855 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:08.849650 | orchestrator | 2025-04-17 01:44:08.851125 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-04-17 01:44:08.852024 | orchestrator | Thursday 17 April 2025 01:44:08 +0000 (0:00:00.137) 0:01:08.689 ******** 2025-04-17 01:44:08.990149 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:08.990370 | orchestrator | 2025-04-17 01:44:08.990725 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-04-17 01:44:08.992594 | orchestrator | Thursday 17 April 2025 01:44:08 +0000 (0:00:00.141) 0:01:08.830 ******** 2025-04-17 01:44:09.131377 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:09.132237 | orchestrator | 2025-04-17 01:44:09.133771 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-04-17 01:44:09.134566 | orchestrator | Thursday 17 April 2025 01:44:09 +0000 (0:00:00.141) 0:01:08.972 ******** 2025-04-17 01:44:09.261126 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:09.262493 | orchestrator | 2025-04-17 01:44:09.263174 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-04-17 01:44:09.264168 | orchestrator | Thursday 17 April 2025 01:44:09 +0000 (0:00:00.129) 0:01:09.101 ******** 2025-04-17 01:44:09.400762 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:09.400995 | orchestrator | 2025-04-17 01:44:09.402170 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-04-17 01:44:09.403401 | orchestrator | Thursday 17 April 2025 01:44:09 +0000 (0:00:00.138) 0:01:09.240 ******** 2025-04-17 01:44:09.536609 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:09.537534 | orchestrator | 2025-04-17 01:44:09.538523 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-04-17 01:44:09.539799 | orchestrator | Thursday 17 April 2025 01:44:09 +0000 (0:00:00.136) 0:01:09.377 ******** 2025-04-17 01:44:09.668358 | orchestrator | 
skipping: [testbed-node-5] 2025-04-17 01:44:09.669883 | orchestrator | 2025-04-17 01:44:09.670348 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-04-17 01:44:09.671464 | orchestrator | Thursday 17 April 2025 01:44:09 +0000 (0:00:00.130) 0:01:09.508 ******** 2025-04-17 01:44:09.802145 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:09.803105 | orchestrator | 2025-04-17 01:44:09.803143 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-04-17 01:44:09.803215 | orchestrator | Thursday 17 April 2025 01:44:09 +0000 (0:00:00.133) 0:01:09.641 ******** 2025-04-17 01:44:09.950940 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:09.951148 | orchestrator | 2025-04-17 01:44:09.951815 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-04-17 01:44:09.953179 | orchestrator | Thursday 17 April 2025 01:44:09 +0000 (0:00:00.149) 0:01:09.790 ******** 2025-04-17 01:44:10.279533 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:10.279751 | orchestrator | 2025-04-17 01:44:10.280584 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-04-17 01:44:10.281467 | orchestrator | Thursday 17 April 2025 01:44:10 +0000 (0:00:00.327) 0:01:10.118 ******** 2025-04-17 01:44:10.404932 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:10.405460 | orchestrator | 2025-04-17 01:44:10.405798 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-04-17 01:44:10.406577 | orchestrator | Thursday 17 April 2025 01:44:10 +0000 (0:00:00.127) 0:01:10.245 ******** 2025-04-17 01:44:10.544334 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:10.545005 | orchestrator | 2025-04-17 01:44:10.545874 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-04-17 01:44:10.546800 | orchestrator | Thursday 17 April 2025 01:44:10 +0000 (0:00:00.139) 0:01:10.385 ******** 2025-04-17 01:44:10.723334 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})  2025-04-17 01:44:10.723717 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})  2025-04-17 01:44:10.725139 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:10.726096 | orchestrator | 2025-04-17 01:44:10.727391 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-04-17 01:44:10.728800 | orchestrator | Thursday 17 April 2025 01:44:10 +0000 (0:00:00.179) 0:01:10.564 ******** 2025-04-17 01:44:10.880765 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})  2025-04-17 01:44:10.880994 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})  2025-04-17 01:44:10.882576 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:10.885077 | orchestrator | 2025-04-17 01:44:10.885260 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-04-17 01:44:10.886148 | orchestrator | Thursday 17 April 2025 
01:44:10 +0000 (0:00:00.156) 0:01:10.720 ******** 2025-04-17 01:44:11.040998 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})  2025-04-17 01:44:11.041712 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})  2025-04-17 01:44:11.042859 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:11.044116 | orchestrator | 2025-04-17 01:44:11.044978 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-04-17 01:44:11.046303 | orchestrator | Thursday 17 April 2025 01:44:11 +0000 (0:00:00.160) 0:01:10.880 ******** 2025-04-17 01:44:11.189788 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})  2025-04-17 01:44:11.190074 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})  2025-04-17 01:44:11.190896 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:11.191948 | orchestrator | 2025-04-17 01:44:11.193280 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-04-17 01:44:11.193951 | orchestrator | Thursday 17 April 2025 01:44:11 +0000 (0:00:00.149) 0:01:11.030 ******** 2025-04-17 01:44:11.347060 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})  2025-04-17 01:44:11.347704 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})  2025-04-17 01:44:11.350495 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:11.352730 | orchestrator | 2025-04-17 01:44:11.352795 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-04-17 01:44:11.352821 | orchestrator | Thursday 17 April 2025 01:44:11 +0000 (0:00:00.156) 0:01:11.186 ******** 2025-04-17 01:44:11.513306 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})  2025-04-17 01:44:11.513682 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})  2025-04-17 01:44:11.513870 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:11.514803 | orchestrator | 2025-04-17 01:44:11.515835 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-04-17 01:44:11.516061 | orchestrator | Thursday 17 April 2025 01:44:11 +0000 (0:00:00.166) 0:01:11.353 ******** 2025-04-17 01:44:11.682562 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})  2025-04-17 01:44:11.682808 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})  2025-04-17 01:44:11.682888 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:11.683610 | 
orchestrator | 2025-04-17 01:44:11.684349 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-04-17 01:44:11.684550 | orchestrator | Thursday 17 April 2025 01:44:11 +0000 (0:00:00.169) 0:01:11.522 ******** 2025-04-17 01:44:11.847103 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})  2025-04-17 01:44:11.848701 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})  2025-04-17 01:44:11.851249 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:11.851618 | orchestrator | 2025-04-17 01:44:11.851657 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-04-17 01:44:11.851680 | orchestrator | Thursday 17 April 2025 01:44:11 +0000 (0:00:00.164) 0:01:11.686 ******** 2025-04-17 01:44:12.519517 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:44:12.520038 | orchestrator | 2025-04-17 01:44:12.520578 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-04-17 01:44:12.521253 | orchestrator | Thursday 17 April 2025 01:44:12 +0000 (0:00:00.670) 0:01:12.357 ******** 2025-04-17 01:44:13.023101 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:44:13.023303 | orchestrator | 2025-04-17 01:44:13.024014 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-04-17 01:44:13.024309 | orchestrator | Thursday 17 April 2025 01:44:13 +0000 (0:00:00.505) 0:01:12.863 ******** 2025-04-17 01:44:13.173335 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:44:13.174208 | orchestrator | 2025-04-17 01:44:13.175065 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-04-17 01:44:13.175259 | orchestrator | Thursday 17 April 2025 01:44:13 +0000 (0:00:00.148) 0:01:13.011 ******** 2025-04-17 01:44:13.355156 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'vg_name': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'}) 2025-04-17 01:44:13.355352 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'vg_name': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'}) 2025-04-17 01:44:13.356255 | orchestrator | 2025-04-17 01:44:13.356533 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-04-17 01:44:13.358893 | orchestrator | Thursday 17 April 2025 01:44:13 +0000 (0:00:00.183) 0:01:13.195 ******** 2025-04-17 01:44:13.516343 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})  2025-04-17 01:44:13.516675 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})  2025-04-17 01:44:13.517468 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:13.518537 | orchestrator | 2025-04-17 01:44:13.519212 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-04-17 01:44:13.521999 | orchestrator | Thursday 17 April 2025 01:44:13 +0000 (0:00:00.161) 0:01:13.356 ******** 2025-04-17 01:44:13.674621 | orchestrator | skipping: [testbed-node-5] 
=> (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})
2025-04-17 01:44:13.675015 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})
2025-04-17 01:44:13.675117 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:44:13.675683 | orchestrator |
2025-04-17 01:44:13.676482 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-04-17 01:44:13.677360 | orchestrator | Thursday 17 April 2025 01:44:13 +0000 (0:00:00.158) 0:01:13.515 ********
2025-04-17 01:44:13.836792 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'})
2025-04-17 01:44:13.837011 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'})
2025-04-17 01:44:13.837863 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:44:13.838337 | orchestrator |
2025-04-17 01:44:13.839586 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-04-17 01:44:13.839739 | orchestrator | Thursday 17 April 2025 01:44:13 +0000 (0:00:00.161) 0:01:13.677 ********
2025-04-17 01:44:14.444747 | orchestrator | ok: [testbed-node-5] => {
2025-04-17 01:44:14.445170 | orchestrator |     "lvm_report": {
2025-04-17 01:44:14.445855 | orchestrator |         "lv": [
2025-04-17 01:44:14.447017 | orchestrator |             {
2025-04-17 01:44:14.447835 | orchestrator |                 "lv_name": "osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c",
2025-04-17 01:44:14.449255 | orchestrator |                 "vg_name": "ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c"
2025-04-17 01:44:14.450527 | orchestrator |             },
2025-04-17 01:44:14.451844 | orchestrator |             {
2025-04-17 01:44:14.452319 | orchestrator |                 "lv_name": "osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9",
2025-04-17 01:44:14.453527 | orchestrator |                 "vg_name": "ceph-af980f31-aa48-52cf-851d-a23b8b791ab9"
2025-04-17 01:44:14.454390 | orchestrator |             }
2025-04-17 01:44:14.455141 | orchestrator |         ],
2025-04-17 01:44:14.456189 | orchestrator |         "pv": [
2025-04-17 01:44:14.456649 | orchestrator |             {
2025-04-17 01:44:14.457560 | orchestrator |                 "pv_name": "/dev/sdb",
2025-04-17 01:44:14.458772 | orchestrator |                 "vg_name": "ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c"
2025-04-17 01:44:14.459161 | orchestrator |             },
2025-04-17 01:44:14.460068 | orchestrator |             {
2025-04-17 01:44:14.460505 | orchestrator |                 "pv_name": "/dev/sdc",
2025-04-17 01:44:14.461332 | orchestrator |                 "vg_name": "ceph-af980f31-aa48-52cf-851d-a23b8b791ab9"
2025-04-17 01:44:14.461882 | orchestrator |             }
2025-04-17 01:44:14.462860 | orchestrator |         ]
2025-04-17 01:44:14.463599 | orchestrator |     }
2025-04-17 01:44:14.464286 | orchestrator | }
2025-04-17 01:44:14.464829 | orchestrator |
2025-04-17 01:44:14.465362 | orchestrator | PLAY RECAP *********************************************************************
2025-04-17 01:44:14.466205 | orchestrator | 2025-04-17 01:44:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-04-17 01:44:14.466332 | orchestrator | 2025-04-17 01:44:14 | INFO  | Please wait and do not abort execution.
2025-04-17 01:44:14.467011 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-04-17 01:44:14.467856 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-04-17 01:44:14.468399 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-04-17 01:44:14.469064 | orchestrator |
2025-04-17 01:44:14.469539 | orchestrator |
2025-04-17 01:44:14.470201 | orchestrator |
2025-04-17 01:44:14.470616 | orchestrator | TASKS RECAP ********************************************************************
2025-04-17 01:44:14.471347 | orchestrator | Thursday 17 April 2025 01:44:14 +0000 (0:00:00.606) 0:01:14.284 ********
2025-04-17 01:44:14.471847 | orchestrator | ===============================================================================
2025-04-17 01:44:14.472343 | orchestrator | Create block VGs -------------------------------------------------------- 6.39s
2025-04-17 01:44:14.472910 | orchestrator | Create block LVs -------------------------------------------------------- 3.92s
2025-04-17 01:44:14.473439 | orchestrator | Print LVM report data --------------------------------------------------- 2.08s
2025-04-17 01:44:14.473934 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.89s
2025-04-17 01:44:14.474233 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.69s
2025-04-17 01:44:14.474915 | orchestrator | Add known links to the list of available block devices ------------------ 1.58s
2025-04-17 01:44:14.475378 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.51s
2025-04-17 01:44:14.476019 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.50s
2025-04-17 01:44:14.477538 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.49s
2025-04-17 01:44:14.478542 | orchestrator | Add known partitions to the list of available block devices ------------- 1.37s
2025-04-17 01:44:14.479352 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.06s
2025-04-17 01:44:14.480144 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s
2025-04-17 01:44:14.481177 | orchestrator | Add known links to the list of available block devices ------------------ 0.89s
2025-04-17 01:44:14.481999 | orchestrator | Get initial list of available block devices ----------------------------- 0.71s
2025-04-17 01:44:14.482876 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.70s
2025-04-17 01:44:14.483891 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.68s
2025-04-17 01:44:14.484453 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.65s
2025-04-17 01:44:14.485086 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-04-17 01:44:14.486110 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.64s
2025-04-17 01:44:14.486780 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2025-04-17 01:44:16.296056 | orchestrator | 2025-04-17 01:44:16 | INFO  | Task 5f228231-5740-418f-90a1-6939140a2dc2 (facts) was prepared for execution.
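For orientation: the verification steps near the end of the play above ("Get list of Ceph LVs/PVs with associated VGs", "Combine JSON from _lvs_cmd_output/_pvs_cmd_output") match the JSON reporting mode of the LVM2 tools. The register names come straight from the task title; the exact command flags are an assumption, and the real task presumably also filters for VGs named ceph-*. A sketch of how such a report can be collected and merged into the lvm_report structure printed above:

    - name: Get list of Ceph LVs with associated VGs
      ansible.builtin.command: lvs -o lv_name,vg_name --reportformat json
      register: _lvs_cmd_output
      changed_when: false

    - name: Get list of Ceph PVs with associated VGs
      ansible.builtin.command: pvs -o pv_name,vg_name --reportformat json
      register: _pvs_cmd_output
      changed_when: false

    - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
      ansible.builtin.set_fact:
        lvm_report:
          lv: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
          pv: "{{ (_pvs_cmd_output.stdout | from_json).report[0].pv }}"

Selecting only lv_name/vg_name and pv_name/vg_name reproduces the two-column pairs seen in the report, e.g. /dev/sdb -> ceph-a9d35e4b-... on testbed-node-5.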
2025-04-17 01:44:19.406748 | orchestrator | 2025-04-17 01:44:16 | INFO  | It takes a moment until task 5f228231-5740-418f-90a1-6939140a2dc2 (facts) has been started and output is visible here. 2025-04-17 01:44:19.406899 | orchestrator | 2025-04-17 01:44:19.408145 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-04-17 01:44:19.408526 | orchestrator | 2025-04-17 01:44:19.410623 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-04-17 01:44:19.413051 | orchestrator | Thursday 17 April 2025 01:44:19 +0000 (0:00:00.203) 0:00:00.203 ******** 2025-04-17 01:44:20.770430 | orchestrator | ok: [testbed-manager] 2025-04-17 01:44:20.771083 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:44:20.771959 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:44:20.772724 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:44:20.773257 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:44:20.773956 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:44:20.774852 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:44:20.775114 | orchestrator | 2025-04-17 01:44:20.775577 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-04-17 01:44:20.776226 | orchestrator | Thursday 17 April 2025 01:44:20 +0000 (0:00:01.363) 0:00:01.567 ******** 2025-04-17 01:44:20.937510 | orchestrator | skipping: [testbed-manager] 2025-04-17 01:44:21.014958 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:44:21.092513 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:44:21.169363 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:44:21.243967 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:44:21.948106 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:44:21.948823 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:21.952106 | orchestrator | 2025-04-17 01:44:21.952277 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-17 01:44:21.953324 | orchestrator | 2025-04-17 01:44:21.956687 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-17 01:44:21.957258 | orchestrator | Thursday 17 April 2025 01:44:21 +0000 (0:00:01.178) 0:00:02.746 ******** 2025-04-17 01:44:26.546492 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:44:26.547542 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:44:26.548611 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:44:26.548672 | orchestrator | ok: [testbed-manager] 2025-04-17 01:44:26.551932 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:44:26.552145 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:44:26.553272 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:44:26.553299 | orchestrator | 2025-04-17 01:44:26.553316 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-04-17 01:44:26.553333 | orchestrator | 2025-04-17 01:44:26.553349 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-17 01:44:26.553370 | orchestrator | Thursday 17 April 2025 01:44:26 +0000 (0:00:04.600) 0:00:07.346 ******** 2025-04-17 01:44:26.859579 | orchestrator | skipping: [testbed-manager] 2025-04-17 01:44:26.934841 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:44:27.007713 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:44:27.082153 | orchestrator | skipping: [testbed-node-2] 2025-04-17 
01:44:27.156573 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:44:27.195214 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:44:27.196087 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:44:27.196125 | orchestrator | 2025-04-17 01:44:27.197110 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 01:44:27.197158 | orchestrator | 2025-04-17 01:44:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-17 01:44:27.197799 | orchestrator | 2025-04-17 01:44:27 | INFO  | Please wait and do not abort execution. 2025-04-17 01:44:27.197842 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 01:44:27.198127 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 01:44:27.198519 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 01:44:27.199263 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 01:44:27.199995 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 01:44:27.200300 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 01:44:27.201145 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-17 01:44:27.201616 | orchestrator | 2025-04-17 01:44:27.202518 | orchestrator | Thursday 17 April 2025 01:44:27 +0000 (0:00:00.648) 0:00:07.994 ******** 2025-04-17 01:44:27.202902 | orchestrator | =============================================================================== 2025-04-17 01:44:27.203702 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.60s 2025-04-17 01:44:27.204321 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.36s 2025-04-17 01:44:27.204733 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.18s 2025-04-17 01:44:27.205032 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.65s 2025-04-17 01:44:27.729882 | orchestrator | 2025-04-17 01:44:27.733018 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Apr 17 01:44:27 UTC 2025 2025-04-17 01:44:29.089925 | orchestrator | 2025-04-17 01:44:29.090138 | orchestrator | 2025-04-17 01:44:29 | INFO  | Collection nutshell is prepared for execution 2025-04-17 01:44:29.094117 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [0] - dotfiles 2025-04-17 01:44:29.094198 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [0] - homer 2025-04-17 01:44:29.095484 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [0] - netdata 2025-04-17 01:44:29.095519 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [0] - openstackclient 2025-04-17 01:44:29.095534 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [0] - phpmyadmin 2025-04-17 01:44:29.095548 | orchestrator | 2025-04-17 01:44:29 | INFO  | A [0] - common 2025-04-17 01:44:29.095570 | orchestrator | 2025-04-17 01:44:29 | INFO  | A [1] -- loadbalancer 2025-04-17 01:44:29.095656 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [2] --- opensearch 2025-04-17 01:44:29.095679 | orchestrator | 2025-04-17 01:44:29 | INFO  | A [2] --- mariadb-ng 2025-04-17 01:44:29.096464 | orchestrator | 2025-04-17 
01:44:29 | INFO  | D [3] ---- horizon 2025-04-17 01:44:29.096514 | orchestrator | 2025-04-17 01:44:29 | INFO  | A [3] ---- keystone 2025-04-17 01:44:29.096533 | orchestrator | 2025-04-17 01:44:29 | INFO  | A [4] ----- neutron 2025-04-17 01:44:29.096548 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [5] ------ wait-for-nova 2025-04-17 01:44:29.096592 | orchestrator | 2025-04-17 01:44:29 | INFO  | A [5] ------ octavia 2025-04-17 01:44:29.096617 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [4] ----- barbican 2025-04-17 01:44:29.096738 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [4] ----- designate 2025-04-17 01:44:29.096761 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [4] ----- ironic 2025-04-17 01:44:29.096931 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [4] ----- placement 2025-04-17 01:44:29.096956 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [4] ----- magnum 2025-04-17 01:44:29.096972 | orchestrator | 2025-04-17 01:44:29 | INFO  | A [1] -- openvswitch 2025-04-17 01:44:29.096987 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [2] --- ovn 2025-04-17 01:44:29.097008 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [1] -- memcached 2025-04-17 01:44:29.097095 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [1] -- redis 2025-04-17 01:44:29.097113 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [1] -- rabbitmq-ng 2025-04-17 01:44:29.097131 | orchestrator | 2025-04-17 01:44:29 | INFO  | A [0] - kubernetes 2025-04-17 01:44:29.097218 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [1] -- kubeconfig 2025-04-17 01:44:29.097351 | orchestrator | 2025-04-17 01:44:29 | INFO  | A [1] -- copy-kubeconfig 2025-04-17 01:44:29.097376 | orchestrator | 2025-04-17 01:44:29 | INFO  | A [0] - ceph 2025-04-17 01:44:29.098640 | orchestrator | 2025-04-17 01:44:29 | INFO  | A [1] -- ceph-pools 2025-04-17 01:44:29.099000 | orchestrator | 2025-04-17 01:44:29 | INFO  | A [2] --- copy-ceph-keys 2025-04-17 01:44:29.099032 | orchestrator | 2025-04-17 01:44:29 | INFO  | A [3] ---- cephclient 2025-04-17 01:44:29.099220 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-04-17 01:44:29.099246 | orchestrator | 2025-04-17 01:44:29 | INFO  | A [4] ----- wait-for-keystone 2025-04-17 01:44:29.099262 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [5] ------ kolla-ceph-rgw 2025-04-17 01:44:29.099277 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [5] ------ glance 2025-04-17 01:44:29.099292 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [5] ------ cinder 2025-04-17 01:44:29.099307 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [5] ------ nova 2025-04-17 01:44:29.099328 | orchestrator | 2025-04-17 01:44:29 | INFO  | A [4] ----- prometheus 2025-04-17 01:44:29.216554 | orchestrator | 2025-04-17 01:44:29 | INFO  | D [5] ------ grafana 2025-04-17 01:44:29.216688 | orchestrator | 2025-04-17 01:44:29 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-04-17 01:44:31.428765 | orchestrator | 2025-04-17 01:44:29 | INFO  | Tasks are running in the background 2025-04-17 01:44:31.428905 | orchestrator | 2025-04-17 01:44:31 | INFO  | No task IDs specified, wait for all currently running tasks 2025-04-17 01:44:33.507234 | orchestrator | 2025-04-17 01:44:33 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:44:33.507380 | orchestrator | 2025-04-17 01:44:33 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:44:33.507532 | orchestrator | 2025-04-17 01:44:33 | INFO  | Task 
92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:44:33.508095 | orchestrator | 2025-04-17 01:44:33 | INFO  | Task 67c78fe9-6488-4c35-a194-f1fcdb4ceb78 is in state STARTED 2025-04-17 01:44:33.511264 | orchestrator | 2025-04-17 01:44:33 | INFO  | Task 10102133-0566-4bf9-b5d6-ad4cda836344 is in state STARTED 2025-04-17 01:44:33.511558 | orchestrator | 2025-04-17 01:44:33 | INFO  | Task 02ed3ce5-8f99-4f78-a76f-b09164d9f5fc is in state STARTED 2025-04-17 01:44:36.533707 | orchestrator | 2025-04-17 01:44:33 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:44:36.533838 | orchestrator | 2025-04-17 01:44:36 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:44:36.533959 | orchestrator | 2025-04-17 01:44:36 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:44:36.533988 | orchestrator | 2025-04-17 01:44:36 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:44:36.534464 | orchestrator | 2025-04-17 01:44:36 | INFO  | Task 67c78fe9-6488-4c35-a194-f1fcdb4ceb78 is in state STARTED 2025-04-17 01:44:36.535013 | orchestrator | 2025-04-17 01:44:36 | INFO  | Task 10102133-0566-4bf9-b5d6-ad4cda836344 is in state STARTED 2025-04-17 01:44:36.536188 | orchestrator | 2025-04-17 01:44:36 | INFO  | Task 02ed3ce5-8f99-4f78-a76f-b09164d9f5fc is in state STARTED 2025-04-17 01:44:39.589349 | orchestrator | 2025-04-17 01:44:36 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:44:39.589665 | orchestrator | 2025-04-17 01:44:39 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:44:39.589824 | orchestrator | 2025-04-17 01:44:39 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:44:39.589859 | orchestrator | 2025-04-17 01:44:39 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:44:39.589883 | orchestrator | 2025-04-17 01:44:39 | INFO  | Task 67c78fe9-6488-4c35-a194-f1fcdb4ceb78 is in state STARTED 2025-04-17 01:44:39.589908 | orchestrator | 2025-04-17 01:44:39 | INFO  | Task 10102133-0566-4bf9-b5d6-ad4cda836344 is in state STARTED 2025-04-17 01:44:39.589929 | orchestrator | 2025-04-17 01:44:39 | INFO  | Task 02ed3ce5-8f99-4f78-a76f-b09164d9f5fc is in state STARTED 2025-04-17 01:44:39.589950 | orchestrator | 2025-04-17 01:44:39 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:44:42.658711 | orchestrator | 2025-04-17 01:44:42 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:44:42.661008 | orchestrator | 2025-04-17 01:44:42 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:44:42.661082 | orchestrator | 2025-04-17 01:44:42 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:44:42.661111 | orchestrator | 2025-04-17 01:44:42 | INFO  | Task 67c78fe9-6488-4c35-a194-f1fcdb4ceb78 is in state STARTED 2025-04-17 01:44:42.666625 | orchestrator | 2025-04-17 01:44:42 | INFO  | Task 10102133-0566-4bf9-b5d6-ad4cda836344 is in state STARTED 2025-04-17 01:44:45.716858 | orchestrator | 2025-04-17 01:44:42 | INFO  | Task 02ed3ce5-8f99-4f78-a76f-b09164d9f5fc is in state STARTED 2025-04-17 01:44:45.716972 | orchestrator | 2025-04-17 01:44:42 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:44:45.717003 | orchestrator | 2025-04-17 01:44:45 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:44:45.720845 | orchestrator | 2025-04-17 
01:44:45 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:44:45.725292 | orchestrator | 2025-04-17 01:44:45 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:44:45.733370 | orchestrator | 2025-04-17 01:44:45 | INFO  | Task 67c78fe9-6488-4c35-a194-f1fcdb4ceb78 is in state STARTED 2025-04-17 01:44:45.738750 | orchestrator | 2025-04-17 01:44:45 | INFO  | Task 10102133-0566-4bf9-b5d6-ad4cda836344 is in state STARTED 2025-04-17 01:44:45.739860 | orchestrator | 2025-04-17 01:44:45 | INFO  | Task 02ed3ce5-8f99-4f78-a76f-b09164d9f5fc is in state STARTED 2025-04-17 01:44:48.771125 | orchestrator | 2025-04-17 01:44:45 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:44:48.771249 | orchestrator | 2025-04-17 01:44:48 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:44:48.772694 | orchestrator | 2025-04-17 01:44:48 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:44:48.774673 | orchestrator | 2025-04-17 01:44:48 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:44:48.774930 | orchestrator | 2025-04-17 01:44:48 | INFO  | Task 67c78fe9-6488-4c35-a194-f1fcdb4ceb78 is in state SUCCESS 2025-04-17 01:44:48.775742 | orchestrator | 2025-04-17 01:44:48.775773 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-04-17 01:44:48.775788 | orchestrator | 2025-04-17 01:44:48.775802 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-04-17 01:44:48.775816 | orchestrator | Thursday 17 April 2025 01:44:36 +0000 (0:00:00.203) 0:00:00.203 ******** 2025-04-17 01:44:48.775830 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:44:48.775845 | orchestrator | changed: [testbed-manager] 2025-04-17 01:44:48.775858 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:44:48.775872 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:44:48.775886 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:44:48.775899 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:44:48.775913 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:44:48.775926 | orchestrator | 2025-04-17 01:44:48.775940 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-04-17 01:44:48.775954 | orchestrator | Thursday 17 April 2025 01:44:40 +0000 (0:00:03.289) 0:00:03.492 ******** 2025-04-17 01:44:48.775968 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-04-17 01:44:48.775983 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-04-17 01:44:48.776003 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-04-17 01:44:48.776017 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-04-17 01:44:48.776031 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-04-17 01:44:48.776044 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-04-17 01:44:48.776058 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-04-17 01:44:48.776072 | orchestrator | 2025-04-17 01:44:48.776086 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-04-17 01:44:48.776100 | orchestrator | Thursday 17 April 2025 01:44:42 +0000 (0:00:01.912) 0:00:05.404 ******** 2025-04-17 01:44:48.776116 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-17 01:44:40.870695', 'end': '2025-04-17 01:44:40.877380', 'delta': '0:00:00.006685', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-17 01:44:48.776139 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-17 01:44:40.815900', 'end': '2025-04-17 01:44:40.820734', 'delta': '0:00:00.004834', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-17 01:44:48.776172 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-17 01:44:41.197703', 'end': '2025-04-17 01:44:41.205987', 'delta': '0:00:00.008284', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-17 01:44:48.776211 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-17 01:44:41.382486', 'end': '2025-04-17 01:44:41.391199', 'delta': '0:00:00.008713', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 
2025-04-17 01:44:48.776227 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-17 01:44:41.491119', 'end': '2025-04-17 01:44:41.500485', 'delta': '0:00:00.009366', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-17 01:44:48.776241 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-17 01:44:41.612969', 'end': '2025-04-17 01:44:41.619678', 'delta': '0:00:00.006709', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-17 01:44:48.776261 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-17 01:44:41.761431', 'end': '2025-04-17 01:44:41.767875', 'delta': '0:00:00.006444', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-17 01:44:48.776282 | orchestrator | 2025-04-17 01:44:48.776297 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-04-17 01:44:48.776311 | orchestrator | Thursday 17 April 2025 01:44:44 +0000 (0:00:02.252) 0:00:07.657 ******** 2025-04-17 01:44:48.776325 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-04-17 01:44:48.776339 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-04-17 01:44:48.776353 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-04-17 01:44:48.776366 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-04-17 01:44:48.776401 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-04-17 01:44:48.776416 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-04-17 01:44:48.776430 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-04-17 01:44:48.776444 | orchestrator | 2025-04-17 01:44:48.776457 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 01:44:48.776471 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:44:48.776486 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:44:48.776500 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:44:48.776520 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:44:48.777332 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:44:48.777361 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:44:48.777376 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:44:48.777415 | orchestrator | 2025-04-17 01:44:48.777430 | orchestrator | Thursday 17 April 2025 01:44:46 +0000 (0:00:02.159) 0:00:09.816 ******** 2025-04-17 01:44:48.777444 | orchestrator | =============================================================================== 2025-04-17 01:44:48.777458 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.29s 2025-04-17 01:44:48.777472 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.25s 2025-04-17 01:44:48.777485 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.16s 2025-04-17 01:44:48.777499 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 1.91s 2025-04-17 01:44:48.777518 | orchestrator | 2025-04-17 01:44:48 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:44:48.778223 | orchestrator | 2025-04-17 01:44:48 | INFO  | Task 10102133-0566-4bf9-b5d6-ad4cda836344 is in state STARTED 2025-04-17 01:44:48.779021 | orchestrator | 2025-04-17 01:44:48 | INFO  | Task 02ed3ce5-8f99-4f78-a76f-b09164d9f5fc is in state STARTED 2025-04-17 01:44:51.820836 | orchestrator | 2025-04-17 01:44:48 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:44:51.821017 | orchestrator | 2025-04-17 01:44:51 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:44:51.821108 | orchestrator | 2025-04-17 01:44:51 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:44:51.821545 | orchestrator | 2025-04-17 01:44:51 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:44:51.824277 | orchestrator | 2025-04-17 01:44:51 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:44:51.824695 | orchestrator | 2025-04-17 01:44:51 | INFO  | Task 10102133-0566-4bf9-b5d6-ad4cda836344 is in state STARTED 2025-04-17 01:44:51.825826 | orchestrator | 2025-04-17 01:44:51 | INFO  | Task 02ed3ce5-8f99-4f78-a76f-b09164d9f5fc is in state STARTED 2025-04-17 01:44:54.865973 | orchestrator | 2025-04-17 01:44:51 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:44:54.866145 | orchestrator | 2025-04-17 01:44:54 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:44:54.867419 | orchestrator | 2025-04-17 01:44:54 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:44:54.868077 | orchestrator | 2025-04-17 01:44:54 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:44:54.870644 | orchestrator | 2025-04-17 01:44:54 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:44:54.871825 | orchestrator | 2025-04-17 01:44:54 | INFO  | Task 10102133-0566-4bf9-b5d6-ad4cda836344 is in state STARTED 2025-04-17 01:44:54.872484 | orchestrator | 2025-04-17 01:44:54 | INFO  | Task 02ed3ce5-8f99-4f78-a76f-b09164d9f5fc is in state STARTED 2025-04-17 01:44:57.924047 | orchestrator | 2025-04-17 01:44:54 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:44:57.924157 | orchestrator | 2025-04-17 01:44:57 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:44:57.925509 | orchestrator | 2025-04-17 01:44:57 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:44:57.925537 | orchestrator | 2025-04-17 01:44:57 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:44:57.925551 | orchestrator | 2025-04-17 01:44:57 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:44:57.925565 | orchestrator | 2025-04-17 01:44:57 | INFO  | Task 10102133-0566-4bf9-b5d6-ad4cda836344 is in state STARTED 2025-04-17 01:44:57.925584 | orchestrator | 2025-04-17 01:44:57 | INFO  | Task 02ed3ce5-8f99-4f78-a76f-b09164d9f5fc is in state STARTED 2025-04-17 01:45:00.975284 | orchestrator | 2025-04-17 01:44:57 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:45:00.975435 | orchestrator | 2025-04-17 01:45:00 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:45:00.976992 | orchestrator | 2025-04-17 01:45:00 | INFO  | Task 
9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:45:00.980009 | orchestrator | 2025-04-17 01:45:00 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:45:00.982578 | orchestrator | 2025-04-17 01:45:00 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:45:00.986499 | orchestrator | 2025-04-17 01:45:00 | INFO  | Task 10102133-0566-4bf9-b5d6-ad4cda836344 is in state STARTED 2025-04-17 01:45:00.987539 | orchestrator | 2025-04-17 01:45:00 | INFO  | Task 02ed3ce5-8f99-4f78-a76f-b09164d9f5fc is in state STARTED 2025-04-17 01:45:04.046767 | orchestrator | 2025-04-17 01:45:00 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:45:04.046930 | orchestrator | 2025-04-17 01:45:04 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:45:04.051819 | orchestrator | 2025-04-17 01:45:04 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:45:04.054337 | orchestrator | 2025-04-17 01:45:04 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:45:04.054428 | orchestrator | 2025-04-17 01:45:04 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:45:04.054457 | orchestrator | 2025-04-17 01:45:04 | INFO  | Task 10102133-0566-4bf9-b5d6-ad4cda836344 is in state STARTED 2025-04-17 01:45:04.054525 | orchestrator | 2025-04-17 01:45:04 | INFO  | Task 02ed3ce5-8f99-4f78-a76f-b09164d9f5fc is in state STARTED 2025-04-17 01:45:04.054549 | orchestrator | 2025-04-17 01:45:04 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:45:07.126309 | orchestrator | 2025-04-17 01:45:07 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:45:07.130413 | orchestrator | 2025-04-17 01:45:07 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:45:07.132123 | orchestrator | 2025-04-17 01:45:07 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:45:07.134660 | orchestrator | 2025-04-17 01:45:07 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:45:07.137165 | orchestrator | 2025-04-17 01:45:07 | INFO  | Task 10102133-0566-4bf9-b5d6-ad4cda836344 is in state STARTED 2025-04-17 01:45:10.194334 | orchestrator | 2025-04-17 01:45:07 | INFO  | Task 02ed3ce5-8f99-4f78-a76f-b09164d9f5fc is in state STARTED 2025-04-17 01:45:10.194535 | orchestrator | 2025-04-17 01:45:07 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:45:10.194575 | orchestrator | 2025-04-17 01:45:10 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:45:13.241177 | orchestrator | 2025-04-17 01:45:10 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:45:13.241311 | orchestrator | 2025-04-17 01:45:10 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:45:13.241332 | orchestrator | 2025-04-17 01:45:10 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:45:13.241347 | orchestrator | 2025-04-17 01:45:10 | INFO  | Task 10102133-0566-4bf9-b5d6-ad4cda836344 is in state STARTED 2025-04-17 01:45:13.241361 | orchestrator | 2025-04-17 01:45:10 | INFO  | Task 02ed3ce5-8f99-4f78-a76f-b09164d9f5fc is in state SUCCESS 2025-04-17 01:45:13.241495 | orchestrator | 2025-04-17 01:45:10 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:45:13.241527 | orchestrator | 2025-04-17 
01:45:13 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:45:13.243507 | orchestrator | 2025-04-17 01:45:13 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:45:13.245699 | orchestrator | 2025-04-17 01:45:13 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:45:13.245725 | orchestrator | 2025-04-17 01:45:13 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:45:13.245745 | orchestrator | 2025-04-17 01:45:13 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:45:13.249465 | orchestrator | 2025-04-17 01:45:13 | INFO  | Task 10102133-0566-4bf9-b5d6-ad4cda836344 is in state STARTED 2025-04-17 01:45:16.292563 | orchestrator | 2025-04-17 01:45:13 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:45:16.292704 | orchestrator | 2025-04-17 01:45:16 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:45:16.292884 | orchestrator | 2025-04-17 01:45:16 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:45:16.296748 | orchestrator | 2025-04-17 01:45:16 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:45:16.303685 | orchestrator | 2025-04-17 01:45:16 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:45:16.308226 | orchestrator | 2025-04-17 01:45:16 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:45:16.308267 | orchestrator | 2025-04-17 01:45:16 | INFO  | Task 10102133-0566-4bf9-b5d6-ad4cda836344 is in state STARTED 2025-04-17 01:45:19.349594 | orchestrator | 2025-04-17 01:45:16 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:45:19.349749 | orchestrator | 2025-04-17 01:45:19 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:45:19.370925 | orchestrator | 2025-04-17 01:45:19 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:45:19.378221 | orchestrator | 2025-04-17 01:45:19 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:45:22.450325 | orchestrator | 2025-04-17 01:45:19 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:45:22.450472 | orchestrator | 2025-04-17 01:45:19 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:45:22.450493 | orchestrator | 2025-04-17 01:45:19 | INFO  | Task 10102133-0566-4bf9-b5d6-ad4cda836344 is in state STARTED 2025-04-17 01:45:22.450509 | orchestrator | 2025-04-17 01:45:19 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:45:22.450558 | orchestrator | 2025-04-17 01:45:22 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:45:22.451953 | orchestrator | 2025-04-17 01:45:22 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:45:22.453564 | orchestrator | 2025-04-17 01:45:22 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:45:22.459347 | orchestrator | 2025-04-17 01:45:22 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:45:22.460494 | orchestrator | 2025-04-17 01:45:22 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:45:22.460531 | orchestrator | 2025-04-17 01:45:22 | INFO  | Task 10102133-0566-4bf9-b5d6-ad4cda836344 is in state STARTED 
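The long runs of "Task … is in state STARTED" followed by "Wait 1 second(s) until the next check" are a client-side wait loop over the background task IDs. A minimal sketch of that pattern, assuming Celery-style result objects with .id and .state attributes (an assumption for illustration; the real osism client may differ):

    import time

    TERMINAL_STATES = {"SUCCESS", "FAILURE"}

    def wait_for_tasks(results, poll_interval=1):
        # Report every task's state on each pass, drop finished ones,
        # then sleep before the next check -- mirroring the log above.
        pending = list(results)
        while pending:
            for r in pending:
                print(f"Task {r.id} is in state {r.state}")
            pending = [r for r in pending if r.state not in TERMINAL_STATES]
            if pending:
                print(f"Wait {poll_interval} second(s) until the next check")
                time.sleep(poll_interval)
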
2025-04-17 01:45:25.496976 | orchestrator | 2025-04-17 01:45:22 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:45:25.497104 | orchestrator | 2025-04-17 01:45:25 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:45:25.497543 | orchestrator | 2025-04-17 01:45:25 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:45:25.498700 | orchestrator | 2025-04-17 01:45:25 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:45:25.498996 | orchestrator | 2025-04-17 01:45:25 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:45:25.499702 | orchestrator | 2025-04-17 01:45:25 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:45:25.500225 | orchestrator | 2025-04-17 01:45:25 | INFO  | Task 10102133-0566-4bf9-b5d6-ad4cda836344 is in state SUCCESS 2025-04-17 01:45:28.552493 | orchestrator | 2025-04-17 01:45:25 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:45:28.552630 | orchestrator | 2025-04-17 01:45:28 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:45:28.553060 | orchestrator | 2025-04-17 01:45:28 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:45:28.556184 | orchestrator | 2025-04-17 01:45:28 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:45:28.557140 | orchestrator | 2025-04-17 01:45:28 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:45:28.557760 | orchestrator | 2025-04-17 01:45:28 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:45:28.558188 | orchestrator | 2025-04-17 01:45:28 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:45:31.593523 | orchestrator | 2025-04-17 01:45:31 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:45:31.597961 | orchestrator | 2025-04-17 01:45:31 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:45:31.598008 | orchestrator | 2025-04-17 01:45:31 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:45:31.599230 | orchestrator | 2025-04-17 01:45:31 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:45:31.599775 | orchestrator | 2025-04-17 01:45:31 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:45:31.600051 | orchestrator | 2025-04-17 01:45:31 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:45:34.675190 | orchestrator | 2025-04-17 01:45:34 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:45:34.676711 | orchestrator | 2025-04-17 01:45:34 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:45:34.676757 | orchestrator | 2025-04-17 01:45:34 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:45:37.710286 | orchestrator | 2025-04-17 01:45:34 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:45:37.710437 | orchestrator | 2025-04-17 01:45:34 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:45:37.710459 | orchestrator | 2025-04-17 01:45:34 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:45:37.710490 | orchestrator | 2025-04-17 01:45:37 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 
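New task IDs keep appearing as earlier ones reach SUCCESS (e.g. e0b8709f… shows up shortly after 02ed3ce5… finished), which matches the layered plan printed at the start of the nutshell run. A toy sketch of that kind of depth-ordered scheduling; the dependency edges below are illustrative assumptions read off the plan's indentation, not the actual nutshell graph:

    # Toy plan: each service lists the services it waits on.
    plan = {
        "common": [],
        "loadbalancer": ["common"],
        "mariadb-ng": ["loadbalancer"],
        "keystone": ["mariadb-ng"],
        "neutron": ["keystone"],
        "openvswitch": ["common"],
    }

    def schedule(plan):
        # Repeatedly start every service whose dependencies are done;
        # services in the same batch may run in parallel.
        done, batches = set(), []
        while len(done) < len(plan):
            batch = sorted(t for t, deps in plan.items()
                           if t not in done and all(d in done for d in deps))
            if not batch:
                raise ValueError("cycle in plan")
            batches.append(batch)
            done.update(batch)
        return batches

    print(schedule(plan))
    # [['common'], ['loadbalancer', 'openvswitch'], ['mariadb-ng'], ...]
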
2025-04-17 01:45:37.710580 | orchestrator | 2025-04-17 01:45:37 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:45:37.710966 | orchestrator | 2025-04-17 01:45:37 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:45:37.711432 | orchestrator | 2025-04-17 01:45:37 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:45:37.711976 | orchestrator | 2025-04-17 01:45:37 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:45:37.712628 | orchestrator | 2025-04-17 01:45:37 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:45:40.761571 | orchestrator | 2025-04-17 01:45:40 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:45:40.763146 | orchestrator | 2025-04-17 01:45:40 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:45:40.764618 | orchestrator | 2025-04-17 01:45:40 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:45:40.765732 | orchestrator | 2025-04-17 01:45:40 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:45:40.767234 | orchestrator | 2025-04-17 01:45:40 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:45:43.810298 | orchestrator | 2025-04-17 01:45:40 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:45:43.810450 | orchestrator | 2025-04-17 01:45:43 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:45:43.811065 | orchestrator | 2025-04-17 01:45:43 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:45:43.811101 | orchestrator | 2025-04-17 01:45:43 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:45:43.811435 | orchestrator | 2025-04-17 01:45:43 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:45:43.814659 | orchestrator | 2025-04-17 01:45:43 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:45:46.852133 | orchestrator | 2025-04-17 01:45:43 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:45:46.852311 | orchestrator | 2025-04-17 01:45:46 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:45:46.852443 | orchestrator | 2025-04-17 01:45:46 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:45:46.854203 | orchestrator | 2025-04-17 01:45:46 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state STARTED 2025-04-17 01:45:46.854856 | orchestrator | 2025-04-17 01:45:46 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:45:46.855536 | orchestrator | 2025-04-17 01:45:46 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state STARTED 2025-04-17 01:45:46.855605 | orchestrator | 2025-04-17 01:45:46 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:45:49.898682 | orchestrator | 2025-04-17 01:45:49 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:45:49.900079 | orchestrator | 2025-04-17 01:45:49 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:45:49.901503 | orchestrator | 2025-04-17 01:45:49 | INFO  | Task 9ccdf46d-e576-400f-82a9-17077aca1c24 is in state SUCCESS 2025-04-17 01:45:49.902793 | orchestrator | 2025-04-17 01:45:49.902837 | orchestrator | 2025-04-17 01:45:49.902852 
| orchestrator | PLAY [Apply role homer] ******************************************************** 2025-04-17 01:45:49.902866 | orchestrator | 2025-04-17 01:45:49.902881 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-04-17 01:45:49.902902 | orchestrator | Thursday 17 April 2025 01:44:36 +0000 (0:00:00.228) 0:00:00.228 ******** 2025-04-17 01:45:49.902917 | orchestrator | ok: [testbed-manager] => { 2025-04-17 01:45:49.902933 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-04-17 01:45:49.902948 | orchestrator | } 2025-04-17 01:45:49.902962 | orchestrator | 2025-04-17 01:45:49.902975 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-04-17 01:45:49.902989 | orchestrator | Thursday 17 April 2025 01:44:36 +0000 (0:00:00.287) 0:00:00.516 ******** 2025-04-17 01:45:49.903003 | orchestrator | ok: [testbed-manager] 2025-04-17 01:45:49.903017 | orchestrator | 2025-04-17 01:45:49.903069 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-04-17 01:45:49.903084 | orchestrator | Thursday 17 April 2025 01:44:37 +0000 (0:00:01.367) 0:00:01.884 ******** 2025-04-17 01:45:49.903098 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-04-17 01:45:49.903112 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-04-17 01:45:49.903126 | orchestrator | 2025-04-17 01:45:49.903140 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-04-17 01:45:49.903154 | orchestrator | Thursday 17 April 2025 01:44:38 +0000 (0:00:00.843) 0:00:02.727 ******** 2025-04-17 01:45:49.903185 | orchestrator | changed: [testbed-manager] 2025-04-17 01:45:49.903199 | orchestrator | 2025-04-17 01:45:49.903213 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-04-17 01:45:49.903226 | orchestrator | Thursday 17 April 2025 01:44:40 +0000 (0:00:01.913) 0:00:04.641 ******** 2025-04-17 01:45:49.903240 | orchestrator | changed: [testbed-manager] 2025-04-17 01:45:49.903254 | orchestrator | 2025-04-17 01:45:49.903267 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-04-17 01:45:49.903281 | orchestrator | Thursday 17 April 2025 01:44:41 +0000 (0:00:01.375) 0:00:06.017 ******** 2025-04-17 01:45:49.903294 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
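"FAILED - RETRYING: … (10 retries left)." followed by an eventual ok is Ansible's retries/until loop on the "Manage homer service" task. The same pattern in plain Python, as a generic sketch (the countdown accounting is approximate; Ansible's own bookkeeping may differ slightly):

    import time

    def retry(check, retries=10, delay=5):
        # Run check() up to retries+1 times, announcing the remaining
        # attempts after each failure, like Ansible's retry output.
        for remaining in range(retries, -1, -1):
            if check():
                return True
            if remaining:
                print(f"FAILED - RETRYING ({remaining} retries left).")
                time.sleep(delay)
        return False
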
2025-04-17 01:45:49.903308 | orchestrator | ok: [testbed-manager] 2025-04-17 01:45:49.903321 | orchestrator | 2025-04-17 01:45:49.903335 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-04-17 01:45:49.903412 | orchestrator | Thursday 17 April 2025 01:45:06 +0000 (0:00:24.694) 0:00:30.711 ******** 2025-04-17 01:45:49.903429 | orchestrator | changed: [testbed-manager] 2025-04-17 01:45:49.903446 | orchestrator | 2025-04-17 01:45:49.903462 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 01:45:49.903478 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:45:49.903496 | orchestrator | 2025-04-17 01:45:49.903512 | orchestrator | Thursday 17 April 2025 01:45:08 +0000 (0:00:02.180) 0:00:32.892 ******** 2025-04-17 01:45:49.903527 | orchestrator | =============================================================================== 2025-04-17 01:45:49.903543 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.69s 2025-04-17 01:45:49.903558 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.18s 2025-04-17 01:45:49.903573 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.91s 2025-04-17 01:45:49.903588 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.38s 2025-04-17 01:45:49.903604 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.37s 2025-04-17 01:45:49.903625 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.84s 2025-04-17 01:45:49.903640 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.29s 2025-04-17 01:45:49.903656 | orchestrator | 2025-04-17 01:45:49.903671 | orchestrator | 2025-04-17 01:45:49.903686 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-04-17 01:45:49.903702 | orchestrator | 2025-04-17 01:45:49.903717 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-04-17 01:45:49.903732 | orchestrator | Thursday 17 April 2025 01:44:36 +0000 (0:00:00.233) 0:00:00.233 ******** 2025-04-17 01:45:49.903747 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-04-17 01:45:49.903764 | orchestrator | 2025-04-17 01:45:49.903779 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-04-17 01:45:49.903793 | orchestrator | Thursday 17 April 2025 01:44:36 +0000 (0:00:00.393) 0:00:00.627 ******** 2025-04-17 01:45:49.903806 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-04-17 01:45:49.903820 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-04-17 01:45:49.903834 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-04-17 01:45:49.903848 | orchestrator | 2025-04-17 01:45:49.903862 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-04-17 01:45:49.903875 | orchestrator | Thursday 17 April 2025 01:44:38 +0000 (0:00:01.212) 0:00:01.839 ******** 2025-04-17 01:45:49.903889 | orchestrator | changed: [testbed-manager] 
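The handlers below restart the openstackclient service and then block on "Wait for an healthy service". That wait can be sketched against the Docker CLI as follows; the container name, timeout, and interval are illustrative assumptions, and the role's real check may differ:

    import subprocess
    import time

    def wait_healthy(container, timeout=120, interval=2):
        # Poll `docker inspect` until the container's healthcheck reports
        # "healthy" or the timeout expires. A container without a
        # healthcheck makes the template fail; treat that as not ready.
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.run(
                ["docker", "inspect", "--format",
                 "{{.State.Health.Status}}", container],
                capture_output=True, text=True,
            )
            if out.returncode == 0 and out.stdout.strip() == "healthy":
                return True
            time.sleep(interval)
        return False

    # Example (hypothetical container name):
    # wait_healthy("openstackclient")
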
2025-04-17 01:45:49.903902 | orchestrator | 2025-04-17 01:45:49.903916 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-04-17 01:45:49.903938 | orchestrator | Thursday 17 April 2025 01:44:39 +0000 (0:00:01.065) 0:00:02.904 ******** 2025-04-17 01:45:49.903952 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-04-17 01:45:49.903966 | orchestrator | ok: [testbed-manager] 2025-04-17 01:45:49.903980 | orchestrator | 2025-04-17 01:45:49.904001 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-04-17 01:45:49.904014 | orchestrator | Thursday 17 April 2025 01:45:16 +0000 (0:00:37.648) 0:00:40.553 ******** 2025-04-17 01:45:49.904026 | orchestrator | changed: [testbed-manager] 2025-04-17 01:45:49.904039 | orchestrator | 2025-04-17 01:45:49.904051 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-04-17 01:45:49.904063 | orchestrator | Thursday 17 April 2025 01:45:17 +0000 (0:00:00.875) 0:00:41.428 ******** 2025-04-17 01:45:49.904076 | orchestrator | ok: [testbed-manager] 2025-04-17 01:45:49.904088 | orchestrator | 2025-04-17 01:45:49.904100 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-04-17 01:45:49.904112 | orchestrator | Thursday 17 April 2025 01:45:18 +0000 (0:00:00.709) 0:00:42.137 ******** 2025-04-17 01:45:49.904124 | orchestrator | changed: [testbed-manager] 2025-04-17 01:45:49.904136 | orchestrator | 2025-04-17 01:45:49.904148 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-04-17 01:45:49.904160 | orchestrator | Thursday 17 April 2025 01:45:20 +0000 (0:00:01.896) 0:00:44.034 ******** 2025-04-17 01:45:49.904172 | orchestrator | changed: [testbed-manager] 2025-04-17 01:45:49.904184 | orchestrator | 2025-04-17 01:45:49.904196 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-04-17 01:45:49.904209 | orchestrator | Thursday 17 April 2025 01:45:21 +0000 (0:00:00.777) 0:00:44.811 ******** 2025-04-17 01:45:49.904220 | orchestrator | changed: [testbed-manager] 2025-04-17 01:45:49.904233 | orchestrator | 2025-04-17 01:45:49.904245 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-04-17 01:45:49.904257 | orchestrator | Thursday 17 April 2025 01:45:21 +0000 (0:00:00.720) 0:00:45.532 ******** 2025-04-17 01:45:49.904268 | orchestrator | ok: [testbed-manager] 2025-04-17 01:45:49.904280 | orchestrator | 2025-04-17 01:45:49.904292 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 01:45:49.904305 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:45:49.904317 | orchestrator | 2025-04-17 01:45:49.904329 | orchestrator | Thursday 17 April 2025 01:45:22 +0000 (0:00:00.404) 0:00:45.937 ******** 2025-04-17 01:45:49.904341 | orchestrator | =============================================================================== 2025-04-17 01:45:49.904368 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 37.65s 2025-04-17 01:45:49.904380 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.90s 2025-04-17 01:45:49.904392 | orchestrator | osism.services.openstackclient : Create 
required directories ------------ 1.21s 2025-04-17 01:45:49.904404 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.07s 2025-04-17 01:45:49.904416 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.88s 2025-04-17 01:45:49.904433 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.78s 2025-04-17 01:45:49.904446 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.72s 2025-04-17 01:45:49.904458 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.71s 2025-04-17 01:45:49.904470 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.40s 2025-04-17 01:45:49.904483 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.39s 2025-04-17 01:45:49.904495 | orchestrator | 2025-04-17 01:45:49.904507 | orchestrator | 2025-04-17 01:45:49.904519 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-17 01:45:49.904531 | orchestrator | 2025-04-17 01:45:49.904543 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-17 01:45:49.904561 | orchestrator | Thursday 17 April 2025 01:44:36 +0000 (0:00:00.240) 0:00:00.240 ******** 2025-04-17 01:45:49.904574 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-04-17 01:45:49.904586 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-04-17 01:45:49.904598 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-04-17 01:45:49.904610 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-04-17 01:45:49.904622 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-04-17 01:45:49.904635 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-04-17 01:45:49.904647 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-04-17 01:45:49.904659 | orchestrator | 2025-04-17 01:45:49.904671 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-04-17 01:45:49.904684 | orchestrator | 2025-04-17 01:45:49.904696 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-04-17 01:45:49.904708 | orchestrator | Thursday 17 April 2025 01:44:37 +0000 (0:00:01.174) 0:00:01.414 ******** 2025-04-17 01:45:49.904731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:45:49.904745 | orchestrator | 2025-04-17 01:45:49.904757 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-04-17 01:45:49.904770 | orchestrator | Thursday 17 April 2025 01:44:39 +0000 (0:00:01.807) 0:00:03.222 ******** 2025-04-17 01:45:49.904782 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:45:49.904794 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:45:49.904806 | orchestrator | ok: [testbed-manager] 2025-04-17 01:45:49.904819 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:45:49.904831 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:45:49.904843 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:45:49.904855 | 
orchestrator | ok: [testbed-node-5] 2025-04-17 01:45:49.904867 | orchestrator | 2025-04-17 01:45:49.904879 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-04-17 01:45:49.904897 | orchestrator | Thursday 17 April 2025 01:44:41 +0000 (0:00:02.137) 0:00:05.359 ******** 2025-04-17 01:45:49.904910 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:45:49.904922 | orchestrator | ok: [testbed-manager] 2025-04-17 01:45:49.904934 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:45:49.904946 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:45:49.904958 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:45:49.904971 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:45:49.904983 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:45:49.904995 | orchestrator | 2025-04-17 01:45:49.905008 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-04-17 01:45:49.905020 | orchestrator | Thursday 17 April 2025 01:44:45 +0000 (0:00:03.446) 0:00:08.806 ******** 2025-04-17 01:45:49.905032 | orchestrator | changed: [testbed-manager] 2025-04-17 01:45:49.905045 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:45:49.905057 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:45:49.905069 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:45:49.905086 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:45:49.905099 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:45:49.905111 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:45:49.905123 | orchestrator | 2025-04-17 01:45:49.905135 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-04-17 01:45:49.905148 | orchestrator | Thursday 17 April 2025 01:44:47 +0000 (0:00:01.921) 0:00:10.728 ******** 2025-04-17 01:45:49.905160 | orchestrator | changed: [testbed-manager] 2025-04-17 01:45:49.905172 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:45:49.905185 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:45:49.905197 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:45:49.905214 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:45:49.905227 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:45:49.905239 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:45:49.905251 | orchestrator | 2025-04-17 01:45:49.905264 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-04-17 01:45:49.905276 | orchestrator | Thursday 17 April 2025 01:44:57 +0000 (0:00:09.921) 0:00:20.650 ******** 2025-04-17 01:45:49.905288 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:45:49.905300 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:45:49.905312 | orchestrator | changed: [testbed-manager] 2025-04-17 01:45:49.905325 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:45:49.905337 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:45:49.905368 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:45:49.905380 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:45:49.905393 | orchestrator | 2025-04-17 01:45:49.905405 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-04-17 01:45:49.905417 | orchestrator | Thursday 17 April 2025 01:45:28 +0000 (0:00:31.588) 0:00:52.239 ******** 2025-04-17 01:45:49.905430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:45:49.905447 | orchestrator | 2025-04-17 01:45:49.905459 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-04-17 01:45:49.905471 | orchestrator | Thursday 17 April 2025 01:45:30 +0000 (0:00:01.560) 0:00:53.799 ******** 2025-04-17 01:45:49.905483 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-04-17 01:45:49.905495 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-04-17 01:45:49.905507 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-04-17 01:45:49.905519 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-04-17 01:45:49.905532 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-04-17 01:45:49.905544 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-04-17 01:45:49.905556 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-04-17 01:45:49.905568 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-04-17 01:45:49.905580 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-04-17 01:45:49.905592 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-04-17 01:45:49.905604 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-04-17 01:45:49.905616 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-04-17 01:45:49.905628 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-04-17 01:45:49.905640 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-04-17 01:45:49.905653 | orchestrator | 2025-04-17 01:45:49.905665 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-04-17 01:45:49.905677 | orchestrator | Thursday 17 April 2025 01:45:35 +0000 (0:00:04.896) 0:00:58.696 ******** 2025-04-17 01:45:49.905690 | orchestrator | ok: [testbed-manager] 2025-04-17 01:45:49.905702 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:45:49.905714 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:45:49.905726 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:45:49.905738 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:45:49.905750 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:45:49.905763 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:45:49.905775 | orchestrator | 2025-04-17 01:45:49.905787 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-04-17 01:45:49.905799 | orchestrator | Thursday 17 April 2025 01:45:36 +0000 (0:00:01.174) 0:00:59.871 ******** 2025-04-17 01:45:49.905811 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:45:49.905824 | orchestrator | changed: [testbed-manager] 2025-04-17 01:45:49.905836 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:45:49.905855 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:45:49.905867 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:45:49.905879 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:45:49.905891 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:45:49.905903 | orchestrator | 2025-04-17 01:45:49.905915 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-04-17 01:45:49.905928 | orchestrator | Thursday 17 April 2025 01:45:38 +0000 (0:00:01.839) 0:01:01.710 ******** 2025-04-17 01:45:49.905940 | 
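
[Annotation] The configuration tasks above follow a common render-then-opt-out pattern: template netdata.conf and stream.conf into place, then create the /etc/netdata/.opt-out-from-anonymous-statistics marker file that netdata checks before sending telemetry. A sketch under assumed names (the template sources, destination directory and register name are not shown in the log):

    - name: Copy configuration files
      ansible.builtin.template:
        src: "{{ item }}.j2"                  # assumed template names
        dest: "/etc/netdata/{{ item }}"       # assumed destination
      loop:
        - netdata.conf
        - stream.conf
      notify:
        - Restart service netdata

    - name: Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status
      ansible.builtin.stat:
        path: /etc/netdata/.opt-out-from-anonymous-statistics
      register: optout                        # register name assumed

    - name: Opt out from anonymous statistics
      ansible.builtin.file:
        path: /etc/netdata/.opt-out-from-anonymous-statistics
        state: touch
      when: not optout.stat.exists

The stat/touch split explains the ok/changed pair in the log: the retrieve task reports ok on every host, while the opt-out task reports changed because the marker file did not exist yet.
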
orchestrator | ok: [testbed-manager] 2025-04-17 01:45:49.905952 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:45:49.905964 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:45:49.905977 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:45:49.905994 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:45:49.906007 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:45:49.906061 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:45:49.906076 | orchestrator | 2025-04-17 01:45:49.906089 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-04-17 01:45:49.906106 | orchestrator | Thursday 17 April 2025 01:45:39 +0000 (0:00:01.096) 0:01:02.807 ******** 2025-04-17 01:45:49.906119 | orchestrator | ok: [testbed-manager] 2025-04-17 01:45:49.906132 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:45:49.906144 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:45:49.906156 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:45:49.906168 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:45:49.906181 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:45:49.906193 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:45:49.906205 | orchestrator | 2025-04-17 01:45:49.906217 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-04-17 01:45:49.906229 | orchestrator | Thursday 17 April 2025 01:45:41 +0000 (0:00:02.078) 0:01:04.886 ******** 2025-04-17 01:45:49.906242 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-04-17 01:45:49.906256 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:45:49.906268 | orchestrator | 2025-04-17 01:45:49.906281 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-04-17 01:45:49.906293 | orchestrator | Thursday 17 April 2025 01:45:42 +0000 (0:00:01.625) 0:01:06.511 ******** 2025-04-17 01:45:49.906305 | orchestrator | changed: [testbed-manager] 2025-04-17 01:45:49.906318 | orchestrator | 2025-04-17 01:45:49.906330 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-04-17 01:45:49.906342 | orchestrator | Thursday 17 April 2025 01:45:45 +0000 (0:00:02.403) 0:01:08.915 ******** 2025-04-17 01:45:49.906369 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:45:49.906382 | orchestrator | changed: [testbed-manager] 2025-04-17 01:45:49.906394 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:45:49.906406 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:45:49.906418 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:45:49.906431 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:45:49.906451 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:45:49.906465 | orchestrator | 2025-04-17 01:45:49.906477 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 01:45:49.906490 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:45:49.906503 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:45:49.906516 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-04-17 01:45:49.906533 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:45:49.906552 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:45:49.906565 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:45:49.906577 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:45:49.906590 | orchestrator | 2025-04-17 01:45:49.906602 | orchestrator | Thursday 17 April 2025 01:45:48 +0000 (0:00:03.112) 0:01:12.028 ******** 2025-04-17 01:45:49.906615 | orchestrator | =============================================================================== 2025-04-17 01:45:49.906627 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 31.59s 2025-04-17 01:45:49.906639 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.92s 2025-04-17 01:45:49.906652 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.90s 2025-04-17 01:45:49.906664 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.45s 2025-04-17 01:45:49.906677 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.11s 2025-04-17 01:45:49.906689 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.40s 2025-04-17 01:45:49.906702 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.14s 2025-04-17 01:45:49.906714 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.08s 2025-04-17 01:45:49.906726 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.92s 2025-04-17 01:45:49.906739 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.84s 2025-04-17 01:45:49.906751 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.81s 2025-04-17 01:45:49.906764 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.63s 2025-04-17 01:45:49.906776 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.56s 2025-04-17 01:45:49.906788 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.17s 2025-04-17 01:45:49.906807 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.17s 2025-04-17 01:45:52.938286 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.10s 2025-04-17 01:45:52.938442 | orchestrator | 2025-04-17 01:45:49 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED 2025-04-17 01:45:52.938465 | orchestrator | 2025-04-17 01:45:49 | INFO  | Task 15808763-b653-419b-b0d6-762432781d62 is in state SUCCESS 2025-04-17 01:45:52.938481 | orchestrator | 2025-04-17 01:45:49 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:45:52.938511 | orchestrator | 2025-04-17 01:45:52 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:45:52.939855 | orchestrator | 2025-04-17 01:45:52 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:45:52.941121 | orchestrator | 
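
[Annotation] Note the server/client split in this play: server.yml was included only for testbed-manager (presumably the netdata streaming parent that receives metrics from the nodes via stream.conf), which is why "Set sysctl vm.max_map_count parameter" reported changed for the manager alone. Raising vm.max_map_count is typically done with the ansible.posix.sysctl module; a sketch with an assumed value (the actual value is not printed in the log):

    - name: Set sysctl vm.max_map_count parameter
      ansible.posix.sysctl:
        name: vm.max_map_count
        value: "262144"      # assumed value, not shown in the log
        sysctl_set: true
        state: present
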
2025-04-17 01:45:52 | INFO  | Task 92c53b8f-623d-414b-aedb-217ff6fab2ca is in state STARTED
2025-04-17 01:45:52 | INFO  | Wait 1 second(s) until the next check
[... status polls repeated at roughly three-second intervals from 01:45:55 to 01:46:29; tasks f18b46f3-1ede-402c-be17-6b8a3a0b04b7, e0b8709f-1bcf-4f73-b727-9acc58049e77 and 92c53b8f-623d-414b-aedb-217ff6fab2ca remained in state STARTED throughout ...]
2025-04-17 01:46:32 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED
2025-04-17 01:46:32 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:46:32 | INFO  | Task dd2979ca-bbf2-481a-9fcb-fc16c328470e is in state STARTED
2025-04-17 01:46:32 | INFO  | Task d069001c-f33f-4397-9d75-217b4ec64953 is in state STARTED
2025-04-17 01:46:32 | INFO  | Task
92c53b8f-623d-414b-aedb-217ff6fab2ca is in state SUCCESS 2025-04-17 01:46:32.583748 | orchestrator | 2025-04-17 01:46:32.583808 | orchestrator | 2025-04-17 01:46:32.583824 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-04-17 01:46:32.583840 | orchestrator | 2025-04-17 01:46:32.583854 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-04-17 01:46:32.583868 | orchestrator | Thursday 17 April 2025 01:44:51 +0000 (0:00:00.141) 0:00:00.141 ******** 2025-04-17 01:46:32.583883 | orchestrator | ok: [testbed-manager] 2025-04-17 01:46:32.583901 | orchestrator | 2025-04-17 01:46:32.583915 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-04-17 01:46:32.583929 | orchestrator | Thursday 17 April 2025 01:44:52 +0000 (0:00:00.888) 0:00:01.030 ******** 2025-04-17 01:46:32.583944 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-04-17 01:46:32.583958 | orchestrator | 2025-04-17 01:46:32.583972 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-04-17 01:46:32.583986 | orchestrator | Thursday 17 April 2025 01:44:52 +0000 (0:00:00.536) 0:00:01.566 ******** 2025-04-17 01:46:32.584021 | orchestrator | changed: [testbed-manager] 2025-04-17 01:46:32.584036 | orchestrator | 2025-04-17 01:46:32.584058 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-04-17 01:46:32.584072 | orchestrator | Thursday 17 April 2025 01:44:54 +0000 (0:00:01.528) 0:00:03.095 ******** 2025-04-17 01:46:32.584086 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-04-17 01:46:32.584100 | orchestrator | ok: [testbed-manager] 2025-04-17 01:46:32.584114 | orchestrator | 2025-04-17 01:46:32.584128 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-04-17 01:46:32.584142 | orchestrator | Thursday 17 April 2025 01:45:44 +0000 (0:00:50.077) 0:00:53.173 ******** 2025-04-17 01:46:32.584155 | orchestrator | changed: [testbed-manager] 2025-04-17 01:46:32.584169 | orchestrator | 2025-04-17 01:46:32.584183 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 01:46:32.584197 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:46:32.584212 | orchestrator | 2025-04-17 01:46:32.584226 | orchestrator | Thursday 17 April 2025 01:45:47 +0000 (0:00:03.433) 0:00:56.606 ******** 2025-04-17 01:46:32.584240 | orchestrator | =============================================================================== 2025-04-17 01:46:32.584254 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 50.08s 2025-04-17 01:46:32.584275 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.43s 2025-04-17 01:46:32.584289 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.53s 2025-04-17 01:46:32.584302 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.89s 2025-04-17 01:46:32.584317 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.54s 2025-04-17 01:46:32.584357 | orchestrator | 2025-04-17 01:46:32.584373 | orchestrator | 2025-04-17 01:46:32.584388 | orchestrator | PLAY [Apply role 
common] ******************************************************* 2025-04-17 01:46:32.584403 | orchestrator | 2025-04-17 01:46:32.584419 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-04-17 01:46:32.584434 | orchestrator | Thursday 17 April 2025 01:44:32 +0000 (0:00:00.426) 0:00:00.426 ******** 2025-04-17 01:46:32.584449 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:46:32.584465 | orchestrator | 2025-04-17 01:46:32.584481 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-04-17 01:46:32.584504 | orchestrator | Thursday 17 April 2025 01:44:33 +0000 (0:00:01.257) 0:00:01.684 ******** 2025-04-17 01:46:32.584519 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-17 01:46:32.584535 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-17 01:46:32.584550 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-17 01:46:32.584571 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-17 01:46:32.584587 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-17 01:46:32.584602 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-17 01:46:32.584617 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-17 01:46:32.584633 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-17 01:46:32.584648 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-17 01:46:32.584664 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-17 01:46:32.584678 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-17 01:46:32.584691 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-17 01:46:32.584715 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-17 01:46:32.584729 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-17 01:46:32.584743 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-17 01:46:32.584757 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-17 01:46:32.584771 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-17 01:46:32.584795 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-17 01:46:32.584812 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-17 01:46:32.584835 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-17 01:46:32.584860 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-17 01:46:32.584882 | orchestrator | 2025-04-17 01:46:32.584908 | orchestrator | TASK [common : include_tasks] 
************************************************** 2025-04-17 01:46:32.584923 | orchestrator | Thursday 17 April 2025 01:44:37 +0000 (0:00:03.440) 0:00:05.125 ******** 2025-04-17 01:46:32.584937 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:46:32.584958 | orchestrator | 2025-04-17 01:46:32.584972 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-04-17 01:46:32.584985 | orchestrator | Thursday 17 April 2025 01:44:38 +0000 (0:00:01.579) 0:00:06.704 ******** 2025-04-17 01:46:32.585003 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.585023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.585038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.585053 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.585115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.585131 | orchestrator | 
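
[Annotation] The item shape in the output above ({'key': 'fluentd', 'value': {...}}) is the signature of Ansible's dict2items filter: the service-cert-copy role iterates over a dictionary of service definitions (fluentd, kolla-toolbox, cron) and copies certificates into each service's config directory. A sketch of the loop, with assumed variable names and destination layout:

    - name: "common | Copying over extra CA certificates"
      ansible.builtin.copy:
        src: "{{ kolla_certificates_dir }}/ca/"             # assumed source variable
        dest: "/etc/kolla/{{ item.key }}/ca-certificates/"  # assumed layout
      loop: "{{ common_services | dict2items }}"            # yields {'key': ..., 'value': ...}
      when: item.value.enabled | bool
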
[item payloads below are the fluentd, kolla-toolbox and cron service definitions already shown in full above; duplicate payloads and console timestamps trimmed]
changed: [testbed-node-4] => (item=fluentd)
changed: [testbed-node-5] => (item=fluentd)
changed: [testbed-manager] => (item=kolla-toolbox)
changed: [testbed-node-0] => (item=kolla-toolbox)
changed: [testbed-node-2] => (item=kolla-toolbox)
changed: [testbed-node-1] => (item=kolla-toolbox)
changed: [testbed-node-3] => (item=kolla-toolbox)
changed: [testbed-node-4] => (item=kolla-toolbox)
changed: [testbed-manager] => (item=cron)
changed: [testbed-node-5] => (item=kolla-toolbox)
changed: [testbed-node-0] => (item=cron)
changed: [testbed-node-1] => (item=cron)
changed: [testbed-node-2] => (item=cron)
changed: [testbed-node-3] => (item=cron)
changed: [testbed-node-4] => (item=cron)
changed: [testbed-node-5] => (item=cron)

TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
Thursday 17 April 2025 01:44:43 +0000 (0:00:04.767) 0:00:11.472 ********
skipping: [testbed-manager], [testbed-node-0] .. [testbed-node-5] => (item=fluentd, kolla-toolbox, cron) [every item skipped on every host]

TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
Thursday 17 April 2025 01:44:45 +0000 (0:00:01.705) 0:00:13.177 ********
skipping: [testbed-manager], [testbed-node-0] .. [testbed-node-5] => (item=fluentd, kolla-toolbox, cron) [every item skipped on every host]

TASK [common : Copying over /run subdirectories conf] **************************
Thursday 17 April 2025 01:44:47 +0000 (0:00:02.414) 0:00:15.592 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [common : Restart systemd-tmpfiles] ***************************************
Thursday 17
April 2025 01:44:48 +0000 (0:00:01.013) 0:00:16.605 ******** 2025-04-17 01:46:32.587641 | orchestrator | skipping: [testbed-manager] 2025-04-17 01:46:32.587656 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:46:32.587671 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:46:32.587686 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:46:32.587702 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:46:32.587716 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:46:32.587730 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:46:32.587743 | orchestrator | 2025-04-17 01:46:32.587757 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-04-17 01:46:32.587771 | orchestrator | Thursday 17 April 2025 01:44:49 +0000 (0:00:00.951) 0:00:17.556 ******** 2025-04-17 01:46:32.587785 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:46:32.587799 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:46:32.587813 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:46:32.587826 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:46:32.587840 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:46:32.587854 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:46:32.587867 | orchestrator | changed: [testbed-manager] 2025-04-17 01:46:32.587887 | orchestrator | 2025-04-17 01:46:32.587902 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-04-17 01:46:32.587915 | orchestrator | Thursday 17 April 2025 01:45:20 +0000 (0:00:30.850) 0:00:48.407 ******** 2025-04-17 01:46:32.587929 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:46:32.587949 | orchestrator | ok: [testbed-manager] 2025-04-17 01:46:32.587964 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:46:32.587978 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:46:32.587992 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:46:32.588005 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:46:32.588019 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:46:32.588033 | orchestrator | 2025-04-17 01:46:32.588046 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-04-17 01:46:32.588061 | orchestrator | Thursday 17 April 2025 01:45:23 +0000 (0:00:02.546) 0:00:50.954 ******** 2025-04-17 01:46:32.588074 | orchestrator | ok: [testbed-manager] 2025-04-17 01:46:32.588088 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:46:32.588101 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:46:32.588122 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:46:32.588136 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:46:32.588149 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:46:32.588163 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:46:32.588176 | orchestrator | 2025-04-17 01:46:32.588190 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-04-17 01:46:32.588204 | orchestrator | Thursday 17 April 2025 01:45:24 +0000 (0:00:01.000) 0:00:51.955 ******** 2025-04-17 01:46:32.588218 | orchestrator | skipping: [testbed-manager] 2025-04-17 01:46:32.588232 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:46:32.588246 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:46:32.588259 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:46:32.588273 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:46:32.588286 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:46:32.588300 
| orchestrator | skipping: [testbed-node-5] 2025-04-17 01:46:32.588314 | orchestrator | 2025-04-17 01:46:32.588346 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-04-17 01:46:32.588361 | orchestrator | Thursday 17 April 2025 01:45:24 +0000 (0:00:00.895) 0:00:52.850 ******** 2025-04-17 01:46:32.588375 | orchestrator | skipping: [testbed-manager] 2025-04-17 01:46:32.588388 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:46:32.588402 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:46:32.588416 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:46:32.588429 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:46:32.588443 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:46:32.588456 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:46:32.588470 | orchestrator | 2025-04-17 01:46:32.588484 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-04-17 01:46:32.588498 | orchestrator | Thursday 17 April 2025 01:45:25 +0000 (0:00:00.717) 0:00:53.568 ******** 2025-04-17 01:46:32.588513 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.588527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.588554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.588569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.588591 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.588606 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.588621 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.588635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.588649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.588674 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.588689 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.588723 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.588738 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.588752 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.588767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.588781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.588802 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.588821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.588836 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.588863 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.588878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.588892 | orchestrator | 2025-04-17 01:46:32.588906 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-04-17 01:46:32.588920 | orchestrator | Thursday 17 April 2025 01:45:29 +0000 (0:00:04.105) 0:00:57.674 ******** 2025-04-17 01:46:32.588934 | orchestrator | [WARNING]: Skipped 2025-04-17 01:46:32.588948 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-04-17 01:46:32.588963 | orchestrator | to this access issue: 2025-04-17 01:46:32.588977 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-04-17 01:46:32.588991 | orchestrator | directory 2025-04-17 01:46:32.589005 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-17 01:46:32.589019 | orchestrator | 2025-04-17 01:46:32.589032 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-04-17 01:46:32.589046 | orchestrator | Thursday 17 April 2025 01:45:30 +0000 (0:00:01.102) 0:00:58.777 ******** 2025-04-17 01:46:32.589060 | orchestrator | [WARNING]: Skipped 2025-04-17 01:46:32.589074 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-04-17 01:46:32.589088 | orchestrator | to this access issue: 2025-04-17 01:46:32.589102 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-04-17 01:46:32.589115 | orchestrator | directory 2025-04-17 01:46:32.589129 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-17 01:46:32.589143 | orchestrator | 2025-04-17 01:46:32.589163 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-04-17 01:46:32.589183 | orchestrator | Thursday 17 April 2025 01:45:31 +0000 (0:00:00.539) 0:00:59.316 ******** 2025-04-17 01:46:32.589198 | orchestrator | [WARNING]: Skipped 2025-04-17 01:46:32.589211 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-04-17 01:46:32.589225 | orchestrator | to this access issue: 2025-04-17 01:46:32.589239 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-04-17 01:46:32.589253 | orchestrator | directory 2025-04-17 01:46:32.589267 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-17 01:46:32.589281 | orchestrator | 2025-04-17 01:46:32.589295 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-04-17 01:46:32.589309 | orchestrator | Thursday 17 April 2025 01:45:31 +0000 (0:00:00.479) 0:00:59.795 ******** 2025-04-17 01:46:32.589369 | orchestrator | [WARNING]: Skipped 2025-04-17 01:46:32.589385 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-04-17 01:46:32.589400 | orchestrator | to this access issue: 2025-04-17 01:46:32.589413 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-04-17 01:46:32.589427 | orchestrator | directory 2025-04-17 01:46:32.589441 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-17 01:46:32.589455 | orchestrator | 2025-04-17 01:46:32.589469 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-04-17 01:46:32.589483 | orchestrator | Thursday 17 April 2025 01:45:32 +0000 (0:00:00.586) 0:01:00.382 ******** 2025-04-17 01:46:32.589496 | orchestrator | changed: [testbed-manager] 2025-04-17 01:46:32.589510 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:46:32.589524 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:46:32.589538 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:46:32.589552 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:46:32.589565 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:46:32.589579 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:46:32.589593 | orchestrator | 2025-04-17 01:46:32.589607 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-04-17 01:46:32.589620 | orchestrator | Thursday 17 April 2025 01:45:36 +0000 (0:00:03.569) 0:01:03.951 ******** 2025-04-17 01:46:32.589634 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-17 01:46:32.589649 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-17 01:46:32.589663 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-17 01:46:32.589677 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-17 01:46:32.589691 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-17 01:46:32.589705 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-17 01:46:32.589718 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-17 01:46:32.589733 | orchestrator | 2025-04-17 01:46:32.589746 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-04-17 01:46:32.589760 | orchestrator | Thursday 17 April 2025 01:45:38 +0000 (0:00:02.710) 0:01:06.662 ******** 2025-04-17 01:46:32.589774 | orchestrator | changed: [testbed-manager] 2025-04-17 01:46:32.589788 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:46:32.589802 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:46:32.589816 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:46:32.589830 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:46:32.589850 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:46:32.589864 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:46:32.589878 | orchestrator | 2025-04-17 01:46:32.589905 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-04-17 01:46:32.589920 | orchestrator | Thursday 17 April 2025 01:45:41 +0000 (0:00:02.512) 0:01:09.175 ******** 2025-04-17 01:46:32.589934 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.589955 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:46:32.589971 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.589985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:46:32.590000 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.590091 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.590120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:46:32.590144 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.590159 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.590180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:46:32.590195 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.590210 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.590225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:46:32.590239 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.590259 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.590314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:46:32.590353 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.590372 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.590387 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.590407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:46:32.590422 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.590436 | orchestrator | 2025-04-17 01:46:32.590451 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-04-17 01:46:32.590465 | orchestrator | Thursday 17 April 2025 01:45:43 +0000 (0:00:02.530) 0:01:11.706 ******** 2025-04-17 01:46:32.590478 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-17 01:46:32.590501 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-17 01:46:32.590521 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-17 01:46:32.590535 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-17 01:46:32.590549 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-17 01:46:32.590563 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-17 01:46:32.590577 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-17 01:46:32.590590 | orchestrator | 2025-04-17 01:46:32.590604 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-04-17 01:46:32.590629 | orchestrator | Thursday 17 April 2025 01:45:46 +0000 (0:00:02.494) 0:01:14.200 ******** 2025-04-17 01:46:32.590644 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-17 01:46:32.590658 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-17 01:46:32.590672 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-17 01:46:32.590685 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-17 01:46:32.590699 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-17 01:46:32.590713 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-17 01:46:32.590727 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-17 01:46:32.590740 | orchestrator | 2025-04-17 01:46:32.590754 | orchestrator | TASK [common : Check common containers] **************************************** 2025-04-17 01:46:32.590768 | orchestrator | Thursday 17 April 2025 01:45:48 +0000 (0:00:02.055) 0:01:16.256 ******** 2025-04-17 01:46:32.590782 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.590797 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.590811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.590826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.590849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.590870 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.590885 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.590905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.590920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.590935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.590949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-17 01:46:32.590970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.590990 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.591004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.591019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.591039 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.591054 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.591068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.591089 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.591104 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.591118 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:46:32.591132 | orchestrator | 2025-04-17 01:46:32.591146 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-04-17 01:46:32.591160 | orchestrator | Thursday 17 April 2025 01:45:51 +0000 (0:00:03.235) 0:01:19.492 ******** 2025-04-17 01:46:32.591174 | orchestrator | changed: [testbed-manager] 2025-04-17 01:46:32.591199 | 
orchestrator | changed: [testbed-node-0] 2025-04-17 01:46:32.591214 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:46:32.591228 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:46:32.591241 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:46:32.591255 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:46:32.591269 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:46:32.591283 | orchestrator | 2025-04-17 01:46:32.591297 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-04-17 01:46:32.591311 | orchestrator | Thursday 17 April 2025 01:45:53 +0000 (0:00:01.441) 0:01:20.933 ******** 2025-04-17 01:46:32.591382 | orchestrator | changed: [testbed-manager] 2025-04-17 01:46:32.591398 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:46:32.591412 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:46:32.591426 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:46:32.591450 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:46:32.591465 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:46:32.591478 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:46:32.591490 | orchestrator | 2025-04-17 01:46:32.591503 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-17 01:46:32.591515 | orchestrator | Thursday 17 April 2025 01:45:54 +0000 (0:00:01.131) 0:01:22.064 ******** 2025-04-17 01:46:32.591528 | orchestrator | 2025-04-17 01:46:32.591540 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-17 01:46:32.591552 | orchestrator | Thursday 17 April 2025 01:45:54 +0000 (0:00:00.069) 0:01:22.134 ******** 2025-04-17 01:46:32.591564 | orchestrator | 2025-04-17 01:46:32.591577 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-17 01:46:32.591589 | orchestrator | Thursday 17 April 2025 01:45:54 +0000 (0:00:00.061) 0:01:22.196 ******** 2025-04-17 01:46:32.591601 | orchestrator | 2025-04-17 01:46:32.591613 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-17 01:46:32.591625 | orchestrator | Thursday 17 April 2025 01:45:54 +0000 (0:00:00.059) 0:01:22.255 ******** 2025-04-17 01:46:32.591645 | orchestrator | 2025-04-17 01:46:32.591658 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-17 01:46:32.591670 | orchestrator | Thursday 17 April 2025 01:45:54 +0000 (0:00:00.173) 0:01:22.429 ******** 2025-04-17 01:46:32.591682 | orchestrator | 2025-04-17 01:46:32.591694 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-17 01:46:32.591707 | orchestrator | Thursday 17 April 2025 01:45:54 +0000 (0:00:00.047) 0:01:22.476 ******** 2025-04-17 01:46:32.591719 | orchestrator | 2025-04-17 01:46:32.591731 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-17 01:46:32.591743 | orchestrator | Thursday 17 April 2025 01:45:54 +0000 (0:00:00.050) 0:01:22.527 ******** 2025-04-17 01:46:32.591756 | orchestrator | 2025-04-17 01:46:32.591768 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-04-17 01:46:32.591780 | orchestrator | Thursday 17 April 2025 01:45:54 +0000 (0:00:00.170) 0:01:22.697 ******** 2025-04-17 01:46:32.591792 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:46:32.591805 | 
orchestrator | changed: [testbed-manager] 2025-04-17 01:46:32.591817 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:46:32.591829 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:46:32.591842 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:46:32.591854 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:46:32.591866 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:46:32.591879 | orchestrator | 2025-04-17 01:46:32.591891 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-04-17 01:46:32.591904 | orchestrator | Thursday 17 April 2025 01:46:03 +0000 (0:00:08.187) 0:01:30.885 ******** 2025-04-17 01:46:32.591916 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:46:32.591928 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:46:32.591940 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:46:32.591952 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:46:32.591965 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:46:32.592022 | orchestrator | changed: [testbed-manager] 2025-04-17 01:46:32.592035 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:46:32.592048 | orchestrator | 2025-04-17 01:46:32.592060 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-04-17 01:46:32.592073 | orchestrator | Thursday 17 April 2025 01:46:24 +0000 (0:00:21.599) 0:01:52.485 ******** 2025-04-17 01:46:32.592085 | orchestrator | ok: [testbed-manager] 2025-04-17 01:46:32.592097 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:46:32.592110 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:46:32.592122 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:46:32.592134 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:46:32.592146 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:46:32.592159 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:46:32.592171 | orchestrator | 2025-04-17 01:46:32.592183 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-04-17 01:46:32.592196 | orchestrator | Thursday 17 April 2025 01:46:26 +0000 (0:00:02.025) 0:01:54.510 ******** 2025-04-17 01:46:32.592208 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:46:32.592221 | orchestrator | changed: [testbed-manager] 2025-04-17 01:46:32.592233 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:46:32.592245 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:46:32.592257 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:46:32.592270 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:46:32.592282 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:46:32.592294 | orchestrator | 2025-04-17 01:46:32.592307 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 01:46:32.592335 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-17 01:46:32.592349 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-17 01:46:32.592369 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-17 01:46:32.592388 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-17 01:46:35.612772 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-17 01:46:35.612882 
| orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-17 01:46:35.612900 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-17 01:46:35.612915 | orchestrator | 2025-04-17 01:46:35.612929 | orchestrator | 2025-04-17 01:46:35.612944 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-17 01:46:35.612958 | orchestrator | Thursday 17 April 2025 01:46:30 +0000 (0:00:03.964) 0:01:58.475 ******** 2025-04-17 01:46:35.612972 | orchestrator | =============================================================================== 2025-04-17 01:46:35.612986 | orchestrator | common : Ensure fluentd image is present for label check --------------- 30.85s 2025-04-17 01:46:35.613000 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 21.60s 2025-04-17 01:46:35.613022 | orchestrator | common : Restart fluentd container -------------------------------------- 8.19s 2025-04-17 01:46:35.613036 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.77s 2025-04-17 01:46:35.613065 | orchestrator | common : Copying over config.json files for services -------------------- 4.11s 2025-04-17 01:46:35.613080 | orchestrator | common : Restart cron container ----------------------------------------- 3.96s 2025-04-17 01:46:35.613094 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 3.57s 2025-04-17 01:46:35.613109 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.44s 2025-04-17 01:46:35.613123 | orchestrator | common : Check common containers ---------------------------------------- 3.24s 2025-04-17 01:46:35.613137 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.71s 2025-04-17 01:46:35.613150 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 2.55s 2025-04-17 01:46:35.613164 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.53s 2025-04-17 01:46:35.613178 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.51s 2025-04-17 01:46:35.613192 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.49s 2025-04-17 01:46:35.613205 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.41s 2025-04-17 01:46:35.613219 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.06s 2025-04-17 01:46:35.613233 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.03s 2025-04-17 01:46:35.613246 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.71s 2025-04-17 01:46:35.613261 | orchestrator | common : include_tasks -------------------------------------------------- 1.58s 2025-04-17 01:46:35.613274 | orchestrator | common : Creating log volume -------------------------------------------- 1.44s 2025-04-17 01:46:35.613288 | orchestrator | 2025-04-17 01:46:32 | INFO  | Task 82403170-7353-4b7a-9315-221a991e5a0c is in state STARTED 2025-04-17 01:46:35.613302 | orchestrator | 2025-04-17 01:46:32 | INFO  | Task 1bc4fa3e-53f6-42a8-bce6-33bc25385914 is in state STARTED 2025-04-17 01:46:35.613353 | orchestrator | 2025-04-17 01:46:32 | INFO  | Wait 1 second(s) until the 
next check 2025-04-17 01:46:35.613387 | orchestrator | [polling output condensed: from 01:46:35 to 01:47:02 the tasks f18b46f3-1ede-402c-be17-6b8a3a0b04b7, e0b8709f-1bcf-4f73-b727-9acc58049e77, dd2979ca-bbf2-481a-9fcb-fc16c328470e, d069001c-f33f-4397-9d75-217b4ec64953, 82403170-7353-4b7a-9315-221a991e5a0c and 1bc4fa3e-53f6-42a8-bce6-33bc25385914 are re-checked roughly every 3 seconds and reported "in state STARTED", separated by "Wait 1 second(s) until the next check"; 82403170-7353-4b7a-9315-221a991e5a0c reaches state SUCCESS at 01:46:50 and task 61b9c10e-e05c-4862-94d3-809682cc7536 first appears in state STARTED at 01:46:53]
2025-04-17 01:47:06.024661 | orchestrator | 2025-04-17 01:47:06 | INFO  | Task dd2979ca-bbf2-481a-9fcb-fc16c328470e is in state SUCCESS 2025-04-17 01:47:06.030241 | orchestrator | 2025-04-17 01:47:06 | INFO  | Task d069001c-f33f-4397-9d75-217b4ec64953 is in state STARTED
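The INFO lines above are the deployment CLI waiting for the API-side Ansible runs it has queued: each outstanding task is re-read from the task store, and the loop sleeps briefly between rounds until every task reports a terminal state. A minimal sketch of such a wait loop, assuming a hypothetical get_task_state(task_id) helper that returns the state string (the real osism client is more involved):

    import time

    # States after which a task no longer needs to be polled.
    TERMINAL_STATES = {"SUCCESS", "FAILURE"}

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        """Poll every pending task until all of them reach a terminal state."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # e.g. "STARTED" or "SUCCESS"
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL_STATES:
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval:.0f} second(s) until the next check")
                time.sleep(interval)

Note that the rounds in the log arrive roughly every 3 seconds even though the message announces a 1-second wait; presumably the state lookups themselves account for the difference, a detail the sketch ignores.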
2025-04-17 01:47:06.031175 | orchestrator | 2025-04-17 01:47:06.031213 | orchestrator | 2025-04-17 01:47:06.031230 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-17 01:47:06.031246 | orchestrator | 2025-04-17 01:47:06.031262 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-17 01:47:06.031277 | orchestrator | Thursday 17 April 2025 01:46:34 +0000 (0:00:00.315) 0:00:00.315 ******** 2025-04-17 01:47:06.031293 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:47:06.031339 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:47:06.031355 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:47:06.031369 | orchestrator | 2025-04-17 01:47:06.031383 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-17 01:47:06.031397 | orchestrator | Thursday 17 April 2025 01:46:35 +0000 (0:00:00.494) 0:00:00.810 ******** 2025-04-17 01:47:06.031412 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-04-17 01:47:06.031427 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-04-17 01:47:06.031441 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-04-17 01:47:06.031455 | orchestrator | 2025-04-17 01:47:06.031468 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-04-17 01:47:06.031483 | orchestrator | 2025-04-17 01:47:06.031496 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-04-17 01:47:06.031511 | orchestrator | Thursday 17 April 2025 01:46:35 +0000 (0:00:00.480) 0:00:01.290 ******** 2025-04-17 01:47:06.031525 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:47:06.031568 | orchestrator | 2025-04-17 01:47:06.031583 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-04-17 01:47:06.031597 | orchestrator | Thursday 17 April 2025 01:46:36 +0000 (0:00:00.950) 0:00:02.241 ******** 2025-04-17 01:47:06.031652 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-04-17 01:47:06.031668 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-04-17 01:47:06.031682 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-04-17 01:47:06.031696 | orchestrator | 2025-04-17 01:47:06.031710 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-04-17 01:47:06.031724 | orchestrator | Thursday 17 April 2025 01:46:37 +0000 (0:00:00.722) 0:00:02.963 ******** 2025-04-17 01:47:06.031739 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-04-17 01:47:06.031756 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-04-17 01:47:06.031773 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-04-17 01:47:06.031789 | orchestrator | 2025-04-17 01:47:06.031806 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-04-17 01:47:06.031822 | orchestrator | Thursday 17 April 2025 01:46:39 +0000 (0:00:01.723) 0:00:04.687 ******** 2025-04-17 01:47:06.031838 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:47:06.031869 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:47:06.031885 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:47:06.031901 | orchestrator | 2025-04-17 01:47:06.031917 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-04-17 01:47:06.031933 | orchestrator | Thursday 17 April 2025 01:46:41 +0000 (0:00:02.498) 0:00:07.185 ******** 2025-04-17 01:47:06.031949 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:47:06.031965 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:47:06.031981 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:47:06.032028 | orchestrator | 2025-04-17 01:47:06.032053 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 01:47:06.032068 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:47:06.032113 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0
failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:47:06.032129 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:47:06.032143 | orchestrator | 2025-04-17 01:47:06.032157 | orchestrator | 2025-04-17 01:47:06.032171 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-17 01:47:06.032185 | orchestrator | Thursday 17 April 2025 01:46:49 +0000 (0:00:07.710) 0:00:14.896 ******** 2025-04-17 01:47:06.032199 | orchestrator | =============================================================================== 2025-04-17 01:47:06.032213 | orchestrator | memcached : Restart memcached container --------------------------------- 7.71s 2025-04-17 01:47:06.032227 | orchestrator | memcached : Check memcached container ----------------------------------- 2.50s 2025-04-17 01:47:06.032240 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.72s 2025-04-17 01:47:06.032254 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.95s 2025-04-17 01:47:06.032268 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.72s 2025-04-17 01:47:06.032282 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s 2025-04-17 01:47:06.032296 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s 2025-04-17 01:47:06.032346 | orchestrator | 2025-04-17 01:47:06.032360 | orchestrator | 2025-04-17 01:47:06.032375 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-17 01:47:06.032388 | orchestrator | 2025-04-17 01:47:06.032402 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-17 01:47:06.032416 | orchestrator | Thursday 17 April 2025 01:46:34 +0000 (0:00:00.347) 0:00:00.347 ******** 2025-04-17 01:47:06.032440 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:47:06.032455 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:47:06.032469 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:47:06.032483 | orchestrator | 2025-04-17 01:47:06.032497 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-17 01:47:06.032523 | orchestrator | Thursday 17 April 2025 01:46:35 +0000 (0:00:00.387) 0:00:00.734 ******** 2025-04-17 01:47:06.032538 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-04-17 01:47:06.032552 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-04-17 01:47:06.032566 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-04-17 01:47:06.032580 | orchestrator | 2025-04-17 01:47:06.032594 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-04-17 01:47:06.032608 | orchestrator | 2025-04-17 01:47:06.032622 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-04-17 01:47:06.032636 | orchestrator | Thursday 17 April 2025 01:46:35 +0000 (0:00:00.409) 0:00:01.143 ******** 2025-04-17 01:47:06.032650 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:47:06.032680 | orchestrator | 2025-04-17 01:47:06.032694 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-04-17 01:47:06.032720 | 
orchestrator | Thursday 17 April 2025 01:46:36 +0000 (0:00:00.777) 0:00:01.921 ******** 2025-04-17 01:47:06.032737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.032758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.032774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.032788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.032813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.032844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.032860 | orchestrator | 2025-04-17 01:47:06.032874 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-04-17 01:47:06.032889 | orchestrator | Thursday 17 April 2025 01:46:37 +0000 (0:00:01.459) 0:00:03.380 ******** 2025-04-17 01:47:06.032903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.032917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.032932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.032946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.033013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 
'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.033040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.033055 | orchestrator | 2025-04-17 01:47:06.033069 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-04-17 01:47:06.033083 | orchestrator | Thursday 17 April 2025 01:46:40 +0000 (0:00:02.486) 0:00:05.867 ******** 2025-04-17 01:47:06.033097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.033111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.033126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.033140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 
'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.033163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.033186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.033201 | orchestrator | 2025-04-17 01:47:06.033215 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-04-17 01:47:06.033230 | orchestrator | Thursday 17 April 2025 01:46:43 +0000 (0:00:03.202) 0:00:09.069 ******** 2025-04-17 01:47:06.033244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.033258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': 
'30'}}}) 2025-04-17 01:47:06.033273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.033287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.033332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.033354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-17 01:47:06.033472 | orchestrator | 2025-04-17 01:47:06.033489 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-17 01:47:06.033503 | orchestrator | Thursday 17 April 2025 01:46:46 +0000 (0:00:02.461) 0:00:11.531 ******** 2025-04-17 01:47:06.033517 | orchestrator | 2025-04-17 01:47:06.033531 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-17 01:47:06.033545 | orchestrator | Thursday 17 April 2025 01:46:46 +0000 (0:00:00.108) 0:00:11.640 ******** 2025-04-17 01:47:06.033559 | orchestrator | 2025-04-17 01:47:06.033573 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2025-04-17 01:47:06.033587 | orchestrator | Thursday 17 April 2025 01:46:46 +0000 (0:00:00.105) 0:00:11.745 ******** 2025-04-17 01:47:06.033601 | orchestrator | 2025-04-17 01:47:06.033615 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-04-17 01:47:06.033629 | orchestrator | Thursday 17 April 2025 01:46:46 +0000 (0:00:00.201) 0:00:11.947 ******** 2025-04-17 01:47:06.033643 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:47:06.033657 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:47:06.033671 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:47:06.033685 | orchestrator | 2025-04-17 01:47:06.033699 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-04-17 01:47:06.033713 | orchestrator | Thursday 17 April 2025 01:46:54 +0000 (0:00:08.140) 0:00:20.087 ******** 2025-04-17 01:47:06.033726 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:47:06.033740 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:47:06.033754 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:47:06.033768 | orchestrator | 2025-04-17 01:47:06.033782 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 01:47:06.033796 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:47:06.033810 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:47:06.033824 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:47:06.033847 | orchestrator | 2025-04-17 01:47:06.033861 | orchestrator | 2025-04-17 01:47:06.033875 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-17 01:47:06.033889 | orchestrator | Thursday 17 April 2025 01:47:03 +0000 (0:00:08.865) 0:00:28.953 ******** 2025-04-17 01:47:06.033903 | orchestrator | =============================================================================== 2025-04-17 01:47:06.033917 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.87s 2025-04-17 01:47:06.033931 | orchestrator | redis : Restart redis container ----------------------------------------- 8.14s 2025-04-17 01:47:06.033945 | orchestrator | redis : Copying over redis config files --------------------------------- 3.20s 2025-04-17 01:47:06.033958 | orchestrator | redis : Copying over default config.json files -------------------------- 2.49s 2025-04-17 01:47:06.033972 | orchestrator | redis : Check redis containers ------------------------------------------ 2.46s 2025-04-17 01:47:06.033986 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.46s 2025-04-17 01:47:06.033999 | orchestrator | redis : include_tasks --------------------------------------------------- 0.78s 2025-04-17 01:47:06.034077 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.42s 2025-04-17 01:47:06.034095 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2025-04-17 01:47:06.034111 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s 2025-04-17 01:47:06.034133 | orchestrator | 2025-04-17 01:47:06 | INFO  | Task 61b9c10e-e05c-4862-94d3-809682cc7536 is in state STARTED 
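Every container definition echoed in the items above carries a kolla-style healthcheck: interval, retries, start_period and timeout as strings of seconds, plus a CMD-SHELL test command. The Docker Engine API expects these durations as nanosecond integers, so a deploy tool has to translate them; a rough illustration of that mapping (plain Python with invented helper names, not kolla-ansible's actual code):

    # A healthcheck as it appears in the redis items logged above.
    kolla_healthcheck = {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
        "timeout": "30",
    }

    NANOSECONDS_PER_SECOND = 1_000_000_000

    def to_docker_healthcheck(hc):
        """Convert second-valued strings into the nanosecond integers that
        the Docker Engine API's HealthConfig structure expects."""
        return {
            "Test": hc["test"],
            "Interval": int(hc["interval"]) * NANOSECONDS_PER_SECOND,
            "Timeout": int(hc["timeout"]) * NANOSECONDS_PER_SECOND,
            "StartPeriod": int(hc["start_period"]) * NANOSECONDS_PER_SECOND,
            "Retries": int(hc["retries"]),
        }

    print(to_docker_healthcheck(kolla_healthcheck))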
2025-04-17 01:47:09.074278 | orchestrator | 2025-04-17 01:47:06 | INFO  | Task 1bc4fa3e-53f6-42a8-bce6-33bc25385914 is in state STARTED 2025-04-17 01:47:09.074452 | orchestrator | 2025-04-17 01:47:06 | INFO  | Wait 1 second(s) until the next check
[polling output condensed: from 01:47:09 to 01:47:42 the tasks f18b46f3-1ede-402c-be17-6b8a3a0b04b7, e0b8709f-1bcf-4f73-b727-9acc58049e77, d069001c-f33f-4397-9d75-217b4ec64953, 61b9c10e-e05c-4862-94d3-809682cc7536 and 1bc4fa3e-53f6-42a8-bce6-33bc25385914 are re-checked roughly every 3 seconds and reported "in state STARTED", separated by "Wait 1 second(s) until the next check"]
2025-04-17 01:47:45.564099 | orchestrator | 2025-04-17 01:47:45 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:47:45.564519 | orchestrator | 2025-04-17 01:47:45 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:47:45.564555 | orchestrator | 2025-04-17 01:47:45 | INFO  | Task d069001c-f33f-4397-9d75-217b4ec64953 is in state SUCCESS
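The recurring bootstrap plays ("Group hosts based on Kolla action", "Group hosts based on enabled services") sort hosts into dynamic groups named after their configuration flags, e.g. enable_redis_True or enable_openvswitch_True_enable_ovs_dpdk_False, so that each role's play only targets hosts that actually run the service. Ansible does this with its group_by module; a toy reconstruction of the naming scheme in Python, with invented per-host flags:

    from collections import defaultdict

    # Invented host vars standing in for the testbed's real configuration.
    hosts = {
        "testbed-node-0": {"enable_openvswitch": True, "enable_ovs_dpdk": False},
        "testbed-node-3": {"enable_openvswitch": True, "enable_ovs_dpdk": False},
    }

    def group_hosts(hosts, keys):
        """Mimic group_by: build a group name of the form key_value[_key_value...]
        from each host's flags and collect the matching hosts per group."""
        groups = defaultdict(list)
        for name, flags in hosts.items():
            group = "_".join(f"{key}_{flags[key]}" for key in keys)
            groups[group].append(name)
        return dict(groups)

    print(group_hosts(hosts, ["enable_openvswitch", "enable_ovs_dpdk"]))
    # {'enable_openvswitch_True_enable_ovs_dpdk_False': ['testbed-node-0', 'testbed-node-3']}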
2025-04-17 01:47:45.566945 | orchestrator | 2025-04-17 01:47:45.566990 | orchestrator | 2025-04-17 01:47:45.567005 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-17 01:47:45.567020 | orchestrator | 2025-04-17 01:47:45.567034 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-17 01:47:45.567048 | orchestrator | Thursday 17 April 2025 01:46:35 +0000 (0:00:00.354) 0:00:00.354 ******** 2025-04-17 01:47:45.567062 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:47:45.567078 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:47:45.567103 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:47:45.567117 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:47:45.567131 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:47:45.567144 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:47:45.567158 | orchestrator | 2025-04-17 01:47:45.567172 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-17 01:47:45.567186 | orchestrator | Thursday 17 April 2025 01:46:36 +0000 (0:00:00.826) 0:00:01.181 ******** 2025-04-17 01:47:45.567200 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-17 01:47:45.567214 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-17 01:47:45.567228 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-17 01:47:45.567242 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-17 01:47:45.567256 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-17 01:47:45.567269 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-17 01:47:45.567342 | orchestrator | 2025-04-17 01:47:45.567358 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-04-17 01:47:45.567373 | orchestrator | 2025-04-17 01:47:45.567392 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-04-17 01:47:45.567406 | orchestrator | Thursday 17 April 2025 01:46:37 +0000 (0:00:00.921) 0:00:02.102 ******** 2025-04-17 01:47:45.567421 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:47:45.567436 | orchestrator | 2025-04-17 01:47:45.567450 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-04-17 01:47:45.567464 | orchestrator | Thursday 17 April 2025 01:46:38 +0000 (0:00:01.263) 0:00:03.366 ******** 2025-04-17 01:47:45.567478 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-04-17 01:47:45.567492 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-04-17 01:47:45.567506 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-04-17 01:47:45.567520 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-04-17 01:47:45.567534 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-04-17 01:47:45.567548 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-04-17 01:47:45.567564 | orchestrator | 2025-04-17 01:47:45.567580 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-04-17 01:47:45.567595 | orchestrator | Thursday 17 April 2025 01:46:39 +0000 (0:00:00.989) 0:00:04.355 ******** 2025-04-17 01:47:45.567611 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-04-17 01:47:45.567648 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-04-17 01:47:45.567663 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-04-17 01:47:45.567677 | orchestrator | changed:
[testbed-node-0] => (item=openvswitch) 2025-04-17 01:47:45.567691 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-04-17 01:47:45.567705 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-04-17 01:47:45.567719 | orchestrator | 2025-04-17 01:47:45.567732 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-04-17 01:47:45.567746 | orchestrator | Thursday 17 April 2025 01:46:41 +0000 (0:00:02.096) 0:00:06.452 ******** 2025-04-17 01:47:45.567760 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-04-17 01:47:45.567774 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:47:45.567788 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-04-17 01:47:45.567802 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:47:45.567816 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-04-17 01:47:45.567830 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:47:45.567844 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-04-17 01:47:45.567858 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:47:45.567875 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-04-17 01:47:45.567889 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:47:45.567903 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-04-17 01:47:45.567918 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:47:45.567932 | orchestrator | 2025-04-17 01:47:45.567945 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-04-17 01:47:45.567959 | orchestrator | Thursday 17 April 2025 01:46:42 +0000 (0:00:01.476) 0:00:07.928 ******** 2025-04-17 01:47:45.567973 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:47:45.567987 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:47:45.568000 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:47:45.568014 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:47:45.568028 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:47:45.568041 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:47:45.568055 | orchestrator | 2025-04-17 01:47:45.568069 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-04-17 01:47:45.568083 | orchestrator | Thursday 17 April 2025 01:46:43 +0000 (0:00:00.610) 0:00:08.538 ******** 2025-04-17 01:47:45.568111 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568130 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568146 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568203 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568219 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568233 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568254 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568336 | orchestrator | 2025-04-17 01:47:45.568351 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-04-17 01:47:45.568365 | orchestrator | Thursday 17 April 2025 01:46:45 +0000 (0:00:02.104) 0:00:10.643 ******** 2025-04-17 01:47:45.568379 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568400 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568430 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568470 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568556 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568610 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-17 01:47:45.568665 | orchestrator | 2025-04-17 01:47:45.568679 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-04-17 01:47:45.568693 | orchestrator | Thursday 17 April 2025 01:46:49 +0000 (0:00:03.377) 0:00:14.020 ******** 2025-04-17 01:47:45.568707 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:47:45.568721 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:47:45.568735 | 
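Each container definition above carries a healthcheck dict (interval, retries, start_period, test, timeout). Assuming these durations are seconds and map onto Docker's native healthcheck options, a sketch of the translation into `docker run` flags; illustrative only, not the kolla container module's internals:

```python
healthcheck = {  # as logged for the openvswitch_db container
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "ovsdb-client list-dbs"],
    "timeout": "30",
}

def healthcheck_flags(hc: dict) -> list[str]:
    """Map a kolla-style healthcheck dict onto docker run flags.

    test[0] == "CMD-SHELL" means the command string in test[1]
    is run through a shell inside the container.
    """
    assert hc["test"][0] == "CMD-SHELL"
    return [
        f"--health-cmd={hc['test'][1]}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

print(" ".join(["docker", "run", "-d", "--name", "openvswitch_db",
                *healthcheck_flags(healthcheck), "<image>"]))
```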
orchestrator | changed: [testbed-node-0] 2025-04-17 01:47:45.568748 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:47:45.568762 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:47:45.568775 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:47:45.568789 | orchestrator | 2025-04-17 01:47:45.568803 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-04-17 01:47:45.568816 | orchestrator | Thursday 17 April 2025 01:46:51 +0000 (0:00:02.053) 0:00:16.073 ******** 2025-04-17 01:47:45.568830 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:47:45.568843 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:47:45.568857 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:47:45.568870 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:47:45.568884 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:47:45.568898 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:47:45.568911 | orchestrator | 2025-04-17 01:47:45.568925 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-04-17 01:47:45.568939 | orchestrator | Thursday 17 April 2025 01:46:53 +0000 (0:00:02.068) 0:00:18.142 ******** 2025-04-17 01:47:45.568952 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:47:45.568966 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:47:45.568979 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:47:45.568993 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:47:45.569007 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:47:45.569021 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:47:45.569035 | orchestrator | 2025-04-17 01:47:45.569048 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-04-17 01:47:45.569062 | orchestrator | Thursday 17 April 2025 01:46:54 +0000 (0:00:01.069) 0:00:19.211 ******** 2025-04-17 01:47:45.569076 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-17 01:47:45.569091 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-17 01:47:45.569111 | orchestrator | changed: [testbed-node-5] 
=> (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-17 01:47:45.569131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-17 01:47:45.569145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-17 01:47:45.569160 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-17 01:47:45.569174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-17 01:47:45.569189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-17 01:47:45.569216 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-17 01:47:45.569231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-17 01:47:45.569246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-17 01:47:45.569260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-17 01:47:45.569274 | orchestrator | 2025-04-17 01:47:45.569315 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-17 01:47:45.569330 | orchestrator | Thursday 17 April 2025 01:46:56 +0000 (0:00:02.652) 0:00:21.864 ******** 2025-04-17 01:47:45.569344 | orchestrator | 2025-04-17 01:47:45.569358 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-17 01:47:45.569372 | orchestrator | Thursday 17 April 2025 01:46:57 +0000 (0:00:00.182) 0:00:22.046 ******** 2025-04-17 01:47:45.569386 | orchestrator | 2025-04-17 01:47:45.569400 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-17 01:47:45.569414 | orchestrator | Thursday 17 April 2025 01:46:57 +0000 (0:00:00.398) 0:00:22.445 ******** 2025-04-17 01:47:45.569427 | orchestrator | 2025-04-17 01:47:45.569441 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-17 01:47:45.569463 | orchestrator | Thursday 17 April 2025 01:46:57 +0000 (0:00:00.119) 0:00:22.565 ******** 2025-04-17 01:47:45.569477 | orchestrator | 2025-04-17 01:47:45.569490 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-17 01:47:45.569504 | orchestrator | Thursday 17 April 2025 01:46:57 +0000 (0:00:00.226) 0:00:22.791 ******** 2025-04-17 01:47:45.569518 | orchestrator | 2025-04-17 01:47:45.569536 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-17 01:47:45.569550 | orchestrator | Thursday 17 April 2025 01:46:57 +0000 (0:00:00.117) 0:00:22.909 ******** 2025-04-17 01:47:45.569564 | orchestrator | 2025-04-17 01:47:45.569578 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-04-17 01:47:45.569591 | orchestrator | Thursday 17 April 2025 01:46:58 +0000 (0:00:00.214) 0:00:23.124 ******** 2025-04-17 01:47:45.569605 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:47:45.569619 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:47:45.569632 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:47:45.569646 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:47:45.569660 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:47:45.569673 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:47:45.569687 | orchestrator | 2025-04-17 01:47:45.569700 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-04-17 01:47:45.569714 | orchestrator | Thursday 17 April 2025 01:47:08 +0000 (0:00:10.787) 0:00:33.912 ******** 2025-04-17 01:47:45.569734 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:47:45.569748 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:47:45.569762 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:47:45.569776 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:47:45.569789 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:47:45.569803 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:47:45.569817 | orchestrator | 2025-04-17 01:47:45.569830 | orchestrator | RUNNING HANDLER [openvswitch : 
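The "Waiting for openvswitch_db service to be ready" handler above blocks until ovsdb responds. A minimal sketch of such a readiness probe, reusing the container name and healthcheck command from the definitions above; the 60 s timeout and 2 s cadence are assumptions, not kolla-ansible's exact logic:

```python
import subprocess
import time

def wait_for_openvswitch_db(timeout: float = 60.0) -> None:
    """Block until ovsdb answers inside the openvswitch_db container."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # Same command the container healthcheck runs.
        probe = subprocess.run(
            ["docker", "exec", "openvswitch_db", "ovsdb-client", "list-dbs"],
            capture_output=True,
        )
        if probe.returncode == 0:
            return  # service is ready
        time.sleep(2)
    raise TimeoutError("openvswitch_db did not become ready in time")
```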
Restart openvswitch-vswitchd container] ********* 2025-04-17 01:47:45.569844 | orchestrator | Thursday 17 April 2025 01:47:11 +0000 (0:00:02.417) 0:00:36.329 ******** 2025-04-17 01:47:45.569858 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:47:45.569872 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:47:45.569885 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:47:45.569899 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:47:45.569913 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:47:45.569927 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:47:45.569949 | orchestrator | 2025-04-17 01:47:45.569963 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-04-17 01:47:45.569977 | orchestrator | Thursday 17 April 2025 01:47:20 +0000 (0:00:08.997) 0:00:45.327 ******** 2025-04-17 01:47:45.569991 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-04-17 01:47:45.570010 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-04-17 01:47:45.570060 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-04-17 01:47:45.570075 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-04-17 01:47:45.570089 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-04-17 01:47:45.570103 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-04-17 01:47:45.570117 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-04-17 01:47:45.570131 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-04-17 01:47:45.570144 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-04-17 01:47:45.570165 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-04-17 01:47:45.570178 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-04-17 01:47:45.570192 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-04-17 01:47:45.570206 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-17 01:47:45.570220 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-17 01:47:45.570233 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-17 01:47:45.570246 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-17 01:47:45.570260 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-17 01:47:45.570274 | orchestrator | ok: [testbed-node-2] => (item={'col': 
'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-04-17 01:47:45.570303 | orchestrator |
2025-04-17 01:47:45.570318 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-04-17 01:47:45.570332 | orchestrator | Thursday 17 April 2025 01:47:27 +0000 (0:00:07.208) 0:00:52.535 ********
2025-04-17 01:47:45.570345 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-04-17 01:47:45.570360 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:47:45.570375 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-04-17 01:47:45.570389 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:47:45.570403 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-04-17 01:47:45.570417 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:47:45.570431 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-04-17 01:47:45.570444 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-04-17 01:47:45.570458 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-04-17 01:47:45.570471 | orchestrator |
2025-04-17 01:47:45.570485 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-04-17 01:47:45.570498 | orchestrator | Thursday 17 April 2025 01:47:30 +0000 (0:00:02.783) 0:00:55.319 ********
2025-04-17 01:47:45.570512 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-04-17 01:47:45.570525 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:47:45.570539 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-04-17 01:47:45.570552 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:47:45.570565 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-04-17 01:47:45.570579 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:47:45.570593 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-04-17 01:47:45.570614 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-04-17 01:47:48.604527 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-04-17 01:47:48.604667 | orchestrator |
2025-04-17 01:47:48.604689 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-04-17 01:47:48.604705 | orchestrator | Thursday 17 April 2025 01:47:34 +0000 (0:00:04.370) 0:00:59.690 ********
2025-04-17 01:47:48.604719 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:47:48.604734 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:47:48.604748 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:47:48.604763 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:47:48.604776 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:47:48.604790 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:47:48.604804 | orchestrator |
2025-04-17 01:47:48.604818 | orchestrator | PLAY RECAP *********************************************************************
2025-04-17 01:47:48.604917 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-17 01:47:48.604937 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-17 01:47:48.604951 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-04-17 01:47:48.604965 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-17 01:47:48.604979 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-17 01:47:48.605011 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-17 01:47:48.605026 | orchestrator |
2025-04-17 01:47:48.605040 | orchestrator |
2025-04-17 01:47:48.605058 | orchestrator | TASKS RECAP ********************************************************************
2025-04-17 01:47:48.605082 | orchestrator | Thursday 17 April 2025 01:47:42 +0000 (0:00:08.024) 0:01:07.714 ********
2025-04-17 01:47:48.605107 | orchestrator | ===============================================================================
2025-04-17 01:47:48.605134 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.02s
2025-04-17 01:47:48.605160 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.79s
2025-04-17 01:47:48.605176 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.21s
2025-04-17 01:47:48.605190 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.37s
2025-04-17 01:47:48.605205 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.38s
2025-04-17 01:47:48.605221 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.78s
2025-04-17 01:47:48.605237 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.65s
2025-04-17 01:47:48.605253 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.42s
2025-04-17 01:47:48.605269 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.10s
2025-04-17 01:47:48.605347 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.10s
2025-04-17 01:47:48.605364 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 2.07s
2025-04-17 01:47:48.605381 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 2.05s
2025-04-17 01:47:48.605403 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.48s
2025-04-17 01:47:48.605418 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.26s
2025-04-17 01:47:48.605465 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.26s
2025-04-17 01:47:48.605481 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.07s
2025-04-17 01:47:48.605495 | orchestrator | module-load : Load modules ---------------------------------------------- 0.99s
2025-04-17 01:47:48.605509 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.92s
2025-04-17 01:47:48.605522 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.83s
2025-04-17 01:47:48.605536 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.61s
2025-04-17 01:47:48.605550 | orchestrator | 2025-04-17 01:47:45 | INFO  | Task ba7138ae-b021-4a6b-8f0f-c760e6c91a8c is in state STARTED
2025-04-17 01:47:48.605564 | orchestrator | 2025-04-17 01:47:45 | INFO  | Task 61b9c10e-e05c-4862-94d3-809682cc7536 is in state STARTED
2025-04-17
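The three OVS tasks summarized in the recap above ("Set system-id, hostname and hw-offload", "Ensuring OVS bridge is properly setup", "Ensuring OVS ports are properly setup") boil down to a handful of ovs-vsctl operations. Roughly equivalent direct calls for one network node; illustrative only, since in the deployment these run through kolla's wrappers, and the compute nodes 3-5 skip the bridge/port items:

```python
import subprocess

def ovs_vsctl(*args: str) -> None:
    subprocess.run(["ovs-vsctl", *args], check=True)

node = "testbed-node-0"  # per-node value, as in the task items above

# 'Set system-id, hostname and hw-offload': two external_ids per node,
# plus dropping other_config:hw-offload (the items with state 'absent').
ovs_vsctl("set", "Open_vSwitch", ".", f"external_ids:system-id={node}")
ovs_vsctl("set", "Open_vSwitch", ".", f"external_ids:hostname={node}")
ovs_vsctl("remove", "Open_vSwitch", ".", "other_config", "hw-offload")

# 'Ensuring OVS bridge/ports are properly setup': only the network
# nodes 0-2 create br-ex and its vxlan0 port; nodes 3-5 skip these.
ovs_vsctl("--may-exist", "add-br", "br-ex")
ovs_vsctl("--may-exist", "add-port", "br-ex", "vxlan0")
```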
01:47:48.605578 | orchestrator | 2025-04-17 01:47:45 | INFO  | Task 1bc4fa3e-53f6-42a8-bce6-33bc25385914 is in state STARTED 2025-04-17 01:47:48.605603 | orchestrator | 2025-04-17 01:47:45 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:47:48.605665 | orchestrator | 2025-04-17 01:47:48 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:47:48.605785 | orchestrator | 2025-04-17 01:47:48 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:47:48.605804 | orchestrator | 2025-04-17 01:47:48 | INFO  | Task ba7138ae-b021-4a6b-8f0f-c760e6c91a8c is in state STARTED 2025-04-17 01:47:48.605823 | orchestrator | 2025-04-17 01:47:48 | INFO  | Task 61b9c10e-e05c-4862-94d3-809682cc7536 is in state STARTED 2025-04-17 01:47:51.632121 | orchestrator | 2025-04-17 01:47:48 | INFO  | Task 1bc4fa3e-53f6-42a8-bce6-33bc25385914 is in state STARTED 2025-04-17 01:47:51.632237 | orchestrator | 2025-04-17 01:47:48 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:47:51.632335 | orchestrator | 2025-04-17 01:47:51 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:47:51.635421 | orchestrator | 2025-04-17 01:47:51 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:47:51.635633 | orchestrator | 2025-04-17 01:47:51 | INFO  | Task ba7138ae-b021-4a6b-8f0f-c760e6c91a8c is in state STARTED 2025-04-17 01:47:51.636074 | orchestrator | 2025-04-17 01:47:51 | INFO  | Task 61b9c10e-e05c-4862-94d3-809682cc7536 is in state STARTED 2025-04-17 01:47:51.636659 | orchestrator | 2025-04-17 01:47:51 | INFO  | Task 1bc4fa3e-53f6-42a8-bce6-33bc25385914 is in state STARTED 2025-04-17 01:47:54.666915 | orchestrator | 2025-04-17 01:47:51 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:47:54.667059 | orchestrator | 2025-04-17 01:47:54 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:47:54.670479 | orchestrator | 2025-04-17 01:47:54 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:47:54.671622 | orchestrator | 2025-04-17 01:47:54 | INFO  | Task ba7138ae-b021-4a6b-8f0f-c760e6c91a8c is in state STARTED 2025-04-17 01:47:54.672067 | orchestrator | 2025-04-17 01:47:54 | INFO  | Task 61b9c10e-e05c-4862-94d3-809682cc7536 is in state STARTED 2025-04-17 01:47:54.672752 | orchestrator | 2025-04-17 01:47:54 | INFO  | Task 1bc4fa3e-53f6-42a8-bce6-33bc25385914 is in state STARTED 2025-04-17 01:47:57.722222 | orchestrator | 2025-04-17 01:47:54 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:47:57.722406 | orchestrator | 2025-04-17 01:47:57 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:47:57.723077 | orchestrator | 2025-04-17 01:47:57 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:47:57.723116 | orchestrator | 2025-04-17 01:47:57 | INFO  | Task ba7138ae-b021-4a6b-8f0f-c760e6c91a8c is in state STARTED 2025-04-17 01:47:57.723747 | orchestrator | 2025-04-17 01:47:57 | INFO  | Task 61b9c10e-e05c-4862-94d3-809682cc7536 is in state STARTED 2025-04-17 01:47:57.724161 | orchestrator | 2025-04-17 01:47:57 | INFO  | Task 1bc4fa3e-53f6-42a8-bce6-33bc25385914 is in state STARTED 2025-04-17 01:47:57.724423 | orchestrator | 2025-04-17 01:47:57 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:48:00.759826 | orchestrator | 2025-04-17 01:48:00 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 
01:48:00.760127 | orchestrator | 2025-04-17 01:48:00 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:48:00.762313 | orchestrator | 2025-04-17 01:48:00 | INFO  | Task ba7138ae-b021-4a6b-8f0f-c760e6c91a8c is in state STARTED 2025-04-17 01:48:00.762963 | orchestrator | 2025-04-17 01:48:00 | INFO  | Task 61b9c10e-e05c-4862-94d3-809682cc7536 is in state STARTED 2025-04-17 01:48:00.763001 | orchestrator | 2025-04-17 01:48:00 | INFO  | Task 1bc4fa3e-53f6-42a8-bce6-33bc25385914 is in state STARTED 2025-04-17 01:48:03.803954 | orchestrator | 2025-04-17 01:48:00 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:48:03.804167 | orchestrator | 2025-04-17 01:48:03 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:48:03.804261 | orchestrator | 2025-04-17 01:48:03 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:48:03.804345 | orchestrator | 2025-04-17 01:48:03 | INFO  | Task ba7138ae-b021-4a6b-8f0f-c760e6c91a8c is in state STARTED 2025-04-17 01:48:03.804362 | orchestrator | 2025-04-17 01:48:03 | INFO  | Task 61b9c10e-e05c-4862-94d3-809682cc7536 is in state STARTED 2025-04-17 01:48:03.804382 | orchestrator | 2025-04-17 01:48:03 | INFO  | Task 1bc4fa3e-53f6-42a8-bce6-33bc25385914 is in state STARTED 2025-04-17 01:48:06.838076 | orchestrator | 2025-04-17 01:48:03 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:48:06.838260 | orchestrator | 2025-04-17 01:48:06 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:48:06.839535 | orchestrator | 2025-04-17 01:48:06 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:48:06.841007 | orchestrator | 2025-04-17 01:48:06 | INFO  | Task ba7138ae-b021-4a6b-8f0f-c760e6c91a8c is in state STARTED 2025-04-17 01:48:06.842529 | orchestrator | 2025-04-17 01:48:06 | INFO  | Task 61b9c10e-e05c-4862-94d3-809682cc7536 is in state STARTED 2025-04-17 01:48:06.843698 | orchestrator | 2025-04-17 01:48:06 | INFO  | Task 1bc4fa3e-53f6-42a8-bce6-33bc25385914 is in state STARTED 2025-04-17 01:48:09.876865 | orchestrator | 2025-04-17 01:48:06 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:48:09.877019 | orchestrator | 2025-04-17 01:48:09 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:48:09.877160 | orchestrator | 2025-04-17 01:48:09 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:48:09.877189 | orchestrator | 2025-04-17 01:48:09 | INFO  | Task ba7138ae-b021-4a6b-8f0f-c760e6c91a8c is in state STARTED 2025-04-17 01:48:09.877933 | orchestrator | 2025-04-17 01:48:09 | INFO  | Task 61b9c10e-e05c-4862-94d3-809682cc7536 is in state STARTED 2025-04-17 01:48:09.878635 | orchestrator | 2025-04-17 01:48:09 | INFO  | Task 1bc4fa3e-53f6-42a8-bce6-33bc25385914 is in state STARTED 2025-04-17 01:48:12.909900 | orchestrator | 2025-04-17 01:48:09 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:48:12.910074 | orchestrator | 2025-04-17 01:48:12 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:48:12.910848 | orchestrator | 2025-04-17 01:48:12 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:48:12.910885 | orchestrator | 2025-04-17 01:48:12 | INFO  | Task ba7138ae-b021-4a6b-8f0f-c760e6c91a8c is in state STARTED 2025-04-17 01:48:12.911647 | orchestrator | 2025-04-17 01:48:12 | INFO  | Task 61b9c10e-e05c-4862-94d3-809682cc7536 is in 
state STARTED
[... identical status polling repeats every ~3 s from 01:48:12 to 01:48:55: tasks f18b46f3-1ede-402c-be17-6b8a3a0b04b7, e0b8709f-1bcf-4f73-b727-9acc58049e77, ba7138ae-b021-4a6b-8f0f-c760e6c91a8c, 61b9c10e-e05c-4862-94d3-809682cc7536 and 1bc4fa3e-53f6-42a8-bce6-33bc25385914 remain in state STARTED, each round followed by "Wait 1 second(s) until the next check" ...]
2025-04-17 01:48:58.588476 | orchestrator | 2025-04-17 01:48:58 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED
2025-04-17 01:48:58.589877 | orchestrator | 2025-04-17 01:48:58 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:48:58.589982 | orchestrator | 2025-04-17 01:48:58 | INFO  | Task ba7138ae-b021-4a6b-8f0f-c760e6c91a8c is in state STARTED
2025-04-17 01:48:58.591324 | orchestrator | 2025-04-17 01:48:58 | INFO  | Task 61b9c10e-e05c-4862-94d3-809682cc7536 is in state SUCCESS
2025-04-17 01:48:58.593029 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-04-17 01:48:58.593060 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-04-17 01:48:58.593074 | orchestrator | Thursday 17 April 2025 01:46:53 +0000 (0:00:00.171) 0:00:00.171 ********
2025-04-17 01:48:58.593088 | orchestrator | ok: [localhost] => {
2025-04-17 01:48:58.593105 | orchestrator |     "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-04-17 01:48:58.593120 | orchestrator | }
2025-04-17 01:48:58.593171 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-04-17 01:48:58.593201 | orchestrator | Thursday 17 April 2025 01:46:53 +0000 (0:00:00.085) 0:00:00.257 ********
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-04-17 01:48:58.593231 | orchestrator | ...ignoring 2025-04-17 01:48:58.593287 | orchestrator | 2025-04-17 01:48:58.593304 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-04-17 01:48:58.593318 | orchestrator | Thursday 17 April 2025 01:46:56 +0000 (0:00:02.600) 0:00:02.857 ******** 2025-04-17 01:48:58.593331 | orchestrator | skipping: [localhost] 2025-04-17 01:48:58.593345 | orchestrator | 2025-04-17 01:48:58.593359 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-04-17 01:48:58.593376 | orchestrator | Thursday 17 April 2025 01:46:56 +0000 (0:00:00.050) 0:00:02.908 ******** 2025-04-17 01:48:58.593399 | orchestrator | ok: [localhost] 2025-04-17 01:48:58.593421 | orchestrator | 2025-04-17 01:48:58.593444 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-17 01:48:58.593467 | orchestrator | 2025-04-17 01:48:58.593490 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-17 01:48:58.593513 | orchestrator | Thursday 17 April 2025 01:46:56 +0000 (0:00:00.115) 0:00:03.024 ******** 2025-04-17 01:48:58.593536 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:48:58.593560 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:48:58.593586 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:48:58.593608 | orchestrator | 2025-04-17 01:48:58.593632 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-17 01:48:58.593656 | orchestrator | Thursday 17 April 2025 01:46:56 +0000 (0:00:00.323) 0:00:03.347 ******** 2025-04-17 01:48:58.593680 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-04-17 01:48:58.593704 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-04-17 01:48:58.593727 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-04-17 01:48:58.593752 | orchestrator | 2025-04-17 01:48:58.593770 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-04-17 01:48:58.593784 | orchestrator | 2025-04-17 01:48:58.593798 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-17 01:48:58.593811 | orchestrator | Thursday 17 April 2025 01:46:57 +0000 (0:00:00.486) 0:00:03.833 ******** 2025-04-17 01:48:58.593826 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:48:58.593840 | orchestrator | 2025-04-17 01:48:58.593854 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-04-17 01:48:58.593868 | orchestrator | Thursday 17 April 2025 01:46:57 +0000 (0:00:00.610) 0:00:04.444 ******** 2025-04-17 01:48:58.593882 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:48:58.593920 | orchestrator | 2025-04-17 01:48:58.593934 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-04-17 01:48:58.593948 | orchestrator | Thursday 17 April 2025 01:46:58 +0000 (0:00:01.005) 0:00:05.449 ******** 2025-04-17 01:48:58.593962 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:48:58.593977 | orchestrator | 2025-04-17 01:48:58.593991 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
2025-04-17 01:48:58.593304 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-04-17 01:48:58.593318 | orchestrator | Thursday 17 April 2025 01:46:56 +0000 (0:00:02.600) 0:00:02.857 ********
2025-04-17 01:48:58.593331 | orchestrator | skipping: [localhost]
2025-04-17 01:48:58.593359 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-04-17 01:48:58.593376 | orchestrator | Thursday 17 April 2025 01:46:56 +0000 (0:00:00.050) 0:00:02.908 ********
2025-04-17 01:48:58.593399 | orchestrator | ok: [localhost]
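These two tasks select the Kolla action for RabbitMQ: "upgrade" when a broker is already reachable, otherwise the default action. A plausible sketch, assuming the hypothetical rabbitmq_service_check register from the check above and a kolla_action_ng variable holding the default action:

- name: Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running
  ansible.builtin.set_fact:
    kolla_action_rabbitmq: upgrade
  when: rabbitmq_service_check is success   # skipped in this run, the check timed out

- name: Set kolla_action_rabbitmq = kolla_action_ng
  ansible.builtin.set_fact:
    kolla_action_rabbitmq: "{{ kolla_action_ng }}"
  when: rabbitmq_service_check is failed

In this run the check timed out, so kolla_action_rabbitmq falls back to the fresh-deploy action.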
2025-04-17 01:48:58.593444 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-17 01:48:58.593490 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-17 01:48:58.593513 | orchestrator | Thursday 17 April 2025 01:46:56 +0000 (0:00:00.115) 0:00:03.024 ********
2025-04-17 01:48:58.593536 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:48:58.593560 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:48:58.593586 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:48:58.593632 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-17 01:48:58.593656 | orchestrator | Thursday 17 April 2025 01:46:56 +0000 (0:00:00.323) 0:00:03.347 ********
2025-04-17 01:48:58.593680 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-04-17 01:48:58.593704 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-04-17 01:48:58.593727 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
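The dynamic groups seen here (item=enable_rabbitmq_True) follow the usual kolla-ansible pattern of grouping hosts with group_by so that later plays can target only hosts with a given service enabled. A minimal sketch, assuming an enable_rabbitmq boolean variable:

- name: Group hosts based on enabled services
  ansible.builtin.group_by:
    key: "enable_rabbitmq_{{ enable_rabbitmq | bool }}"

This also explains the warnings later in the log: no host sets enable_outward_rabbitmq, so the pattern enable_outward_rabbitmq_True matches nothing.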
2025-04-17 01:48:58.593770 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-04-17 01:48:58.593798 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-04-17 01:48:58.593811 | orchestrator | Thursday 17 April 2025 01:46:57 +0000 (0:00:00.486) 0:00:03.833 ********
2025-04-17 01:48:58.593826 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 01:48:58.593854 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-04-17 01:48:58.593868 | orchestrator | Thursday 17 April 2025 01:46:57 +0000 (0:00:00.610) 0:00:04.444 ********
2025-04-17 01:48:58.593882 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:48:58.593934 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-04-17 01:48:58.593948 | orchestrator | Thursday 17 April 2025 01:46:58 +0000 (0:00:01.005) 0:00:05.449 ********
2025-04-17 01:48:58.593962 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:48:58.593991 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-04-17 01:48:58.594005 | orchestrator | Thursday 17 April 2025 01:46:59 +0000 (0:00:00.649) 0:00:06.099 ********
2025-04-17 01:48:58.594079 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:48:58.594113 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-04-17 01:48:58.594148 | orchestrator | Thursday 17 April 2025 01:46:59 +0000 (0:00:00.512) 0:00:06.611 ********
2025-04-17 01:48:58.594162 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:48:58.594190 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-04-17 01:48:58.594204 | orchestrator | Thursday 17 April 2025 01:47:00 +0000 (0:00:00.342) 0:00:06.954 ********
2025-04-17 01:48:58.594218 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:48:58.594282 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-04-17 01:48:58.594297 | orchestrator | Thursday 17 April 2025 01:47:00 +0000 (0:00:00.364) 0:00:07.318 ********
2025-04-17 01:48:58.594311 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 01:48:58.594339 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-04-17 01:48:58.594352 | orchestrator | Thursday 17 April 2025 01:47:01 +0000 (0:00:00.740) 0:00:08.059 ********
2025-04-17 01:48:58.594366 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:48:58.594394 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-04-17 01:48:58.594408 | orchestrator | Thursday 17 April 2025 01:47:02 +0000 (0:00:00.775) 0:00:08.834 ********
2025-04-17 01:48:58.594421 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:48:58.594449 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-04-17 01:48:58.594464 | orchestrator | Thursday 17 April 2025 01:47:02 +0000 (0:00:00.328) 0:00:09.163 ********
2025-04-17 01:48:58.594478 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:48:58.594519 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-04-17 01:48:58.594534 | orchestrator | Thursday 17 April 2025 01:47:02 +0000 (0:00:00.300) 0:00:09.464 ********
[... changed: the identical 'rabbitmq' service item (rendered in full as YAML below) is reported for testbed-node-0, testbed-node-2 and testbed-node-1 ...]
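For readability, the service item that the task above and the following config tasks loop over, rendered as YAML (content identical to the dict printed in the log; the cluster cookie is a testbed-generated secret visible in this build output):

rabbitmq:
  container_name: rabbitmq
  group: rabbitmq
  enabled: true
  image: registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206
  bootstrap_environment:
    KOLLA_BOOTSTRAP: null
    KOLLA_CONFIG_STRATEGY: COPY_ALWAYS
    RABBITMQ_CLUSTER_COOKIE: zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT
    RABBITMQ_LOG_DIR: /var/log/kolla/rabbitmq
  environment:
    KOLLA_CONFIG_STRATEGY: COPY_ALWAYS
    RABBITMQ_CLUSTER_COOKIE: zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT
    RABBITMQ_LOG_DIR: /var/log/kolla/rabbitmq
  volumes:
    - /etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - rabbitmq:/var/lib/rabbitmq/
    - kolla_logs:/var/log/kolla/
  dimensions: {}
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_rabbitmq"]
    timeout: "30"
  haproxy:
    rabbitmq_management:
      enabled: "yes"
      mode: http
      port: "15672"
      host_group: rabbitmq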
2025-04-17 01:48:58.594628 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-04-17 01:48:58.594642 | orchestrator | Thursday 17 April 2025 01:47:03 +0000 (0:00:00.919) 0:00:10.384 ********
[... changed: the identical 'rabbitmq' service item (see YAML above) for testbed-node-0, testbed-node-1 and testbed-node-2 ...]
2025-04-17 01:48:58.594739 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-04-17 01:48:58.594753 | orchestrator | Thursday 17 April 2025 01:47:05 +0000 (0:00:01.588) 0:00:11.973 ********
2025-04-17 01:48:58.594768 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-04-17 01:48:58.594782 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-04-17 01:48:58.594796 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-04-17 01:48:58.594824 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-04-17 01:48:58.594845 | orchestrator | Thursday 17 April 2025 01:47:07 +0000 (0:00:02.354) 0:00:14.328 ********
2025-04-17 01:48:58.594860 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-04-17 01:48:58.594875 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-04-17 01:48:58.594889 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-04-17 01:48:58.594916 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-04-17 01:48:58.594930 | orchestrator | Thursday 17 April 2025 01:47:10 +0000 (0:00:02.612) 0:00:16.940 ********
2025-04-17 01:48:58.594944 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-04-17 01:48:58.594958 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-04-17 01:48:58.594972 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-04-17 01:48:58.595007 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-04-17 01:48:58.595021 | orchestrator | Thursday 17 April 2025 01:47:12 +0000 (0:00:01.879) 0:00:18.819 ********
2025-04-17 01:48:58.595036 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-04-17 01:48:58.595050 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-04-17 01:48:58.595064 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-04-17 01:48:58.595092 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-04-17 01:48:58.595114 | orchestrator | Thursday 17 April 2025 01:47:14 +0000 (0:00:01.982) 0:00:20.801 ********
2025-04-17 01:48:58.595129 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-04-17 01:48:58.595143 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-04-17 01:48:58.595156 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-04-17 01:48:58.595185 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-04-17 01:48:58.595199 | orchestrator | Thursday 17 April 2025 01:47:15 +0000 (0:00:01.482) 0:00:22.284 ********
2025-04-17 01:48:58.595213 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-04-17 01:48:58.595227 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-04-17 01:48:58.595285 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-04-17 01:48:58.595317 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-04-17 01:48:58.595337 | orchestrator | Thursday 17 April 2025 01:47:16 +0000 (0:00:01.394) 0:00:23.679 ********
2025-04-17 01:48:58.595351 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:48:58.595365 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:48:58.595380 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:48:58.595408 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-04-17 01:48:58.595422 | orchestrator | Thursday 17 April 2025 01:47:17 +0000 (0:00:00.711) 0:00:24.390 ********
[... changed: the identical 'rabbitmq' service item (see YAML above) for testbed-node-0, testbed-node-1 and testbed-node-2 ...]
2025-04-17 01:48:58.595518 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-04-17 01:48:58.595532 | orchestrator | Thursday 17 April 2025 01:47:19 +0000 (0:00:01.499) 0:00:25.889 ********
2025-04-17 01:48:58.595546 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:48:58.595559 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:48:58.595573 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:48:58.595601 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-04-17 01:48:58.595615 | orchestrator | Thursday 17 April 2025 01:47:20 +0000 (0:00:00.899) 0:00:26.789 ********
2025-04-17 01:48:58.595628 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:48:58.595642 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:48:58.595656 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:48:58.595683 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-04-17 01:48:58.595697 | orchestrator | Thursday 17 April 2025 01:47:25 +0000 (0:00:05.410) 0:00:32.200 ********
2025-04-17 01:48:58.595710 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:48:58.595724 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:48:58.595737 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:48:58.595765 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-04-17 01:48:58.595792 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-04-17 01:48:58.595806 | orchestrator | Thursday 17 April 2025 01:47:25 +0000 (0:00:00.265) 0:00:32.465 ********
2025-04-17 01:48:58.595820 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:48:58.595847 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-04-17 01:48:58.595861 | orchestrator | Thursday 17 April 2025 01:47:26 +0000 (0:00:00.687) 0:00:33.152 ********
2025-04-17 01:48:58.595874 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:48:58.595902 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-04-17 01:48:58.595916 | orchestrator | Thursday 17 April 2025 01:47:26 +0000 (0:00:00.247) 0:00:33.399 ********
2025-04-17 01:48:58.595930 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:48:58.595959 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-04-17 01:48:58.595973 | orchestrator | Thursday 17 April 2025 01:47:28 +0000 (0:00:01.679) 0:00:35.079 ********
2025-04-17 01:48:58.595987 | orchestrator | changed: [testbed-node-0]
[... the same "Restart rabbitmq services" play then runs serially for testbed-node-1 (restart at 01:48:22, after 0:00:52.755 of waiting on testbed-node-0) and for testbed-node-2 (restart at 01:48:38, after 0:00:13.753 of waiting on testbed-node-1); maintenance mode is skipped on each node ...]
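The nodes are restarted one at a time: query the container, optionally drain the node (maintenance mode is skipped on a fresh deploy), restart, then block until the broker answers. That blocking step dominates the recap below (~53 s on testbed-node-0 alone). A hedged sketch of such a wait, assuming the readiness check is done with rabbitmqctl inside the container (the real kolla-ansible task may use a different command entirely):

- name: Waiting for rabbitmq to start   # sketch, assumed mechanism
  ansible.builtin.command: docker exec rabbitmq rabbitmqctl await_startup
  register: result
  until: result is success
  retries: 30
  delay: 5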
2025-04-17 01:48:58.596696 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-04-17 01:48:58.596724 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-04-17 01:48:58.596738 | orchestrator | Thursday 17 April 2025 01:48:53 +0000 (0:00:13.453) 0:02:00.277 ********
2025-04-17 01:48:58.596752 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 01:48:58.596779 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-04-17 01:48:58.596793 | orchestrator | Thursday 17 April 2025 01:48:54 +0000 (0:00:00.460) 0:02:00.738 ********
2025-04-17 01:48:58.596806 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_outward_rabbitmq_True
2025-04-17 01:48:58.596841 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: outward_rabbitmq_restart
2025-04-17 01:48:58.596869 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:48:58.596883 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:48:58.596897 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:48:58.596924 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-04-17 01:48:58.596938 | orchestrator | skipping: no hosts matched
2025-04-17 01:48:58.596983 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-04-17 01:48:58.596998 | orchestrator | skipping: no hosts matched
2025-04-17 01:48:58.597026 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-04-17 01:48:58.597040 | orchestrator | skipping: no hosts matched
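Enabling all stable feature flags after a deploy keeps the cluster upgrade-ready, since RabbitMQ refuses version upgrades while known stable flags are still disabled. A hedged sketch of the post-deploy task above, assuming it shells out to rabbitmqctl in the container (the actual implementation may differ):

- name: Enable all stable feature flags   # sketch, assumed mechanism
  ansible.builtin.command: docker exec rabbitmq rabbitmqctl enable_feature_flag all
  changed_when: false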
2025-04-17 01:48:58.597067 | orchestrator | PLAY RECAP *********************************************************************
2025-04-17 01:48:58.597082 | orchestrator | localhost      : ok=3   changed=0   unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-04-17 01:48:58.597097 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-04-17 01:48:58.597111 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-17 01:48:58.597125 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-04-17 01:48:58.597166 | orchestrator | TASKS RECAP ********************************************************************
2025-04-17 01:48:58.597180 | orchestrator | Thursday 17 April 2025 01:48:56 +0000 (0:00:02.742) 0:02:03.480 ********
2025-04-17 01:48:58.597194 | orchestrator | ===============================================================================
2025-04-17 01:48:58.597207 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.96s
2025-04-17 01:48:58.597221 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 5.41s
2025-04-17 01:48:58.597235 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.15s
2025-04-17 01:48:58.597269 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.74s
2025-04-17 01:48:58.597283 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.61s
2025-04-17 01:48:58.597297 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.60s
2025-04-17 01:48:58.597311 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.35s
2025-04-17 01:48:58.597325 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.98s
2025-04-17 01:48:58.597339 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.88s
2025-04-17 01:48:58.597353 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.87s
2025-04-17 01:48:58.597366 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.59s
2025-04-17 01:48:58.597380 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.50s
2025-04-17 01:48:58.597394 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.48s
2025-04-17 01:48:58.597408 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.39s
2025-04-17 01:48:58.597422 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.01s
2025-04-17 01:48:58.597441 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.92s
2025-04-17 01:48:58.597455 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.90s
2025-04-17 01:48:58.597469 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.83s
2025-04-17 01:48:58.597483 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.78s
2025-04-17 01:48:58.597497 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.74s
2025-04-17 01:48:58.597517 | orchestrator | 2025-04-17 01:48:58 | INFO  | Task 1bc4fa3e-53f6-42a8-bce6-33bc25385914 is in state STARTED
[... identical status polling resumes every ~3 s from 01:48:58 to 01:49:53: tasks f18b46f3-1ede-402c-be17-6b8a3a0b04b7, e0b8709f-1bcf-4f73-b727-9acc58049e77, ba7138ae-b021-4a6b-8f0f-c760e6c91a8c and 1bc4fa3e-53f6-42a8-bce6-33bc25385914 remain in state STARTED, each round followed by "Wait 1 second(s) until the next check" ...]
2025-04-17 01:49:56.573403 | orchestrator | 2025-04-17 01:49:56 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED
2025-04-17 01:49:56.575302 | orchestrator | 2025-04-17 01:49:56 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:49:56.577927 | orchestrator | 2025-04-17 01:49:56 | INFO  | Task ba7138ae-b021-4a6b-8f0f-c760e6c91a8c is in state SUCCESS
2025-04-17 01:49:56.579563 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-17 01:49:56.579604 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-17 01:49:56.579619 | orchestrator | Thursday 17 April 2025 01:47:45 +0000 (0:00:00.182) 0:00:00.182 ********
2025-04-17 01:49:56.579633 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:49:56.579649 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:49:56.579663 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:49:56.579677 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:49:56.579690 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:49:56.579712 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:49:56.579741 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-17 01:49:56.579755 | orchestrator | Thursday 17 April 2025 01:47:46 +0000 (0:00:00.521) 0:00:00.703 ********
2025-04-17 01:49:56.579769 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-04-17 01:49:56.579783 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-04-17 01:49:56.579797 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-04-17 01:49:56.579811 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-04-17 01:49:56.579824 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-04-17 01:49:56.579838 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-04-17 01:49:56.579866 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-04-17 01:49:56.579940 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-04-17 01:49:56.579955 | orchestrator | Thursday 17 April 2025 01:47:47 +0000 (0:00:01.023) 0:00:01.727 ********
2025-04-17 01:49:56.579970 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-17 01:49:56.580107 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-04-17 01:49:56.580123 | orchestrator | Thursday 17 April 2025 01:47:48 +0000 (0:00:01.587) 0:00:03.314 ********
2025-04-17 01:49:56.580140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
[... changed: the identical 'ovn-controller' item for testbed-node-0, testbed-node-2, testbed-node-3, testbed-node-4 and testbed-node-5 ...]
2025-04-17 01:49:56.580396 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-04-17 01:49:56.580443 | orchestrator | Thursday 17 April 2025 01:47:50 +0000 (0:00:02.233) 0:00:04.748 ********
[... changed: the identical 'ovn-controller' item for testbed-node-0 through testbed-node-5 ...]
2025-04-17 01:49:56.580578 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-04-17 01:49:56.580592 | orchestrator | Thursday 17 April 2025 01:47:52 +0000 (0:00:01.560) 0:00:06.982 ********
[... changed: the identical 'ovn-controller' item for testbed-node-0 through testbed-node-5 ...]
2025-04-17 01:49:56.580729 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-04-17 01:49:56.580743 | orchestrator | Thursday 17 April 2025 01:47:54 +0000 (0:00:01.560) 0:00:08.543 ********
[... changed: the identical 'ovn-controller' item for testbed-node-0, testbed-node-1, testbed-node-2 and testbed-node-3 ...]
2025-04-17 01:49:56.580813 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.580840 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.580855 | orchestrator | 2025-04-17 01:49:56.580870 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-04-17 01:49:56.580883 | orchestrator | Thursday 17 April 2025 01:47:55 +0000 (0:00:01.686) 0:00:10.230 ******** 2025-04-17 01:49:56.580897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.580919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.580934 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.580947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.580961 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.580975 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-17 01:49:56.580989 | orchestrator |
2025-04-17 01:49:56.581003 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-04-17 01:49:56.581017 | orchestrator | Thursday 17 April 2025 01:47:57 +0000 (0:00:02.641) 0:00:11.516 ********
2025-04-17 01:49:56.581031 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:49:56.581046 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:49:56.581060 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:49:56.581074 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:49:56.581088 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:49:56.581101 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:49:56.581116 | orchestrator |
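Note: the "Create br-int bridge on OpenvSwitch" step is an idempotent bridge creation against the local Open vSwitch instance of every node. A rough manual equivalent would be the following sketch (the openvswitch_vswitchd container name follows kolla's naming convention, but the exact flags kolla-ansible passes are an assumption):

  # create the OVN integration bridge only if it does not exist yet
  docker exec openvswitch_vswitchd ovs-vsctl --may-exist add-br br-int

ovn-controller takes ownership of br-int once it starts; the Geneve tunnel ports towards the other chassis are created automatically from the encapsulation settings written in the next task.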
2025-04-17 01:49:56.581129 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-04-17 01:49:56.581143 | orchestrator | Thursday 17 April 2025 01:47:59 +0000 (0:00:02.641) 0:00:14.158 ********
2025-04-17 01:49:56.581157 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-04-17 01:49:56.581171 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-04-17 01:49:56.581185 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-04-17 01:49:56.581224 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-04-17 01:49:56.581238 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-04-17 01:49:56.581252 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-04-17 01:49:56.581266 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-04-17 01:49:56.581289 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-04-17 01:49:56.581303 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-04-17 01:49:56.581317 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-04-17 01:49:56.581330 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-04-17 01:49:56.581351 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-04-17 01:49:56.581365 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-04-17 01:49:56.581382 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-04-17 01:49:56.581396 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-04-17 01:49:56.581410 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-04-17 01:49:56.581424 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-04-17 01:49:56.581438 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-04-17 01:49:56.581452 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-04-17 01:49:56.581466 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-04-17 01:49:56.581480 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-04-17 01:49:56.581494 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-04-17 01:49:56.581507 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-04-17 01:49:56.581521 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-04-17 01:49:56.581535 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-04-17 01:49:56.581549 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-04-17 01:49:56.581563 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-04-17 01:49:56.581576 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-04-17 01:49:56.581590 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-04-17 01:49:56.581604 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-04-17 01:49:56.581617 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-04-17 01:49:56.581631 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-04-17 01:49:56.581645 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-04-17 01:49:56.581658 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-04-17 01:49:56.581672 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-04-17 01:49:56.581686 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-04-17 01:49:56.581700 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-04-17 01:49:56.581721 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-04-17 01:49:56.581735 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-04-17 01:49:56.581749 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-04-17 01:49:56.581769 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-04-17 01:49:56.581783 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-04-17 01:49:56.581797 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-04-17 01:49:56.581811 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-04-17 01:49:56.581825 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-04-17 01:49:56.581839 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-04-17 01:49:56.581853 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-04-17 01:49:56.581867 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-04-17 01:49:56.581881 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-04-17 01:49:56.581895 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-04-17 01:49:56.581909 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-04-17 01:49:56.581923 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-04-17 01:49:56.581937 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-04-17 01:49:56.581951 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
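Note: each item of the "Configure OVN in OVSDB" task corresponds to one external_ids key in the local Open_vSwitch table, which ovn-controller reads at startup. For testbed-node-0 the equivalent one-shot configuration would look roughly like this (a sketch using the values from the log above; running it through the openvswitch_vswitchd container is an assumption):

  ovs-vsctl set open_vswitch . \
      external_ids:ovn-encap-ip=192.168.16.10 \
      external_ids:ovn-encap-type=geneve \
      external_ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642" \
      external_ids:ovn-remote-probe-interval=60000 \
      external_ids:ovn-openflow-probe-interval=60 \
      external_ids:ovn-monitor-all=false \
      external_ids:ovn-bridge-mappings=physnet1:br-ex \
      external_ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=nova"

The 'state': 'present'/'absent' pairs above show the intended split: ovn-bridge-mappings and ovn-cms-options (the gateway-chassis role) are only set on testbed-node-0/1/2 and removed elsewhere, while ovn-chassis-mac-mappings is only set on testbed-node-3/4/5.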
2025-04-17 01:49:56.581964 | orchestrator |
2025-04-17 01:49:56.581979 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-04-17 01:49:56.581992 | orchestrator | Thursday 17 April 2025 01:48:18 +0000 (0:00:18.484) 0:00:32.642 ********
2025-04-17 01:49:56.582006 | orchestrator |
2025-04-17 01:49:56.582075 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-04-17 01:49:56.582092 | orchestrator | Thursday 17 April 2025 01:48:18 +0000 (0:00:00.053) 0:00:32.696 ********
2025-04-17 01:49:56.582106 | orchestrator |
2025-04-17 01:49:56.582120 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-04-17 01:49:56.582133 | orchestrator | Thursday 17 April 2025 01:48:18 +0000 (0:00:00.290) 0:00:32.986 ********
2025-04-17 01:49:56.582147 | orchestrator |
2025-04-17 01:49:56.582161 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-04-17 01:49:56.582174 | orchestrator | Thursday 17 April 2025 01:48:18 +0000 (0:00:00.095) 0:00:33.082 ********
2025-04-17 01:49:56.582188 | orchestrator |
2025-04-17 01:49:56.582218 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-04-17 01:49:56.582232 | orchestrator | Thursday 17 April 2025 01:48:18 +0000 (0:00:00.071) 0:00:33.154 ********
2025-04-17 01:49:56.582246 | orchestrator |
2025-04-17 01:49:56.582260 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-04-17 01:49:56.582281 | orchestrator | Thursday 17 April 2025 01:48:18 +0000 (0:00:00.115) 0:00:33.269 ********
2025-04-17 01:49:56.582295 | orchestrator |
2025-04-17 01:49:56.582309 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-04-17 01:49:56.582323 | orchestrator | Thursday 17 April 2025 01:48:19 +0000 (0:00:00.555) 0:00:33.824 ********
2025-04-17 01:49:56.582337 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:49:56.582350 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:49:56.582364 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:49:56.582378 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:49:56.582391 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:49:56.582405 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:49:56.582418 | orchestrator |
2025-04-17 01:49:56.582432 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-04-17 01:49:56.582446 | orchestrator | Thursday 17 April 2025 01:48:21 +0000 (0:00:01.849) 0:00:35.674 ********
2025-04-17 01:49:56.582460 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:49:56.582473 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:49:56.582487 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:49:56.582500 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:49:56.582514 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:49:56.582528 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:49:56.582542 | orchestrator |
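Note: after the restart handler has recreated the ovn_controller containers, a quick sanity check on any node would be (a sketch; the container name comes from the item dicts above, the expected log wording is illustrative):

  docker ps --filter name=ovn_controller --format '{{.Names}}: {{.Status}}'
  docker logs --tail 20 ovn_controller    # expect a successful connection to the southbound DB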
2025-04-17 01:49:56.582556 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-04-17 01:49:56.582570 | orchestrator |
2025-04-17 01:49:56.582584 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-04-17 01:49:56.582597 | orchestrator | Thursday 17 April 2025 01:48:39 +0000 (0:00:18.017) 0:00:53.691 ********
2025-04-17 01:49:56.582611 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 01:49:56.582624 | orchestrator |
2025-04-17 01:49:56.582638 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-04-17 01:49:56.582652 | orchestrator | Thursday 17 April 2025 01:48:39 +0000 (0:00:00.767) 0:00:54.458 ********
2025-04-17 01:49:56.582665 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 01:49:56.582679 | orchestrator |
2025-04-17 01:49:56.582700 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-04-17 01:49:56.582714 | orchestrator | Thursday 17 April 2025 01:48:41 +0000 (0:00:01.115) 0:00:55.574 ********
2025-04-17 01:49:56.582728 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:49:56.582742 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:49:56.582756 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:49:56.582769 | orchestrator |
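Note: this volume lookup is what decides between bootstrapping a new RAFT cluster and joining an existing one; it effectively checks whether the named Docker volumes from the item dicts further down are already present, e.g. (sketch):

  docker volume inspect ovn_nb_db ovn_sb_db   # a missing volume means: bootstrap a new cluster

On this freshly deployed testbed neither volume exists yet, which is why the liveness, database-information and leader/follower checks below are all skipped and bootstrap-initial.yml is included afterwards.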
2025-04-17 01:49:56.582783 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-04-17 01:49:56.582803 | orchestrator | Thursday 17 April 2025 01:48:42 +0000 (0:00:00.943) 0:00:56.517 ********
2025-04-17 01:49:56.582817 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:49:56.582831 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:49:56.582845 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:49:56.582858 | orchestrator |
2025-04-17 01:49:56.582872 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-04-17 01:49:56.582886 | orchestrator | Thursday 17 April 2025 01:48:42 +0000 (0:00:00.357) 0:00:56.874 ********
2025-04-17 01:49:56.582899 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:49:56.582913 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:49:56.582926 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:49:56.582940 | orchestrator |
2025-04-17 01:49:56.582954 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-04-17 01:49:56.582967 | orchestrator | Thursday 17 April 2025 01:48:42 +0000 (0:00:00.323) 0:00:57.198 ********
2025-04-17 01:49:56.582981 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:49:56.582995 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:49:56.583008 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:49:56.583030 | orchestrator |
2025-04-17 01:49:56.583044 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-04-17 01:49:56.583058 | orchestrator | Thursday 17 April 2025 01:48:43 +0000 (0:00:00.326) 0:00:57.525 ********
2025-04-17 01:49:56.583071 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:49:56.583085 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:49:56.583098 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:49:56.583111 | orchestrator |
2025-04-17 01:49:56.583125 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-04-17 01:49:56.583139 | orchestrator | Thursday 17 April 2025 01:48:43 +0000 (0:00:00.260) 0:00:57.786 ********
2025-04-17 01:49:56.583152 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.583166 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.583180 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.583218 | orchestrator |
2025-04-17 01:49:56.583233 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-04-17 01:49:56.583247 | orchestrator | Thursday 17 April 2025 01:48:43 +0000 (0:00:00.358) 0:00:58.144 ********
2025-04-17 01:49:56.583261 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.583274 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.583295 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.583308 | orchestrator |
2025-04-17 01:49:56.583322 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-04-17 01:49:56.583336 | orchestrator | Thursday 17 April 2025 01:48:43 +0000 (0:00:00.271) 0:00:58.415 ********
2025-04-17 01:49:56.583350 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.583363 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.583377 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.583391 | orchestrator |
2025-04-17 01:49:56.583405 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-04-17 01:49:56.583419 | orchestrator | Thursday 17 April 2025 01:48:44 +0000 (0:00:00.314) 0:00:58.730 ********
2025-04-17 01:49:56.583433 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.583446 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.583460 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.583474 | orchestrator |
2025-04-17 01:49:56.583488 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-04-17 01:49:56.583501 | orchestrator | Thursday 17 April 2025 01:48:44 +0000 (0:00:00.233) 0:00:58.963 ********
2025-04-17 01:49:56.583515 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.583528 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.583542 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.583555 | orchestrator |
2025-04-17 01:49:56.583569 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-04-17 01:49:56.583583 | orchestrator | Thursday 17 April 2025 01:48:44 +0000 (0:00:00.346) 0:00:59.309 ********
2025-04-17 01:49:56.583597 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.583610 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.583624 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.583637 | orchestrator |
2025-04-17 01:49:56.583651 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-04-17 01:49:56.583665 | orchestrator | Thursday 17 April 2025 01:48:45 +0000 (0:00:00.282) 0:00:59.592 ********
2025-04-17 01:49:56.583678 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.583692 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.583706 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.583719 | orchestrator |
2025-04-17 01:49:56.583733 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-04-17 01:49:56.583746 | orchestrator | Thursday 17 April 2025 01:48:45 +0000 (0:00:00.298) 0:00:59.890 ********
2025-04-17 01:49:56.583760 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.583773 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.583787 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.583800 | orchestrator |
2025-04-17 01:49:56.583831 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-04-17 01:49:56.583845 | orchestrator | Thursday 17 April 2025 01:48:45 +0000 (0:00:00.226) 0:01:00.117 ********
2025-04-17 01:49:56.583859 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.583872 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.583886 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.583899 | orchestrator |
2025-04-17 01:49:56.583913 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-04-17 01:49:56.583927 | orchestrator | Thursday 17 April 2025 01:48:45 +0000 (0:00:00.316) 0:01:00.433 ********
2025-04-17 01:49:56.583940 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.583954 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.583967 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.583981 | orchestrator |
2025-04-17 01:49:56.584002 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-04-17 01:49:56.584016 | orchestrator | Thursday 17 April 2025 01:48:46 +0000 (0:00:00.345) 0:01:00.778 ********
2025-04-17 01:49:56.584029 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.584043 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.584056 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.584070 | orchestrator |
2025-04-17 01:49:56.584083 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-04-17 01:49:56.584097 | orchestrator | Thursday 17 April 2025 01:48:46 +0000 (0:00:00.326) 0:01:01.105 ********
2025-04-17 01:49:56.584110 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.584124 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.584137 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.584151 | orchestrator |
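Note: with no pre-existing cluster found, the bootstrap path below initializes ovn-nb-db and ovn-sb-db as new three-node RAFT clusters. Once the containers are up, the cluster state can be inspected through the ovsdb-server control socket, roughly like this (a sketch; the socket paths inside kolla's containers are an assumption):

  docker exec ovn_nb_db ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
  docker exec ovn_sb_db ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound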
2025-04-17 01:49:56.584164 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-04-17 01:49:56.584183 | orchestrator | Thursday 17 April 2025 01:48:46 +0000 (0:00:00.223) 0:01:01.328 ********
2025-04-17 01:49:56.584215 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 01:49:56.584230 | orchestrator |
2025-04-17 01:49:56.584244 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-04-17 01:49:56.584257 | orchestrator | Thursday 17 April 2025 01:48:47 +0000 (0:00:00.610) 0:01:01.939 ********
2025-04-17 01:49:56.584271 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:49:56.584285 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:49:56.584298 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:49:56.584312 | orchestrator |
2025-04-17 01:49:56.584326 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-04-17 01:49:56.584339 | orchestrator | Thursday 17 April 2025 01:48:47 +0000 (0:00:00.477) 0:01:02.416 ********
2025-04-17 01:49:56.584353 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:49:56.584366 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:49:56.584380 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:49:56.584393 | orchestrator |
2025-04-17 01:49:56.584407 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-04-17 01:49:56.584420 | orchestrator | Thursday 17 April 2025 01:48:48 +0000 (0:00:00.810) 0:01:03.226 ********
2025-04-17 01:49:56.584434 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.584447 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.584460 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.584474 | orchestrator |
2025-04-17 01:49:56.584487 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-04-17 01:49:56.584501 | orchestrator | Thursday 17 April 2025 01:48:49 +0000 (0:00:00.423) 0:01:03.650 ********
2025-04-17 01:49:56.584514 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.584527 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.584541 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.584554 | orchestrator |
2025-04-17 01:49:56.584568 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-04-17 01:49:56.584589 | orchestrator | Thursday 17 April 2025 01:48:49 +0000 (0:00:00.409) 0:01:04.060 ********
2025-04-17 01:49:56.584603 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.584616 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.584629 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.584643 | orchestrator |
2025-04-17 01:49:56.584656 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-04-17 01:49:56.584670 | orchestrator | Thursday 17 April 2025 01:48:49 +0000 (0:00:00.302) 0:01:04.362 ********
2025-04-17 01:49:56.584684 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.584697 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.584710 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.584724 | orchestrator |
2025-04-17 01:49:56.584737 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-04-17 01:49:56.584751 | orchestrator | Thursday 17 April 2025 01:48:50 +0000 (0:00:00.377) 0:01:04.740 ********
2025-04-17 01:49:56.584764 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.584778 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.584797 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.584810 | orchestrator |
2025-04-17 01:49:56.584824 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-04-17 01:49:56.584837 | orchestrator | Thursday 17 April 2025 01:48:50 +0000 (0:00:00.326) 0:01:05.067 ********
2025-04-17 01:49:56.584851 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.584865 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.584878 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.584891 | orchestrator |
2025-04-17 01:49:56.584905 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-04-17 01:49:56.584919 | orchestrator | Thursday 17 April 2025 01:48:50 +0000 (0:00:00.376) 0:01:05.443 ********
2025-04-17 01:49:56.584933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-17 01:49:56.584948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-17 01:49:56.584969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-17 01:49:56.584985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-17 01:49:56.585005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585082 | orchestrator | 2025-04-17 01:49:56.585096 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-04-17 01:49:56.585110 | orchestrator | Thursday 17 April 2025 01:48:52 +0000 (0:00:01.394) 0:01:06.837 ******** 2025-04-17 01:49:56.585124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585172 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585288 | orchestrator | 2025-04-17 01:49:56.585302 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-04-17 01:49:56.585316 | orchestrator | Thursday 17 April 2025 01:48:56 +0000 (0:00:03.810) 0:01:10.647 ******** 2025-04-17 01:49:56.585330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.585539 | orchestrator | 2025-04-17 01:49:56.585553 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 
2025-04-17 01:49:56.585567 | orchestrator | Thursday 17 April 2025 01:48:58 +0000 (0:00:02.700) 0:01:13.348 ********
2025-04-17 01:49:56.585581 | orchestrator |
2025-04-17 01:49:56.585595 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-04-17 01:49:56.585608 | orchestrator | Thursday 17 April 2025 01:48:58 +0000 (0:00:00.075) 0:01:13.423 ********
2025-04-17 01:49:56.585622 | orchestrator |
2025-04-17 01:49:56.585636 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-04-17 01:49:56.585649 | orchestrator | Thursday 17 April 2025 01:48:59 +0000 (0:00:00.082) 0:01:13.505 ********
2025-04-17 01:49:56.585663 | orchestrator |
2025-04-17 01:49:56.585676 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-04-17 01:49:56.585690 | orchestrator | Thursday 17 April 2025 01:48:59 +0000 (0:00:00.421) 0:01:13.926 ********
2025-04-17 01:49:56.585703 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:49:56.585717 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:49:56.585731 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:49:56.585744 | orchestrator |
2025-04-17 01:49:56.585758 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-04-17 01:49:56.585777 | orchestrator | Thursday 17 April 2025 01:49:06 +0000 (0:00:06.646) 0:01:20.573 ********
2025-04-17 01:49:56.585791 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:49:56.585805 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:49:56.585818 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:49:56.585832 | orchestrator |
2025-04-17 01:49:56.585846 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-04-17 01:49:56.585859 | orchestrator | Thursday 17 April 2025 01:49:13 +0000 (0:00:07.552) 0:01:28.125 ********
2025-04-17 01:49:56.585873 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:49:56.585886 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:49:56.585900 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:49:56.585914 | orchestrator |
2025-04-17 01:49:56.585935 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-04-17 01:49:56.585949 | orchestrator | Thursday 17 April 2025 01:49:16 +0000 (0:00:02.702) 0:01:30.827 ********
2025-04-17 01:49:56.585963 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:49:56.585977 | orchestrator |
2025-04-17 01:49:56.585990 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-04-17 01:49:56.586004 | orchestrator | Thursday 17 April 2025 01:49:16 +0000 (0:00:00.117) 0:01:30.945 ********
2025-04-17 01:49:56.586045 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:49:56.586061 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:49:56.586075 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:49:56.586088 | orchestrator |
2025-04-17 01:49:56.586110 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-04-17 01:49:56.586124 | orchestrator | Thursday 17 April 2025 01:49:17 +0000 (0:00:00.974) 0:01:31.920 ********
2025-04-17 01:49:56.586138 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.586152 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.586166 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:49:56.586283 | orchestrator |
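Note: the connection settings are applied only on the current cluster leader ("changed" on testbed-node-0, "skipping" on the followers), since the change replicates through RAFT. In plain ovn-nbctl/ovn-sbctl terms this is roughly (a sketch; kolla's exact listener address and inactivity-probe options are assumptions):

  # make the NB/SB databases listen for TCP connections (6641/6642)
  docker exec ovn_nb_db ovn-nbctl set-connection ptcp:6641:0.0.0.0
  docker exec ovn_sb_db ovn-sbctl set-connection ptcp:6642:0.0.0.0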
2025-04-17 01:49:56.586304 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-04-17 01:49:56.586318 | orchestrator | Thursday 17 April 2025 01:49:18 +0000 (0:00:00.622) 0:01:32.542 ********
2025-04-17 01:49:56.586332 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:49:56.586346 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:49:56.586360 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:49:56.586373 | orchestrator |
2025-04-17 01:49:56.586387 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-04-17 01:49:56.586401 | orchestrator | Thursday 17 April 2025 01:49:18 +0000 (0:00:00.918) 0:01:33.461 ********
2025-04-17 01:49:56.586415 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:49:56.586428 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:49:56.586442 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:49:56.586455 | orchestrator |
2025-04-17 01:49:56.586469 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-04-17 01:49:56.586482 | orchestrator | Thursday 17 April 2025 01:49:19 +0000 (0:00:00.587) 0:01:34.048 ********
2025-04-17 01:49:56.586496 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:49:56.586510 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:49:56.586523 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:49:56.586537 | orchestrator |
2025-04-17 01:49:56.586551 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-04-17 01:49:56.586564 | orchestrator | Thursday 17 April 2025 01:49:20 +0000 (0:00:00.979) 0:01:35.027 ********
2025-04-17 01:49:56.586578 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:49:56.586591 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:49:56.586604 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:49:56.586616 | orchestrator |
2025-04-17 01:49:56.586628 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-04-17 01:49:56.586640 | orchestrator | Thursday 17 April 2025 01:49:21 +0000 (0:00:00.685) 0:01:35.712 ********
2025-04-17 01:49:56.586652 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:49:56.586664 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:49:56.586676 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:49:56.586688 | orchestrator |
2025-04-17 01:49:56.586700 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-04-17 01:49:56.586713 | orchestrator | Thursday 17 April 2025 01:49:21 +0000 (0:00:00.401) 0:01:36.114 ********
2025-04-17 01:49:56.586725 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-17 01:49:56.586738 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-17 01:49:56.586759 | orchestrator | ok: [testbed-node-0] =>
(item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.586772 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.586785 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.586797 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.586817 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.586830 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.586843 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.586855 | orchestrator | 2025-04-17 01:49:56.586868 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-04-17 01:49:56.586880 | orchestrator | Thursday 17 April 2025 01:49:23 +0000 (0:00:01.513) 0:01:37.627 ******** 2025-04-17 01:49:56.586893 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.586905 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.586925 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.586943 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.586956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.586968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.586991 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.587004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.587016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.587029 | orchestrator | 2025-04-17 01:49:56.587041 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-04-17 01:49:56.587054 | orchestrator | Thursday 17 April 2025 01:49:27 +0000 (0:00:04.621) 0:01:42.248 ******** 2025-04-17 01:49:56.587066 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.587089 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.587102 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.587115 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.587133 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.587145 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.587162 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.587182 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.587236 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-17 01:49:56.587251 | orchestrator | 2025-04-17 01:49:56.587264 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-17 01:49:56.587276 | orchestrator | Thursday 17 April 2025 01:49:30 +0000 (0:00:02.956) 0:01:45.204 ******** 2025-04-17 01:49:56.587289 | orchestrator | 2025-04-17 01:49:56.587301 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-17 01:49:56.587313 | orchestrator | Thursday 17 April 2025 01:49:30 +0000 (0:00:00.215) 0:01:45.419 ******** 2025-04-17 01:49:56.587326 | orchestrator | 2025-04-17 01:49:56.587338 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-17 01:49:56.587357 | orchestrator | Thursday 17 April 2025 01:49:31 +0000 (0:00:00.070) 0:01:45.490 ******** 2025-04-17 01:49:56.587369 | orchestrator | 2025-04-17 01:49:56.587381 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-04-17 01:49:56.587394 | orchestrator | Thursday 17 April 2025 01:49:31 +0000 (0:00:00.088) 0:01:45.579 ******** 2025-04-17 01:49:56.587405 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:49:56.587418 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:49:56.587430 | orchestrator | 2025-04-17 01:49:56.587443 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-04-17 01:49:56.587455 | orchestrator | Thursday 17 April 2025 01:49:37 +0000 (0:00:06.494) 0:01:52.073 ******** 2025-04-17 01:49:56.587467 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:49:56.587479 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:49:56.587492 | orchestrator | 2025-04-17 01:49:56.587504 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-04-17 01:49:56.587517 | orchestrator | Thursday 17 April 2025 01:49:43 +0000 (0:00:06.375) 0:01:58.449 ******** 2025-04-17 01:49:56.587529 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:49:56.587541 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:49:56.587553 | orchestrator | 2025-04-17 01:49:56.587565 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-04-17 01:49:56.587578 | orchestrator | Thursday 17 April 2025 01:49:50 +0000 (0:00:06.393) 0:02:04.843 ******** 2025-04-17 01:49:56.587590 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:49:56.587602 | orchestrator | 2025-04-17 01:49:56.587614 | orchestrator | TASK [ovn-db : Get 
OVN_Northbound cluster leader] ****************************** 2025-04-17 01:49:56.587626 | orchestrator | Thursday 17 April 2025 01:49:50 +0000 (0:00:00.216) 0:02:05.060 ******** 2025-04-17 01:49:56.587638 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:49:56.587650 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:49:56.587662 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:49:56.587674 | orchestrator | 2025-04-17 01:49:56.587686 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-04-17 01:49:56.587699 | orchestrator | Thursday 17 April 2025 01:49:51 +0000 (0:00:00.681) 0:02:05.742 ******** 2025-04-17 01:49:56.587711 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:49:56.587723 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:49:56.587735 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:49:56.587747 | orchestrator | 2025-04-17 01:49:56.587759 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-04-17 01:49:56.587768 | orchestrator | Thursday 17 April 2025 01:49:51 +0000 (0:00:00.629) 0:02:06.371 ******** 2025-04-17 01:49:56.587778 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:49:56.587788 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:49:56.587798 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:49:56.587809 | orchestrator | 2025-04-17 01:49:56.587819 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-04-17 01:49:56.587829 | orchestrator | Thursday 17 April 2025 01:49:52 +0000 (0:00:00.843) 0:02:07.215 ******** 2025-04-17 01:49:56.587839 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:49:56.587857 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:49:56.587868 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:49:56.587879 | orchestrator | 2025-04-17 01:49:56.587888 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-04-17 01:49:56.587903 | orchestrator | Thursday 17 April 2025 01:49:53 +0000 (0:00:00.722) 0:02:07.937 ******** 2025-04-17 01:49:56.587913 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:49:56.587923 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:49:56.587933 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:49:56.587943 | orchestrator | 2025-04-17 01:49:56.587953 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-04-17 01:49:56.587963 | orchestrator | Thursday 17 April 2025 01:49:54 +0000 (0:00:00.697) 0:02:08.634 ******** 2025-04-17 01:49:56.587979 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:49:56.587989 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:49:56.587999 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:49:56.588009 | orchestrator | 2025-04-17 01:49:56.588019 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 01:49:56.588029 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-04-17 01:49:56.588040 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-04-17 01:49:56.588055 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-04-17 01:49:59.622461 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:49:59.622647 | 
orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:49:59.622668 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-17 01:49:59.622683 | orchestrator | 2025-04-17 01:49:59.622698 | orchestrator | 2025-04-17 01:49:59.622713 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-17 01:49:59.622737 | orchestrator | Thursday 17 April 2025 01:49:55 +0000 (0:00:01.222) 0:02:09.857 ******** 2025-04-17 01:49:59.622763 | orchestrator | =============================================================================== 2025-04-17 01:49:59.622787 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.48s 2025-04-17 01:49:59.622814 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 18.02s 2025-04-17 01:49:59.622840 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.93s 2025-04-17 01:49:59.622862 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.14s 2025-04-17 01:49:59.622876 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.10s 2025-04-17 01:49:59.622890 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.62s 2025-04-17 01:49:59.622904 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.81s 2025-04-17 01:49:59.622918 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.96s 2025-04-17 01:49:59.622942 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.70s 2025-04-17 01:49:59.622957 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.64s 2025-04-17 01:49:59.622972 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.23s 2025-04-17 01:49:59.622989 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.85s 2025-04-17 01:49:59.623007 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.69s 2025-04-17 01:49:59.623022 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.59s 2025-04-17 01:49:59.623037 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.56s 2025-04-17 01:49:59.623053 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.51s 2025-04-17 01:49:59.623068 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.43s 2025-04-17 01:49:59.623082 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.39s 2025-04-17 01:49:59.623096 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.29s 2025-04-17 01:49:59.623110 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.22s 2025-04-17 01:49:59.623125 | orchestrator | 2025-04-17 01:49:56 | INFO  | Task 1bc4fa3e-53f6-42a8-bce6-33bc25385914 is in state STARTED 2025-04-17 01:49:59.623177 | orchestrator | 2025-04-17 01:49:56 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:49:59.623266 | orchestrator | 2025-04-17 01:49:59 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 
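The leader-gated pattern visible in the play above (every node runs "Get OVN_Northbound cluster leader", but "Configure OVN NB connection settings" runs only on the current Raft leader and is skipped on the other two nodes) can be reproduced by hand. The Ansible sketch below is a minimal illustration, not the actual kolla-ansible task code; the container name, control-socket path, and listener port are assumptions based on this log:

  - name: Get OVN_Northbound cluster leader
    ansible.builtin.command: >
      docker exec ovn_nb_db
      ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
    register: nb_cluster_status
    changed_when: false

  # Only the node whose cluster/status output reports "Role: leader"
  # reconfigures the listener; the others skip, as in the log above.
  - name: Configure OVN NB connection settings
    ansible.builtin.command: >
      docker exec ovn_nb_db
      ovn-nbctl --inactivity-probe=60000 set-connection ptcp:6641:0.0.0.0
    when: "'Role: leader' in nb_cluster_status.stdout"

The southbound database follows the same pattern with ovn-sbctl and its conventional port 6642.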
2025-04-17 01:50:02.661974 | orchestrator | 2025-04-17 01:49:59 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:50:02.662152 | orchestrator | 2025-04-17 01:49:59 | INFO  | Task 1bc4fa3e-53f6-42a8-bce6-33bc25385914 is in state STARTED
2025-04-17 01:50:02.662173 | orchestrator | 2025-04-17 01:49:59 | INFO  | Wait 1 second(s) until the next check
[... the same three tasks (f18b46f3, e0b8709f, 1bc4fa3e) are reported in state STARTED, with "Wait 1 second(s) until the next check" between rounds, in every polling round from 01:50:02 through 01:53:02; the repeated entries are omitted here ...]
2025-04-17 01:53:05.717235 | orchestrator | 2025-04-17 01:53:05 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED
2025-04-17 01:53:05.718866 | orchestrator | 2025-04-17 01:53:05 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:53:05.719265 | orchestrator | 2025-04-17 01:53:05 | INFO  | Task b635309c-1dd8-4ca3-9fd9-488a4d517784 is in state STARTED
2025-04-17 01:53:05.720058 | orchestrator | 2025-04-17 01:53:05 | INFO  | Task 1bc4fa3e-53f6-42a8-bce6-33bc25385914 is in state STARTED
2025-04-17 01:53:05.720178 | orchestrator | 2025-04-17 01:53:05 | INFO  | Wait 1 second(s) until the next check
[... two further polling rounds with the same four tasks in state STARTED omitted ...]
2025-04-17 01:53:14.867221 | orchestrator | 2025-04-17 01:53:14 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED
2025-04-17 01:53:14.867861 | orchestrator | 2025-04-17 01:53:14 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:53:14.868640 | orchestrator | 2025-04-17 01:53:14 | INFO  | Task b635309c-1dd8-4ca3-9fd9-488a4d517784 is in state SUCCESS
2025-04-17 01:53:14.872813 | orchestrator | 2025-04-17 01:53:14 | INFO  | Task 1bc4fa3e-53f6-42a8-bce6-33bc25385914 is in state STARTED
[... three more polling rounds with f18b46f3, e0b8709f, and 1bc4fa3e in state STARTED omitted ...]
2025-04-17 01:53:24.020339 | orchestrator | 2025-04-17 01:53:24 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED
2025-04-17 01:53:24.020775 | orchestrator | 2025-04-17 01:53:24 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
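These interleaved INFO lines are the OSISM client waiting on the tasks it enqueued on the manager; STARTED and SUCCESS are standard Celery task states, and the client sleeps one second between rounds. The same wait-until-done behaviour can be approximated in Ansible with an until loop; the helper command below is hypothetical and exists only to illustrate the polling pattern (the real client queries the task API directly):

  - name: Wait for a deployment task to finish
    # /usr/local/bin/task-state is a hypothetical helper that prints the
    # Celery state of the given task ID.
    ansible.builtin.command: /usr/local/bin/task-state {{ task_id }}
    register: task_state
    until: task_state.stdout == "SUCCESS"
    retries: 600
    delay: 1  # mirrors the "Wait 1 second(s) until the next check" cadence
    changed_when: false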
2025-04-17 01:53:24.022071 | orchestrator | 2025-04-17 01:53:24 | INFO  | Task d1d95967-8d56-421c-9dbc-ef7a43aacc44 is in state STARTED
2025-04-17 01:53:24.022839 | orchestrator | 2025-04-17 01:53:24 | INFO  | Task c72f7546-109f-4112-a626-0d0d86023410 is in state STARTED
2025-04-17 01:53:24.029715 | orchestrator | 2025-04-17 01:53:24 | INFO  | Task 1bc4fa3e-53f6-42a8-bce6-33bc25385914 is in state SUCCESS
2025-04-17 01:53:24.031244 | orchestrator |
2025-04-17 01:53:24.031286 | orchestrator | None
2025-04-17 01:53:24.031301 | orchestrator |
2025-04-17 01:53:24.031316 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-17 01:53:24.031331 | orchestrator |
2025-04-17 01:53:24.031345 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-17 01:53:24.031359 | orchestrator | Thursday 17 April 2025 01:46:35 +0000 (0:00:00.466) 0:00:00.466 ********
2025-04-17 01:53:24.031373 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:53:24.031454 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:53:24.031473 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:53:24.031488 | orchestrator |
2025-04-17 01:53:24.031503 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-17 01:53:24.031517 | orchestrator | Thursday 17 April 2025 01:46:35 +0000 (0:00:00.684) 0:00:01.150 ********
2025-04-17 01:53:24.031576 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-04-17 01:53:24.031594 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-04-17 01:53:24.031693 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-04-17 01:53:24.031773 | orchestrator |
2025-04-17 01:53:24.031792 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-04-17 01:53:24.031875 | orchestrator |
2025-04-17 01:53:24.031904 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-04-17 01:53:24.031919 | orchestrator | Thursday 17 April 2025 01:46:36 +0000 (0:00:00.351) 0:00:01.501 ********
2025-04-17 01:53:24.031933 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 01:53:24.031984 | orchestrator |
2025-04-17 01:53:24.031998 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-04-17 01:53:24.032012 | orchestrator | Thursday 17 April 2025 01:46:37 +0000 (0:00:00.983) 0:00:02.485 ********
2025-04-17 01:53:24.032026 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:53:24.032041 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:53:24.032055 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:53:24.032069 | orchestrator |
2025-04-17 01:53:24.032083 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-04-17 01:53:24.032096 | orchestrator | Thursday 17 April 2025 01:46:37 +0000 (0:00:00.650) 0:00:03.135 ********
2025-04-17 01:53:24.032110 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 01:53:24.032124 | orchestrator |
2025-04-17 01:53:24.032138 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-04-17 01:53:24.032152 | orchestrator | Thursday 17 April 2025 01:46:38 +0000 (0:00:00.758) 0:00:03.894 ********
2025-04-17 01:53:24.032165 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:53:24.032179 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:53:24.032193 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:53:24.032207 | orchestrator |
2025-04-17 01:53:24.032284 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-04-17 01:53:24.032300 | orchestrator | Thursday 17 April 2025 01:46:39 +0000 (0:00:00.737) 0:00:04.632 ********
2025-04-17 01:53:24.032314 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-04-17 01:53:24.032328 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-04-17 01:53:24.032342 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-04-17 01:53:24.032355 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-04-17 01:53:24.032369 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-04-17 01:53:24.032382 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-04-17 01:53:24.032396 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-04-17 01:53:24.032411 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-04-17 01:53:24.032425 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-04-17 01:53:24.032439 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-04-17 01:53:24.032454 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-04-17 01:53:24.032467 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-04-17 01:53:24.032562 | orchestrator |
2025-04-17 01:53:24.032579 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-04-17 01:53:24.032593 | orchestrator | Thursday 17 April 2025 01:46:42 +0000 (0:00:03.602) 0:00:08.235 ********
2025-04-17 01:53:24.032607 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-04-17 01:53:24.032621 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-04-17 01:53:24.032635 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-04-17 01:53:24.032659 | orchestrator |
2025-04-17 01:53:24.032673 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-04-17 01:53:24.032687 | orchestrator | Thursday 17 April 2025 01:46:43 +0000 (0:00:00.761) 0:00:08.997 ********
2025-04-17 01:53:24.032701 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-04-17 01:53:24.032727 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-04-17 01:53:24.032741 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-04-17 01:53:24.032775 | orchestrator |
2025-04-17 01:53:24.032830 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-04-17 01:53:24.032848 | orchestrator | Thursday 17 April 2025 01:46:45 +0000 (0:00:02.004) 0:00:11.001 ********
2025-04-17 01:53:24.032874 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-04-17 01:53:24.032888 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:53:24.032914 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-04-17 01:53:24.032929 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:53:24.032973 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-04-17 01:53:24.032987 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:53:24.033001 | orchestrator |
2025-04-17 01:53:24.033015 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-04-17 01:53:24.033046 | orchestrator | Thursday 17 April 2025 01:46:46 +0000 (0:00:00.904) 0:00:11.905 ********
2025-04-17 01:53:24.033063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-04-17 01:53:24.033109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-04-17 01:53:24.033158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-04-17 01:53:24.033173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-17 01:53:24.033219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-17 01:53:24.033243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-04-17 01:53:24.033259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-17 01:53:24.033274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-04-17 01:53:24.033289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-04-17 01:53:24.033304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0',
'__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-17 01:53:24.033318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-17 01:53:24.033340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-17 01:53:24.033354 | orchestrator | 2025-04-17 01:53:24.033368 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-04-17 01:53:24.033382 | orchestrator | Thursday 17 April 2025 01:46:48 +0000 (0:00:02.468) 0:00:14.374 ******** 2025-04-17 01:53:24.033396 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.033410 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.033424 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.033438 | orchestrator | 2025-04-17 01:53:24.033458 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-04-17 01:53:24.033472 | orchestrator | Thursday 17 April 2025 01:46:50 +0000 (0:00:01.417) 0:00:15.791 ******** 2025-04-17 01:53:24.033486 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-04-17 01:53:24.033500 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-04-17 01:53:24.033514 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-04-17 01:53:24.033528 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-04-17 01:53:24.033542 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-04-17 01:53:24.033555 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-04-17 01:53:24.033569 | orchestrator | 2025-04-17 01:53:24.033583 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-04-17 01:53:24.033596 | orchestrator | Thursday 17 April 2025 01:46:52 +0000 (0:00:02.616) 0:00:18.408 ******** 2025-04-17 01:53:24.033610 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.033624 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.033638 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.033652 | orchestrator | 2025-04-17 01:53:24.033666 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-04-17 01:53:24.033680 | orchestrator | Thursday 17 April 
2025 01:46:54 +0000 (0:00:01.546) 0:00:19.954 ******** 2025-04-17 01:53:24.033693 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:53:24.033707 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:53:24.033721 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:53:24.033735 | orchestrator | 2025-04-17 01:53:24.033749 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-04-17 01:53:24.033763 | orchestrator | Thursday 17 April 2025 01:46:56 +0000 (0:00:01.826) 0:00:21.781 ******** 2025-04-17 01:53:24.033778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-17 01:53:24.033800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-17 01:53:24.034002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-17 01:53:24.034074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-17 01:53:24.034099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-17 01:53:24.034120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-17 01:53:24.034132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-17 01:53:24.034143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-17 01:53:24.034162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-17 01:53:24.034173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-17 01:53:24.034183 | orchestrator 
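
The long dictionaries echoed for every loop item are the role's service map: each entry names the container, image, volumes, and optional healthcheck, and its enabled flag decides between changed/ok and skipping results (haproxy-ssh is disabled on this testbed, so every task skips it). A condensed reconstruction of that map from the log output above, with values abbreviated and the healthcheck address taken from testbed-node-0:

    loadbalancer_services:  # reconstructed from the log, abbreviated
      haproxy:
        container_name: haproxy
        enabled: true
        image: registry.osism.tech/kolla/release/haproxy:2.4.24.20241206
        privileged: true
        volumes:
          - /etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro
          - haproxy_socket:/var/lib/kolla/haproxy/
          - letsencrypt_certificates:/etc/haproxy/certificates
        healthcheck:
          test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"]
          interval: "30"
          retries: "3"
      proxysql:
        container_name: proxysql
        enabled: true
        image: registry.osism.tech/kolla/release/proxysql:2.6.6.20241206
        healthcheck:
          test: ["CMD-SHELL", "healthcheck_listen proxysql 6032"]
      keepalived:
        container_name: keepalived
        enabled: true
        privileged: true
        image: registry.osism.tech/kolla/release/keepalived:2.2.4.20241206
      haproxy-ssh:
        container_name: haproxy_ssh
        enabled: false  # disabled here, hence the skipping results
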
| skipping: [testbed-node-1] 2025-04-17 01:53:24.034195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-17 01:53:24.034205 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.034225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-17 01:53:24.034237 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.034248 | orchestrator | 2025-04-17 01:53:24.034258 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-04-17 01:53:24.034268 | orchestrator | Thursday 17 April 2025 01:46:58 +0000 (0:00:01.806) 0:00:23.588 ******** 2025-04-17 01:53:24.034279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-17 01:53:24.034295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-17 01:53:24.034305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-17 01:53:24.034316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-17 01:53:24.034336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-17 01:53:24.034347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-17 01:53:24.034358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-17 01:53:24.034374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-17 01:53:24.034385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-17 01:53:24.034395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-17 01:53:24.034410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-17 01:53:24.034427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-17 01:53:24.036352 | orchestrator | 2025-04-17 01:53:24.036463 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-04-17 01:53:24.036485 | orchestrator | Thursday 17 April 2025 01:47:02 +0000 (0:00:04.298) 0:00:27.887 ******** 2025-04-17 01:53:24.036504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-17 01:53:24.036544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-17 01:53:24.036560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-17 01:53:24.036575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-17 01:53:24.036599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-17 01:53:24.036633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-17 01:53:24.036649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-17 01:53:24.036672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-17 01:53:24.036687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-17 01:53:24.036702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-17 01:53:24.036722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-17 01:53:24.036737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-17 01:53:24.036752 | orchestrator | 2025-04-17 01:53:24.036766 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-04-17 01:53:24.036781 | orchestrator | Thursday 17 April 2025 01:47:05 +0000 (0:00:03.222) 0:00:31.110 ******** 2025-04-17 01:53:24.036828 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-17 01:53:24.036846 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-17 01:53:24.036860 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-17 01:53:24.036883 | orchestrator | 2025-04-17 01:53:24.036899 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-04-17 01:53:24.036915 | orchestrator | Thursday 17 April 2025 01:47:08 +0000 (0:00:02.891) 0:00:34.001 ******** 2025-04-17 01:53:24.036931 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-17 01:53:24.036947 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-17 01:53:24.036964 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-17 01:53:24.036979 | orchestrator | 2025-04-17 01:53:24.036996 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-04-17 01:53:24.037012 | orchestrator | Thursday 17 April 2025 01:47:13 +0000 (0:00:04.484) 0:00:38.485 ******** 2025-04-17 01:53:24.037028 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.037045 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.037062 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.037078 | orchestrator | 2025-04-17 01:53:24.037095 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-04-17 01:53:24.037110 | orchestrator | Thursday 17 April 2025 01:47:14 +0000 (0:00:01.115) 0:00:39.601 ******** 2025-04-17 01:53:24.037127 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-17 01:53:24.037144 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-17 01:53:24.037160 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-17 01:53:24.037176 | orchestrator | 2025-04-17 01:53:24.037191 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-04-17 01:53:24.037207 | orchestrator | Thursday 17 April 2025 01:47:16 +0000 (0:00:02.317) 0:00:41.918 ******** 
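
The configuration files are rendered per node from Jinja2 templates into /etc/kolla/<service>/ (haproxy_main.cfg.j2, proxysql.yaml.j2, keepalived.conf.j2), and operator overlays such as the custom services.d/haproxy.cfg from /opt/configuration are copied on top. A generic sketch of that step; haproxy_conf_dir is an assumed variable, and the real role additionally notifies container restart handlers:

    # Sketch of the template/overlay copy step; haproxy_conf_dir is an assumption.
    - name: Render the main haproxy configuration
      ansible.builtin.template:
        src: /ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2
        dest: "{{ haproxy_conf_dir }}/haproxy.cfg"
        mode: "0660"

    - name: Copy operator-provided haproxy service overlays
      ansible.builtin.copy:
        src: /opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg
        dest: "{{ haproxy_conf_dir }}/services.d/haproxy.cfg"
        mode: "0660"
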
2025-04-17 01:53:24.037222 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-17 01:53:24.037239 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-17 01:53:24.037256 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-17 01:53:24.037270 | orchestrator | 2025-04-17 01:53:24.037284 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-04-17 01:53:24.037298 | orchestrator | Thursday 17 April 2025 01:47:19 +0000 (0:00:02.983) 0:00:44.901 ******** 2025-04-17 01:53:24.037312 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-04-17 01:53:24.037331 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-04-17 01:53:24.037346 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-04-17 01:53:24.037359 | orchestrator | 2025-04-17 01:53:24.037373 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-04-17 01:53:24.037387 | orchestrator | Thursday 17 April 2025 01:47:21 +0000 (0:00:02.059) 0:00:46.961 ******** 2025-04-17 01:53:24.037402 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-04-17 01:53:24.037416 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-04-17 01:53:24.037430 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-04-17 01:53:24.037444 | orchestrator | 2025-04-17 01:53:24.037462 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-04-17 01:53:24.037476 | orchestrator | Thursday 17 April 2025 01:47:23 +0000 (0:00:01.579) 0:00:48.540 ******** 2025-04-17 01:53:24.037490 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.037504 | orchestrator | 2025-04-17 01:53:24.037518 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-04-17 01:53:24.037538 | orchestrator | Thursday 17 April 2025 01:47:23 +0000 (0:00:00.620) 0:00:49.161 ******** 2025-04-17 01:53:24.037553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-17 01:53:24.037587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-17 01:53:24.037604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-17 01:53:24.037620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-17 01:53:24.037635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-17 01:53:24.037738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-17 01:53:24.037766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-17 01:53:24.037794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-17 01:53:24.037868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-17 01:53:24.037885 | orchestrator | 2025-04-17 01:53:24.037900 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-04-17 01:53:24.037914 | orchestrator | Thursday 17 April 2025 01:47:27 +0000 (0:00:03.312) 0:00:52.474 ******** 2025-04-17 01:53:24.037928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-17 01:53:24.037943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-17 01:53:24.037957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-17 01:53:24.037972 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.037988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-17 01:53:24.038011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-17 01:53:24.038101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-17 01:53:24.038118 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.038133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-17 01:53:24.038148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-17 01:53:24.038163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-17 01:53:24.038178 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.038193 | orchestrator | 2025-04-17 01:53:24.038208 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-04-17 01:53:24.038223 | orchestrator | Thursday 17 April 2025 01:47:27 +0000 (0:00:00.665) 0:00:53.139 ******** 2025-04-17 01:53:24.038246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-17 01:53:24.038261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-17 01:53:24.038282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-17 01:53:24.038297 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.038312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-17 01:53:24.038326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-17 01:53:24.038341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-17 01:53:24.038355 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.038370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-17 01:53:24.038392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-17 01:53:24.038407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-17 01:53:24.038421 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.038435 | orchestrator | 2025-04-17 01:53:24.038449 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-04-17 01:53:24.038471 | orchestrator | Thursday 17 April 2025 01:47:28 +0000 (0:00:01.227) 0:00:54.367 ******** 2025-04-17 01:53:24.038485 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-04-17 01:53:24.038500 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-04-17 01:53:24.038514 | orchestrator | 
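
Frontend TLS material was distributed earlier as combined bundles (haproxy.pem, haproxy-internal.pem): HAProxy expects the certificate chain and private key concatenated into a single PEM file, while the backend internal TLS certificate and key tasks are skipped, evidently because backend TLS is not enabled on this testbed. A sketch of assembling such a bundle; the source file names here are illustrative assumptions:

    # Illustrative only: concatenate chain + key the way HAProxy consumes them.
    - name: Assemble haproxy.pem bundle
      ansible.builtin.copy:
        content: |
          {{ lookup('file', 'certificates/haproxy.crt') }}
          {{ lookup('file', 'certificates/haproxy.key') }}
        dest: /etc/kolla/haproxy/haproxy.pem
        mode: "0600"
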
changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-04-17 01:53:24.038528 | orchestrator | 2025-04-17 01:53:24.038542 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-04-17 01:53:24.038557 | orchestrator | Thursday 17 April 2025 01:47:30 +0000 (0:00:01.895) 0:00:56.263 ******** 2025-04-17 01:53:24.038570 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-04-17 01:53:24.038584 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-04-17 01:53:24.038598 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-04-17 01:53:24.038612 | orchestrator | 2025-04-17 01:53:24.038626 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-04-17 01:53:24.038641 | orchestrator | Thursday 17 April 2025 01:47:32 +0000 (0:00:01.895) 0:00:58.159 ******** 2025-04-17 01:53:24.038656 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-04-17 01:53:24.038670 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-04-17 01:53:24.038689 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-04-17 01:53:24.038704 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-17 01:53:24.038726 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.038740 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-17 01:53:24.038754 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.038768 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-17 01:53:24.038782 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.038797 | orchestrator | 2025-04-17 01:53:24.038841 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-04-17 01:53:24.038856 | orchestrator | Thursday 17 April 2025 01:47:34 +0000 (0:00:02.233) 0:01:00.393 ******** 2025-04-17 01:53:24.038871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-17 01:53:24.038886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-17 01:53:24.038901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-17 01:53:24.038923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-17 01:53:24.038939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-17 01:53:24.038961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-17 01:53:24.038976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}}) 2025-04-17 01:53:24.038990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-17 01:53:24.039005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-17 01:53:24.039020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-17 01:53:24.039043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-17 01:53:24.039058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0', '__omit_place_holder__5444be0f56560ab98eb85ae01e9796d6493f36a0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-17 01:53:24.039125 | orchestrator | 2025-04-17 01:53:24.039141 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-04-17 01:53:24.039156 | 
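The `(item={'key': ..., 'value': ...})` pairs echoed by the loadbalancer tasks above are the per-service entries of a kolla-ansible service map passed through Ansible's stock `dict2items` filter; services with `enabled: False` (haproxy-ssh here) are skipped, and the literal `__omit_place_holder__<hash>` strings in the haproxy-ssh volume list are Ansible's `omit` sentinel rendered into the logged item. A minimal Python sketch of that item shape, using values copied from this log (the filter re-implementation is illustrative, not kolla-ansible's code):

    # Illustrative only: re-creates the {'key', 'value'} loop items seen above.
    # "services" mirrors two entries from the log; kolla-ansible's real service
    # map lives in its role defaults, not here.
    services = {
        "keepalived": {
            "container_name": "keepalived",
            "group": "loadbalancer",
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/keepalived:2.2.4.20241206",
            "privileged": True,
            "volumes": [
                "/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro",
                "/lib/modules:/lib/modules:ro",
                "haproxy_socket:/var/lib/kolla/haproxy/",
            ],
            "dimensions": {},
        },
        "haproxy-ssh": {"container_name": "haproxy_ssh", "enabled": False},
    }

    def dict2items(d):
        # Same output shape as Ansible's dict2items filter.
        return [{"key": k, "value": v} for k, v in d.items()]

    for item in dict2items(services):
        if not item["value"].get("enabled", True):
            print(f"skipping: {item['key']}")  # matches the haproxy-ssh skips above
            continue
        print(f"deploy {item['key']} from {item['value']['image']}")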
orchestrator | Thursday 17 April 2025 01:47:38 +0000 (0:00:03.701) 0:01:04.094 ******** 2025-04-17 01:53:24.039170 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.039184 | orchestrator | 2025-04-17 01:53:24.039197 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-04-17 01:53:24.039211 | orchestrator | Thursday 17 April 2025 01:47:39 +0000 (0:00:00.610) 0:01:04.705 ******** 2025-04-17 01:53:24.039226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-04-17 01:53:24.039242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-17 01:53:24.039257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.039281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.039297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-04-17 01:53:24.039326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-17 01:53:24.039342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.039357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-04-17 01:53:24.039376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.039401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 
'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-17 01:53:24.039415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.039437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.039452 | orchestrator | 2025-04-17 01:53:24.039466 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-04-17 01:53:24.039481 | orchestrator | Thursday 17 April 2025 01:47:42 +0000 (0:00:03.355) 0:01:08.061 ******** 2025-04-17 01:53:24.039497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-04-17 01:53:24.039512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-17 
01:53:24.039526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.039550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.039580 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.039597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-04-17 01:53:24.039612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-17 01:53:24.039626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.039642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': 
{'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.039658 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.039673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-04-17 01:53:24.039696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-17 01:53:24.039721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.039737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.039752 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.039767 | orchestrator | 2025-04-17 01:53:24.039783 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] 
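Each aodh-api item above also carries a `haproxy` sub-dict describing the frontends to template: one internal entry (`aodh_api`) and one external entry (`aodh_api_external`, bound to `api.testbed.osism.xyz`); the evaluator, listener, and notifier items carry no `haproxy` key, which is why only the aodh-api items report `changed`. A rough sketch of how one such entry could translate into an HAProxy stanza (illustrative only, not kolla-ansible's actual template; the VIP and server names are hypothetical, the backend IPs are the three controllers from this log):

    # Rough illustration: render one 'haproxy' entry from the aodh-api
    # item above into an HAProxy "listen" section.
    entry_name = "aodh_api"
    entry = {"enabled": "yes", "mode": "http", "external": False,
             "port": "8042", "listen_port": "8042"}
    vip = "192.168.16.254"  # hypothetical internal VIP for this sketch
    backends = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]

    if entry["enabled"] == "yes":
        lines = [f"listen {entry_name}",
                 f"    mode {entry['mode']}",
                 f"    bind {vip}:{entry['listen_port']}"]
        lines += [f"    server node{i} {ip}:{entry['port']} check"
                  for i, ip in enumerate(backends)]
        print("\n".join(lines))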
************************** 2025-04-17 01:53:24.039797 | orchestrator | Thursday 17 April 2025 01:47:43 +0000 (0:00:00.735) 0:01:08.796 ******** 2025-04-17 01:53:24.039871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-04-17 01:53:24.039888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-04-17 01:53:24.039904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-04-17 01:53:24.039918 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.039933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-04-17 01:53:24.039953 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.039968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-04-17 01:53:24.039982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-04-17 01:53:24.039996 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.040016 | orchestrator | 2025-04-17 01:53:24.040032 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-04-17 01:53:24.040046 | orchestrator | Thursday 17 April 2025 01:47:44 +0000 (0:00:00.961) 0:01:09.758 ******** 2025-04-17 01:53:24.040060 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.040074 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.040088 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.040102 | orchestrator | 2025-04-17 01:53:24.040116 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-04-17 01:53:24.040130 | orchestrator | Thursday 17 April 2025 01:47:45 +0000 (0:00:01.164) 0:01:10.922 ******** 2025-04-17 01:53:24.040153 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.040167 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.040181 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.040195 | orchestrator | 2025-04-17 01:53:24.040209 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-04-17 01:53:24.040223 | orchestrator | Thursday 17 April 2025 01:47:47 +0000 (0:00:01.916) 0:01:12.839 ******** 2025-04-17 01:53:24.040238 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.040252 | orchestrator | 2025-04-17 01:53:24.040266 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-04-17 01:53:24.040280 | orchestrator | Thursday 17 April 2025 01:47:48 +0000 (0:00:00.754) 0:01:13.593 ******** 2025-04-17 01:53:24.040307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.040325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.040341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.040357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.040380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.040403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.040419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.040433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.040447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}})  2025-04-17 01:53:24.040460 | orchestrator | 2025-04-17 01:53:24.040474 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-04-17 01:53:24.040493 | orchestrator | Thursday 17 April 2025 01:47:52 +0000 (0:00:04.631) 0:01:18.224 ******** 2025-04-17 01:53:24.040507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.040527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.040541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.040554 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.040568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.040599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.040620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.040633 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.040652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.040668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.040681 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.040694 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.040708 | orchestrator | 2025-04-17 01:53:24.040720 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-04-17 01:53:24.040734 | orchestrator | Thursday 17 April 2025 01:47:53 +0000 (0:00:01.113) 0:01:19.338 ******** 2025-04-17 01:53:24.040746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-17 01:53:24.040759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-17 01:53:24.040778 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.040791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-17 01:53:24.040824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-17 01:53:24.040839 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.040852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-17 01:53:24.040865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-17 01:53:24.040877 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.040890 | orchestrator | 2025-04-17 01:53:24.040903 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-04-17 01:53:24.040916 | orchestrator | Thursday 17 April 2025 01:47:54 +0000 (0:00:00.938) 0:01:20.276 ******** 2025-04-17 01:53:24.040929 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.040941 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.040954 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.040966 | orchestrator | 2025-04-17 01:53:24.040978 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-04-17 01:53:24.040990 | orchestrator | Thursday 17 April 2025 01:47:56 +0000 (0:00:01.246) 0:01:21.523 ******** 2025-04-17 01:53:24.041002 | orchestrator | changed: 
[testbed-node-0] 2025-04-17 01:53:24.041015 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.041027 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.041039 | orchestrator | 2025-04-17 01:53:24.041051 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-04-17 01:53:24.041064 | orchestrator | Thursday 17 April 2025 01:47:57 +0000 (0:00:01.907) 0:01:23.431 ******** 2025-04-17 01:53:24.041076 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.041088 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.041100 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.041113 | orchestrator | 2025-04-17 01:53:24.041132 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-04-17 01:53:24.041144 | orchestrator | Thursday 17 April 2025 01:47:58 +0000 (0:00:00.248) 0:01:23.679 ******** 2025-04-17 01:53:24.041157 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.041169 | orchestrator | 2025-04-17 01:53:24.041182 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-04-17 01:53:24.041194 | orchestrator | Thursday 17 April 2025 01:47:58 +0000 (0:00:00.609) 0:01:24.289 ******** 2025-04-17 01:53:24.041207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-17 01:53:24.041241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-17 01:53:24.041255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 
5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-17 01:53:24.041269 | orchestrator | 2025-04-17 01:53:24.041281 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-04-17 01:53:24.041294 | orchestrator | Thursday 17 April 2025 01:48:02 +0000 (0:00:03.375) 0:01:27.665 ******** 2025-04-17 01:53:24.041307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-17 01:53:24.041320 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.041352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-17 01:53:24.041367 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.041381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-17 01:53:24.041400 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.041413 | orchestrator | 
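The ceph-rgw entries differ from the OpenStack API services above: the group is `all`, and the item instead ships a `custom_member_list` of pre-rendered HAProxy `server` lines pointing at the Ceph nodes (testbed-node-3/4/5 on port 8081), presumably because the radosgw daemons run on the storage nodes rather than the controllers. Since those list entries are already complete `server` lines, rendering reduces to indenting them under a section header; a sketch with values copied from the log (illustrative layout with a hypothetical VIP, not the exact template):

    # Illustrative only: drop the pre-rendered member lines into a section.
    custom_member_list = [
        "server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5",
        "server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5",
        "server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5",
    ]
    section = ["listen radosgw",
               "    mode http",
               "    bind 192.168.16.254:6780"]  # hypothetical VIP for this sketch
    section += [f"    {line}" for line in custom_member_list]
    print("\n".join(section))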
2025-04-17 01:53:24.041426 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-04-17 01:53:24.041439 | orchestrator | Thursday 17 April 2025 01:48:03 +0000 (0:00:01.337) 0:01:29.002 ******** 2025-04-17 01:53:24.041451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-17 01:53:24.041465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-17 01:53:24.041478 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.041491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-17 01:53:24.041505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-17 01:53:24.041518 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.041530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-17 01:53:24.041553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-17 01:53:24.041567 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.041580 | orchestrator | 2025-04-17 01:53:24.041593 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-04-17 01:53:24.041606 | orchestrator | Thursday 17 April 2025 01:48:05 +0000 (0:00:02.030) 0:01:31.033 ******** 2025-04-17 01:53:24.041625 | orchestrator | skipping: [testbed-node-0] 2025-04-17 
01:53:24.041638 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.041650 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.041662 | orchestrator | 2025-04-17 01:53:24.041675 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-04-17 01:53:24.041687 | orchestrator | Thursday 17 April 2025 01:48:06 +0000 (0:00:00.732) 0:01:31.765 ******** 2025-04-17 01:53:24.041699 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.041711 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.041724 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.041736 | orchestrator | 2025-04-17 01:53:24.041749 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-04-17 01:53:24.041761 | orchestrator | Thursday 17 April 2025 01:48:07 +0000 (0:00:01.356) 0:01:33.122 ******** 2025-04-17 01:53:24.041774 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.041786 | orchestrator | 2025-04-17 01:53:24.041799 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-04-17 01:53:24.041828 | orchestrator | Thursday 17 April 2025 01:48:08 +0000 (0:00:00.771) 0:01:33.893 ******** 2025-04-17 01:53:24.041841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.041855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.041868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.041900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.041921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.041935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.041949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.041963 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.041984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.042011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.042073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.042096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.042110 | orchestrator | 2025-04-17 01:53:24.042124 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-04-17 01:53:24.042137 | orchestrator | Thursday 17 April 2025 01:48:12 +0000 (0:00:03.736) 0:01:37.629 ******** 2025-04-17 01:53:24.042150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.042163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.042191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.042205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.042219 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.042231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.042253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.042267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.042294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.042307 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.042320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.042333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.042354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.042368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.042388 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.042402 | orchestrator | 
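The changed/skipping split in the cinder tasks above comes from the loop shape: the haproxy-config role renders a config fragment only for services that carry a haproxy section, so cinder-api is templated while cinder-scheduler, cinder-volume, and cinder-backup are skipped. A minimal sketch of that pattern follows, with illustrative paths and a hypothetical cinder_services variable; this is not the upstream kolla-ansible source.

```yaml
# Sketch of the per-service loop behind "Copying over cinder haproxy config".
# cinder_services, the template name, and the destination path are assumptions;
# the when-conditions mirror the changed/skipping pattern in the log.
- name: Copying over cinder haproxy config
  vars:
    service: "{{ item.value }}"
  template:
    src: haproxy-service.cfg.j2
    dest: "/etc/kolla/haproxy/services.d/{{ item.key }}.cfg"
  with_dict: "{{ cinder_services }}"
  when:
    - service.haproxy is defined      # absent for scheduler/volume/backup -> skipping
    - service.enabled | bool
```

Keeping the load-balancer wiring as data on each service dict lets the same loop serve every role, which is why identical task names repeat for ceph-rgw, cinder, designate, and glance throughout this play.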
2025-04-17 01:53:24.042414 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-04-17 01:53:24.042431 | orchestrator | Thursday 17 April 2025 01:48:12 +0000 (0:00:00.693) 0:01:38.322 ******** 2025-04-17 01:53:24.042446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-17 01:53:24.042466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-17 01:53:24.042480 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.042493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-17 01:53:24.042507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-17 01:53:24.042520 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.042533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-17 01:53:24.042546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-17 01:53:24.042558 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.042570 | orchestrator | 2025-04-17 01:53:24.042583 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-04-17 01:53:24.042596 | orchestrator | Thursday 17 April 2025 01:48:13 +0000 (0:00:00.917) 0:01:39.240 ******** 2025-04-17 01:53:24.042608 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.042621 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.042633 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.042645 | orchestrator | 2025-04-17 01:53:24.042657 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-04-17 01:53:24.042670 | orchestrator | Thursday 17 April 2025 01:48:14 +0000 (0:00:01.207) 0:01:40.447 ******** 2025-04-17 01:53:24.042683 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.042695 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.042707 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.042719 | orchestrator | 2025-04-17 01:53:24.042732 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-04-17 01:53:24.042744 | orchestrator | Thursday 17 April 2025 01:48:16 +0000 (0:00:01.759) 0:01:42.207 ******** 2025-04-17 01:53:24.042756 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.042769 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.042781 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.042794 | orchestrator | 2025-04-17 01:53:24.042822 | orchestrator | 
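The include_role tasks interleaved in this play show its dispatch pattern: each OpenStack service role is included only when its enable flag is set, so cinder and designate are included while cloudkitty (above) and cyborg (below) report skipping. A hedged reconstruction using the conventional kolla-ansible enable_* flag names; the task bodies themselves are illustrative, not the upstream playbook source.

```yaml
# Illustrative reconstruction of the conditional role dispatch.
# enable_* names follow kolla-ansible conventions; the exact task
# bodies are assumptions, not the upstream playbook source.
- name: "include_role : cinder"
  include_role:
    name: cinder
  when: enable_cinder | bool        # true in this testbed -> role runs

- name: "include_role : cloudkitty"
  include_role:
    name: cloudkitty
  when: enable_cloudkitty | bool    # false here -> the skipping lines above
```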
TASK [include_role : cyborg] *************************************************** 2025-04-17 01:53:24.042836 | orchestrator | Thursday 17 April 2025 01:48:17 +0000 (0:00:00.259) 0:01:42.466 ******** 2025-04-17 01:53:24.042848 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.042871 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.042884 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.042901 | orchestrator | 2025-04-17 01:53:24.042913 | orchestrator | TASK [include_role : designate] ************************************************ 2025-04-17 01:53:24.042926 | orchestrator | Thursday 17 April 2025 01:48:17 +0000 (0:00:00.360) 0:01:42.827 ******** 2025-04-17 01:53:24.042938 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.042950 | orchestrator | 2025-04-17 01:53:24.042962 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-04-17 01:53:24.042974 | orchestrator | Thursday 17 April 2025 01:48:18 +0000 (0:00:00.877) 0:01:43.704 ******** 2025-04-17 01:53:24.042988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-17 01:53:24.043019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-17 01:53:24.043043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-17 01:53:24.043090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-17 01:53:24.043124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
named 53'], 'timeout': '30'}}})  2025-04-17 01:53:24.043145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-17 01:53:24.043222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-17 
01:53:24.043325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043359 | orchestrator | 2025-04-17 01:53:24.043378 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-04-17 01:53:24.043391 | orchestrator | Thursday 17 April 2025 01:48:23 +0000 (0:00:05.135) 0:01:48.839 ******** 2025-04-17 01:53:24.043404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-17 01:53:24.043417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-17 01:53:24.043436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043515 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.043528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-17 01:53:24.043548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-17 01:53:24.043565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043629 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043648 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.043669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-17 01:53:24.043682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-17 01:53:24.043695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043728 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.043775 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.043788 | orchestrator | 2025-04-17 01:53:24.043800 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-04-17 01:53:24.043865 | orchestrator | Thursday 17 April 2025 01:48:24 +0000 (0:00:00.993) 0:01:49.832 ******** 2025-04-17 01:53:24.043878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-04-17 01:53:24.043891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-04-17 01:53:24.043905 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.043917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-04-17 01:53:24.043930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-04-17 01:53:24.043943 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.043955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-04-17 
01:53:24.043967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-04-17 01:53:24.043979 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.043992 | orchestrator | 2025-04-17 01:53:24.044004 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-04-17 01:53:24.044016 | orchestrator | Thursday 17 April 2025 01:48:25 +0000 (0:00:01.172) 0:01:51.005 ******** 2025-04-17 01:53:24.044028 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.044040 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.044053 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.044065 | orchestrator | 2025-04-17 01:53:24.044077 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-04-17 01:53:24.044089 | orchestrator | Thursday 17 April 2025 01:48:26 +0000 (0:00:01.304) 0:01:52.309 ******** 2025-04-17 01:53:24.044101 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.044114 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.044126 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.044138 | orchestrator | 2025-04-17 01:53:24.044150 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-04-17 01:53:24.044163 | orchestrator | Thursday 17 April 2025 01:48:28 +0000 (0:00:01.890) 0:01:54.200 ******** 2025-04-17 01:53:24.044182 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.044194 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.044207 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.044219 | orchestrator | 2025-04-17 01:53:24.044232 | orchestrator | TASK [include_role : glance] *************************************************** 2025-04-17 01:53:24.044251 | orchestrator | Thursday 17 April 2025 01:48:29 +0000 (0:00:00.457) 0:01:54.657 ******** 2025-04-17 01:53:24.044264 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.044276 | orchestrator | 2025-04-17 01:53:24.044288 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-04-17 01:53:24.044300 | orchestrator | Thursday 17 April 2025 01:48:30 +0000 (0:00:01.043) 0:01:55.700 ******** 2025-04-17 01:53:24.044323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-17 01:53:24.044338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-17 01:53:24.044375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-17 01:53:24.044398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-17 01:53:24.044420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-17 01:53:24.044444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-17 01:53:24.044455 | orchestrator | 2025-04-17 01:53:24.044466 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-04-17 01:53:24.044476 | orchestrator | Thursday 17 April 2025 01:48:35 +0000 (0:00:05.008) 0:02:00.709 ******** 2025-04-17 01:53:24.044494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-17 01:53:24.044517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-17 01:53:24.044528 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.044546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-17 01:53:24.044569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}}}})  2025-04-17 01:53:24.044580 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.044591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-17 01:53:24.044620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-17 01:53:24.044639 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.044649 | orchestrator | 2025-04-17 01:53:24.044660 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-04-17 01:53:24.044670 | orchestrator | Thursday 17 April 2025 01:48:38 +0000 (0:00:03.073) 0:02:03.782 ******** 2025-04-17 01:53:24.044681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-17 01:53:24.044692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-17 01:53:24.044712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-17 01:53:24.044724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-17 01:53:24.044740 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.044751 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.044761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-17 01:53:24.044772 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-17 01:53:24.044782 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.044793 | orchestrator | 2025-04-17 01:53:24.044818 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-04-17 01:53:24.044835 | orchestrator | Thursday 17 April 2025 01:48:43 +0000 (0:00:04.872) 0:02:08.655 ******** 2025-04-17 01:53:24.044846 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.044856 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.044866 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.044876 | orchestrator | 2025-04-17 01:53:24.044886 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-04-17 01:53:24.044896 | orchestrator | Thursday 17 April 2025 01:48:44 +0000 (0:00:01.190) 0:02:09.845 ******** 2025-04-17 01:53:24.044906 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.044917 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.044927 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.044937 | orchestrator | 2025-04-17 01:53:24.044947 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-04-17 01:53:24.044957 | orchestrator | Thursday 17 April 2025 01:48:46 +0000 (0:00:01.696) 0:02:11.542 ******** 2025-04-17 01:53:24.044967 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.044977 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.044987 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.044997 | orchestrator | 2025-04-17 01:53:24.045007 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-04-17 01:53:24.045017 | orchestrator | Thursday 17 April 2025 01:48:46 +0000 (0:00:00.365) 0:02:11.907 ******** 2025-04-17 01:53:24.045033 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.045044 | orchestrator | 2025-04-17 01:53:24.045053 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-04-17 01:53:24.045064 | orchestrator | Thursday 17 April 2025 01:48:47 +0000 (0:00:00.893) 0:02:12.801 ******** 2025-04-17 01:53:24.045074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-17 01:53:24.045085 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-17 01:53:24.045103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-17 01:53:24.045113 | orchestrator | 2025-04-17 01:53:24.045124 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-04-17 01:53:24.045134 | orchestrator | Thursday 17 April 2025 01:48:50 +0000 (0:00:03.296) 0:02:16.097 ******** 2025-04-17 01:53:24.045144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-17 01:53:24.045155 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.045165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-17 01:53:24.045181 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.045191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-17 01:53:24.045202 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.045212 | orchestrator | 2025-04-17 01:53:24.045222 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-04-17 01:53:24.045232 | orchestrator | Thursday 17 April 2025 01:48:50 +0000 (0:00:00.344) 0:02:16.441 ******** 2025-04-17 01:53:24.045242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-04-17 01:53:24.045257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-04-17 01:53:24.045268 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.045278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-04-17 01:53:24.045288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-04-17 01:53:24.045298 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.045309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-04-17 01:53:24.045324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-04-17 01:53:24.045497 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.045515 | orchestrator | 2025-04-17 01:53:24.045525 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-04-17 01:53:24.045535 | orchestrator | Thursday 17 April 2025 01:48:51 +0000 (0:00:00.719) 0:02:17.160 ******** 2025-04-17 01:53:24.045545 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.045555 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.045565 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.045575 | orchestrator | 2025-04-17 01:53:24.045585 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-04-17 01:53:24.045595 | orchestrator | Thursday 17 April 2025 01:48:52 +0000 (0:00:01.091) 0:02:18.252 ******** 2025-04-17 01:53:24.045605 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.045615 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.045625 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.045635 | orchestrator | 2025-04-17 01:53:24.045645 | orchestrator | TASK [include_role : heat] ***************************************************** 
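For reference, the haproxy-config tasks in this run all follow the same pattern: the role walks a per-project services dict and only templates listener stanzas for items that carry a 'haproxy' sub-dict, which is why backend-only services (for example heat-engine in the heat task that follows) show up as "skipping". A minimal Python sketch of that selection and of the "server ... check inter 2000 rise 2 fall 5" member lines seen in these entries — the helper names are hypothetical, and this is not kolla-ansible's actual Jinja template logic:

from typing import Dict, List, Tuple

def render_backend(name: str, cfg: Dict, members: List[Tuple[str, str]]) -> List[str]:
    """Render one HAProxy backend in the same shape as the log's custom_member_list."""
    lines = [f"backend {name}_back", f"    mode {cfg.get('mode', 'http')}"]
    lines += [f"    {extra}" for extra in cfg.get("backend_http_extra", [])]
    port = cfg["port"]
    for host, ip in members:
        lines.append(f"    server {host} {ip}:{port} check inter 2000 rise 2 fall 5")
    return lines

# Service items mirroring the log: heat-api has a 'haproxy' dict, heat-engine does not.
services = {
    "heat-api": {"haproxy": {"heat_api": {"enabled": True, "mode": "http",
                                          "external": False, "port": "8004"}}},
    "heat-engine": {},  # no 'haproxy' key -> skipped, matching the log
}
members = [("testbed-node-0", "192.168.16.10"),
           ("testbed-node-1", "192.168.16.11"),
           ("testbed-node-2", "192.168.16.12")]

for svc, spec in services.items():
    for listener, cfg in spec.get("haproxy", {}).items():
        if cfg.get("enabled"):
            print("\n".join(render_backend(listener, cfg, members)))

Applied to the heat entries below, only heat-api and heat-api-cfn would yield listener stanzas; heat-engine, having no 'haproxy' key, produces exactly the "skipping" results recorded in the log.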
2025-04-17 01:53:24.045655 | orchestrator | Thursday 17 April 2025 01:48:55 +0000 (0:00:02.220) 0:02:20.473 ******** 2025-04-17 01:53:24.045665 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.045683 | orchestrator | 2025-04-17 01:53:24.045693 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-04-17 01:53:24.045703 | orchestrator | Thursday 17 April 2025 01:48:56 +0000 (0:00:01.295) 0:02:21.769 ******** 2025-04-17 01:53:24.045714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.045736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.045748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.045770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': 
['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.045782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.045800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.045829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.045847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': 
'8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.045858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.045869 | orchestrator | 2025-04-17 01:53:24.045884 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-04-17 01:53:24.045894 | orchestrator | Thursday 17 April 2025 01:49:03 +0000 (0:00:07.298) 0:02:29.067 ******** 2025-04-17 01:53:24.045905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.045921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.045933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 
5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.045943 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.045960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.045976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.045992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.046003 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.046014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  
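The 'healthcheck' dicts attached to each item in these entries (interval/retries/start_period/test/timeout, all string-valued) are container health probes. As a hedged illustration only — kolla-ansible wires these through its own container layer, not necessarily the docker CLI — the heat-api dict maps naturally onto Docker-style healthcheck flags:

def healthcheck_args(hc: dict) -> list:
    """Translate a kolla-style healthcheck dict into docker CLI flags (illustrative mapping)."""
    return [
        "--health-cmd", " ".join(hc["test"][1:]),          # drop the CMD-SHELL marker
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", hc["retries"],
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

# Values copied from the heat-api item above.
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8004"],
      "timeout": "30"}
print(healthcheck_args(hc))

The CMD-SHELL marker is the conventional "run via shell" sentinel; the remaining list entries form the probe command, here healthcheck_curl against the node's API port.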
2025-04-17 01:53:24.046064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.046083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.046094 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.046107 | orchestrator | 2025-04-17 01:53:24.046119 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-04-17 01:53:24.046131 | orchestrator | Thursday 17 April 2025 01:49:04 +0000 (0:00:01.078) 0:02:30.146 ******** 2025-04-17 01:53:24.046142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-17 01:53:24.046156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-17 01:53:24.046168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-17 01:53:24.046192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-17 01:53:24.046205 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.046217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-17 01:53:24.046228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-17 01:53:24.046246 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-17 01:53:24.046258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-17 01:53:24.046270 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.046282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-17 01:53:24.046294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-17 01:53:24.046306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-17 01:53:24.046318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-17 01:53:24.046330 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.046342 | orchestrator | 2025-04-17 01:53:24.046353 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-04-17 01:53:24.046364 | orchestrator | Thursday 17 April 2025 01:49:05 +0000 (0:00:01.228) 0:02:31.374 ******** 2025-04-17 01:53:24.046376 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.046388 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.046399 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.046419 | orchestrator | 2025-04-17 01:53:24.046431 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-04-17 01:53:24.046442 | orchestrator | Thursday 17 April 2025 01:49:07 +0000 (0:00:01.308) 0:02:32.683 ******** 2025-04-17 01:53:24.046454 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.046464 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.046474 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.046484 | orchestrator | 2025-04-17 01:53:24.046494 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-04-17 01:53:24.046504 | orchestrator | Thursday 17 April 2025 01:49:09 +0000 (0:00:02.090) 0:02:34.773 ******** 2025-04-17 01:53:24.046518 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.046528 | orchestrator | 2025-04-17 01:53:24.046538 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-04-17 01:53:24.046548 | orchestrator | Thursday 17 April 2025 01:49:10 +0000 (0:00:01.003) 0:02:35.776 ******** 2025-04-17 01:53:24.046579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-17 01:53:24.046599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-17 01:53:24.046625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-17 01:53:24.046644 | orchestrator | 2025-04-17 01:53:24.046655 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-04-17 01:53:24.046666 | orchestrator | Thursday 17 April 2025 01:49:14 +0000 (0:00:03.843) 0:02:39.620 ******** 2025-04-17 01:53:24.046676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-17 01:53:24.046700 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.046716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-17 01:53:24.046735 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.046746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-17 01:53:24.046762 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.046772 | orchestrator | 2025-04-17 01:53:24.046786 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-04-17 01:53:24.046796 | orchestrator | Thursday 17 April 2025 01:49:15 +0000 (0:00:00.848) 0:02:40.468 ******** 2025-04-17 01:53:24.046889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  
2025-04-17 01:53:24.046903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-17 01:53:24.046915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-17 01:53:24.046926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-17 01:53:24.046937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-17 01:53:24.046948 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.046962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-17 01:53:24.046974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-17 01:53:24.046985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-17 01:53:24.047003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-17 01:53:24.047013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-17 01:53:24.047023 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.047034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-17 01:53:24.047044 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-17 01:53:24.047060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-17 01:53:24.047071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-17 01:53:24.047082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-17 01:53:24.047092 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.047102 | orchestrator | 2025-04-17 01:53:24.047112 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-04-17 01:53:24.047122 | orchestrator | Thursday 17 April 2025 01:49:16 +0000 (0:00:01.259) 0:02:41.728 ******** 2025-04-17 01:53:24.047132 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.047142 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.047152 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.047162 | orchestrator | 2025-04-17 01:53:24.047172 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-04-17 01:53:24.047182 | orchestrator | Thursday 17 April 2025 01:49:17 +0000 (0:00:01.344) 0:02:43.072 ******** 2025-04-17 01:53:24.047191 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.047201 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.047211 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.047250 | orchestrator | 2025-04-17 01:53:24.047262 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-04-17 01:53:24.047272 | orchestrator | Thursday 17 April 2025 01:49:20 +0000 (0:00:02.437) 0:02:45.510 ******** 2025-04-17 01:53:24.047282 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.047292 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.047302 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.047312 | orchestrator | 2025-04-17 01:53:24.047322 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-04-17 01:53:24.047338 | orchestrator | Thursday 17 April 2025 01:49:20 +0000 (0:00:00.480) 0:02:45.990 ******** 2025-04-17 01:53:24.047348 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.047358 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.047368 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.047378 | orchestrator | 2025-04-17 01:53:24.047388 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-04-17 01:53:24.047396 | orchestrator | Thursday 17 April 2025 01:49:20 +0000 (0:00:00.287) 0:02:46.278 ********
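
The horizon entries above show the shape every haproxy-config loop in this log works from: a 'haproxy' map whose entries either define a frontend (internal or external, with port/listen_port and optional redirect or ACME-challenge rules in frontend_http_extra) or, like acme_client with with_frontend=False, contribute only a backend. A minimal Python sketch of that split, using only values copied from the log; the helper itself is illustrative, not kolla-ansible's task code:

# Values below are copied from the horizon items in this log; the helper is
# an illustration of the data shape, not kolla-ansible code.
haproxy = {
    "horizon": {"enabled": True, "mode": "http", "external": False,
                "port": "443", "listen_port": "80", "tls_backend": "no"},
    "horizon_redirect": {"enabled": True, "mode": "redirect", "external": False,
                         "port": "80", "listen_port": "80"},
    "horizon_external": {"enabled": True, "mode": "http", "external": True,
                         "external_fqdn": "api.testbed.osism.xyz",
                         "port": "443", "listen_port": "80", "tls_backend": "no"},
    "horizon_external_redirect": {"enabled": True, "mode": "redirect", "external": True,
                                  "external_fqdn": "api.testbed.osism.xyz",
                                  "port": "80", "listen_port": "80"},
    # with_frontend=False marks a backend-only entry with no listener of its own
    "acme_client": {"enabled": True, "with_frontend": False, "custom_member_list": []},
}

def frontends(services, external):
    """Map of frontend name -> advertised port for one side (internal or external)."""
    return {name: svc["port"] for name, svc in services.items()
            if svc.get("enabled")
            and svc.get("with_frontend", True)          # drops acme_client
            and bool(svc.get("external")) == external}

print(frontends(haproxy, external=False))  # {'horizon': '443', 'horizon_redirect': '80'}
print(frontends(haproxy, external=True))   # {'horizon_external': '443', 'horizon_external_redirect': '80'}

Both sides advertise 443 for HTTP and 80 for the redirect, while HAProxy forwards to the horizon backend on listen_port 80.
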
2025-04-17 01:53:24.047405 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.047414 | orchestrator | 2025-04-17 01:53:24.047422 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-04-17 01:53:24.047431 | orchestrator | Thursday 17 April 2025 01:49:22 +0000 (0:00:01.231) 0:02:47.510 ******** 2025-04-17 01:53:24.047440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-17 01:53:24.047451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-17 01:53:24.047464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-17 01:53:24.047474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False,
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-17 01:53:24.047490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-17 01:53:24.047500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-17 01:53:24.047508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-17 01:53:24.047522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-17 01:53:24.047531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-17 01:53:24.047540 | orchestrator | 2025-04-17 01:53:24.047549 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-04-17 01:53:24.047563 | orchestrator | Thursday 17 April 2025 01:49:26 +0000 (0:00:04.399) 0:02:51.909 ******** 2025-04-17 01:53:24.047581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-17 01:53:24.047591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-17 01:53:24.047600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-17 01:53:24.047609 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.047623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-17 01:53:24.047633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-17 01:53:24.047657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-17 01:53:24.047666 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.047675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-17 01:53:24.047684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-17 01:53:24.047694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-17 01:53:24.047703 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.047712 | orchestrator | 2025-04-17 01:53:24.047720 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-04-17 01:53:24.047729 | orchestrator | Thursday 17 April 2025 01:49:27 +0000 (0:00:00.904) 0:02:52.813 ******** 2025-04-17 01:53:24.047741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-17 01:53:24.047754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-17 01:53:24.047769 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.047778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-17 01:53:24.047787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-17 01:53:24.047797 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.047818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-17 01:53:24.047828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-17 01:53:24.047836 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.047846 | orchestrator | 2025-04-17 01:53:24.047854 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
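
Tasks like the one above drop per-service ProxySQL users and rules files on each controller; the log records only that the copies changed, not the files' contents. As a rough, hypothetical illustration of what a users entry carries: the field names below follow ProxySQL's mysql_users admin schema, while the username, password, and hostgroup values are placeholders rather than data from this deployment.

import json

def proxysql_user(username, password, hostgroup):
    # Field names per ProxySQL's mysql_users table; all values are placeholders.
    return {
        "username": username,
        "password": password,            # real deployments use generated secrets
        "default_hostgroup": hostgroup,  # hostgroup holding the writable MariaDB node
        "active": 1,
    }

print(json.dumps({"mysql_users": [proxysql_user("keystone", "REDACTED", 0)]}, indent=2))

In a setup like this, the companion rules config typically pins writes to a single Galera member, which is the usual reason for fronting MariaDB with ProxySQL.
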
2025-04-17 01:53:24.047862 | orchestrator | Thursday 17 April 2025 01:49:28 +0000 (0:00:00.937) 0:02:53.751 ******** 2025-04-17 01:53:24.047871 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.047879 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.047888 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.047896 | orchestrator | 2025-04-17 01:53:24.047905 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-04-17 01:53:24.047913 | orchestrator | Thursday 17 April 2025 01:49:29 +0000 (0:00:01.587) 0:02:55.338 ******** 2025-04-17 01:53:24.047922 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.047930 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.047939 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.047947 | orchestrator | 2025-04-17 01:53:24.047956 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-04-17 01:53:24.047964 | orchestrator | Thursday 17 April 2025 01:49:32 +0000 (0:00:02.189) 0:02:57.528 ******** 2025-04-17 01:53:24.047973 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.047981 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.047990 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.047998 | orchestrator | 2025-04-17 01:53:24.048007 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-04-17 01:53:24.048015 | orchestrator | Thursday 17 April 2025 01:49:32 +0000 (0:00:00.293) 0:02:57.821 ******** 2025-04-17 01:53:24.048024 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.048032 | orchestrator | 2025-04-17 01:53:24.048045 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-04-17 01:53:24.048054 | orchestrator | Thursday 17 April 2025 01:49:33 +0000 (0:00:01.245) 0:02:59.067 ******** 2025-04-17 01:53:24.048063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-17 01:53:24.048082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.048099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-17 01:53:24.048109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.048118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-17 01:53:24.048133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
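
One detail visible in the magnum items above: their haproxy entries use string flags ('enabled': 'yes'), whereas the horizon and keystone entries earlier in the log use Python booleans ('enabled': True). Consumers of these maps therefore need YAML-style truthiness handling; a small sketch, with the accepted spellings chosen by assumption rather than taken from kolla-ansible:

# Normalize the mixed bool/str flags seen in the loop items in this log.
TRUTHY = {"yes", "true", "on", "1"}

def is_enabled(value):
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in TRUTHY

for sample in (True, "yes", "no", "0"):
    print(repr(sample), "->", is_enabled(sample))  # True, True, False, False

Ansible's bool filter accepts the same spellings, which is presumably why both styles behave identically inside the role.
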
2025-04-17 01:53:24.048147 | orchestrator | 2025-04-17 01:53:24.048155 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-04-17 01:53:24.048164 | orchestrator | Thursday 17 April 2025 01:49:37 +0000 (0:00:04.167) 0:03:03.234 ******** 2025-04-17 01:53:24.048177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-17 01:53:24.048186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.048196 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.048205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-17 01:53:24.048220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy':
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.048237 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.048250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-17 01:53:24.048260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.048269 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.048277 | orchestrator | 2025-04-17 01:53:24.048286 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-04-17 01:53:24.048295 | orchestrator | Thursday 17 April 2025 01:49:38 +0000 (0:00:01.058) 0:03:04.292 ******** 2025-04-17 01:53:24.048303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-04-17 01:53:24.048312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-04-17 01:53:24.048325 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.048334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-04-17 01:53:24.048343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9511', 'listen_port': '9511'}})  2025-04-17 01:53:24.048352 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.048361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-04-17 01:53:24.048370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-04-17 01:53:24.048378 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.048387 | orchestrator | 2025-04-17 01:53:24.048395 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-04-17 01:53:24.048404 | orchestrator | Thursday 17 April 2025 01:49:40 +0000 (0:00:01.191) 0:03:05.483 ******** 2025-04-17 01:53:24.048412 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.048426 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.048435 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.048444 | orchestrator | 2025-04-17 01:53:24.048452 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-04-17 01:53:24.048461 | orchestrator | Thursday 17 April 2025 01:49:41 +0000 (0:00:01.373) 0:03:06.857 ******** 2025-04-17 01:53:24.048470 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.048478 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.048487 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.048496 | orchestrator | 2025-04-17 01:53:24.048504 | orchestrator | TASK [include_role : manila] *************************************************** 2025-04-17 01:53:24.048513 | orchestrator | Thursday 17 April 2025 01:49:43 +0000 (0:00:02.186) 0:03:09.044 ******** 2025-04-17 01:53:24.048521 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.048530 | orchestrator | 2025-04-17 01:53:24.048538 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-04-17 01:53:24.048547 | orchestrator | Thursday 17 April 2025 01:49:44 +0000 (0:00:01.167) 0:03:10.211 ******** 2025-04-17 01:53:24.048560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-04-17 01:53:24.048569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.048585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.048594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.048665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-04-17 01:53:24.048680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.048689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.048703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.048712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-04-17 01:53:24.048721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.048735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.048743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.048752 | orchestrator |
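The pattern in the task above — "changed" for manila-api but "skipping" for manila-scheduler, manila-share and manila-data — follows from the service definitions themselves: only manila-api carries a 'haproxy' sub-dict, so only it gets load-balancer config rendered. A minimal Python sketch of that selection (an illustration of the observed behaviour, not the actual haproxy-config role code):

    # Hypothetical filter: services with an enabled 'haproxy' sub-dict get
    # HAProxy config rendered ("changed"); the rest are skipped.
    services = {
        "manila-api": {"enabled": True,
                       "haproxy": {"manila_api": {"enabled": "yes", "port": "8786"}}},
        "manila-scheduler": {"enabled": True},  # no 'haproxy' key -> skipped
        "manila-share": {"enabled": True},      # no 'haproxy' key -> skipped
    }

    def needs_haproxy_config(svc: dict) -> bool:
        return bool(svc.get("enabled")) and "haproxy" in svc

    for name, svc in services.items():
        print(name, "changed" if needs_haproxy_config(svc) else "skipping")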
2025-04-17 01:53:24.048761 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2025-04-17 01:53:24.048770 | orchestrator | Thursday 17 April 2025 01:49:48 +0000 (0:00:03.704) 0:03:13.915 ********
2025-04-17 01:53:24.048783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-04-17 01:53:24.048792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.048801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.048823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-04-17 01:53:24.048837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.048846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.048855 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:53:24.048868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.048990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.049005 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:53:24.049014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-04-17 01:53:24.049033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.049043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.049052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.049061 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:53:24.049070 | orchestrator |
2025-04-17 01:53:24.049078 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2025-04-17 01:53:24.049087 | orchestrator | Thursday 17 April 2025 01:49:49 +0000 (0:00:00.695) 0:03:14.611 ********
2025-04-17 01:53:24.049096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-04-17 01:53:24.049158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-04-17 01:53:24.049171 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:53:24.049188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-04-17 01:53:24.049197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-04-17 01:53:24.049206 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:53:24.049216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-04-17 01:53:24.049224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-04-17 01:53:24.049239 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:53:24.049249 | orchestrator |
2025-04-17 01:53:24.049257 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-04-17 01:53:24.049266 | orchestrator | Thursday 17 April 2025 01:49:50 +0000 (0:00:00.952) 0:03:15.563 ********
2025-04-17 01:53:24.049274 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:53:24.049283 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:53:24.049291 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:53:24.049299 | orchestrator |
2025-04-17 01:53:24.049308 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-04-17 01:53:24.049316 | orchestrator | Thursday 17 April 2025 01:49:51 +0000 (0:00:01.199) 0:03:16.763 ********
2025-04-17 01:53:24.049325 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:53:24.049334 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:53:24.049342 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:53:24.049350 | orchestrator |
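The two ProxySQL tasks above only copy per-service user and rule snippets into ProxySQL's configuration directory on each controller. Conceptually, each user entry corresponds to a row in ProxySQL's mysql_users admin table; a hedged Python sketch of the equivalent admin SQL (the username and hostgroup here are illustrative assumptions, not values taken from this deployment):

    # Illustrative only: what a manila user entry means in ProxySQL terms.
    service_user = {
        "username": "manila",                     # assumed service DB user
        "password": "<from kolla passwords.yml>", # elided secret
        "default_hostgroup": 0,                   # assumed writer hostgroup
    }

    admin_sql = (
        "INSERT INTO mysql_users (username, password, default_hostgroup) "
        "VALUES ('{username}', '{password}', {default_hostgroup});\n"
        "LOAD MYSQL USERS TO RUNTIME;\n"
        "SAVE MYSQL USERS TO DISK;"
    ).format(**service_user)
    print(admin_sql)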
2025-04-17 01:53:24.049359 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-04-17 01:53:24.049367 | orchestrator | Thursday 17 April 2025 01:49:53 +0000 (0:00:02.356) 0:03:19.119 ********
2025-04-17 01:53:24.049376 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 01:53:24.049384 | orchestrator |
2025-04-17 01:53:24.049393 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-04-17 01:53:24.049401 | orchestrator | Thursday 17 April 2025 01:49:55 +0000 (0:00:01.500) 0:03:20.620 ********
2025-04-17 01:53:24.049410 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-04-17 01:53:24.049419 | orchestrator |
2025-04-17 01:53:24.049427 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2025-04-17 01:53:24.049436 | orchestrator | Thursday 17 April 2025 01:49:58 +0000 (0:00:03.159) 0:03:23.779 ********
2025-04-17 01:53:24.049445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-04-17 01:53:24.049503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-04-17 01:53:24.049522 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:53:24.049532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-04-17 01:53:24.049542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-04-17 01:53:24.049551 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:53:24.049606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-04-17 01:53:24.049626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-04-17 01:53:24.049635 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:53:24.049644 | orchestrator |
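The custom_member_list entries in the mariadb items above encode the usual Galera-behind-HAProxy pattern: one writer plus hot standbys marked "backup", health-probed via the clustercheck HTTP endpoint on port 4569 (hence 'option httpchk' in backend_tcp_extra). A small sketch of how such member lines can be derived from the node list, matching the lines shown in the log:

    # Reproduces the member lines from the log: first Galera node active,
    # the rest 'backup'; health check against clustercheck on port 4569.
    nodes = [
        ("testbed-node-0", "192.168.16.10"),
        ("testbed-node-1", "192.168.16.11"),
        ("testbed-node-2", "192.168.16.12"),
    ]

    members = [
        f"server {name} {ip}:3306 check port 4569 inter 2000 rise 2 fall 5"
        + ("" if i == 0 else " backup")
        for i, (name, ip) in enumerate(nodes)
    ]
    print("\n".join(members))

This keeps all writes on a single node at a time, which avoids multi-writer conflicts in Galera while still failing over automatically.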
2025-04-17 01:53:24.049653 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-04-17 01:53:24.049661 | orchestrator | Thursday 17 April 2025 01:50:01 +0000 (0:00:03.576) 0:03:27.356 ********
2025-04-17 01:53:24.049671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-04-17 01:53:24.049730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-04-17 01:53:24.049749 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:53:24.049759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-04-17 01:53:24.049769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-04-17 01:53:24.049778 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:53:24.049879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-04-17 01:53:24.049903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-04-17 01:53:24.049912 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:53:24.049924 | orchestrator |
2025-04-17 01:53:24.049933 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-04-17 01:53:24.049942 | orchestrator | Thursday 17 April 2025 01:50:05 +0000 (0:00:03.274) 0:03:30.630 ********
2025-04-17 01:53:24.049951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-04-17 01:53:24.049960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-04-17 01:53:24.049969 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:53:24.049977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-04-17 01:53:24.049986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-04-17 01:53:24.050001 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:53:24.050087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-04-17 01:53:24.050104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-04-17 01:53:24.050113 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:53:24.050122 | orchestrator |
2025-04-17 01:53:24.050130 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-04-17 01:53:24.050139 | orchestrator | Thursday 17 April 2025 01:50:08 +0000 (0:00:03.285) 0:03:33.916 ********
2025-04-17 01:53:24.050148 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:53:24.050156 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:53:24.050165 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:53:24.050174 | orchestrator |
2025-04-17 01:53:24.050182 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-04-17 01:53:24.050191 | orchestrator | Thursday 17 April 2025 01:50:10 +0000 (0:00:02.064) 0:03:35.980 ********
2025-04-17 01:53:24.050200 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:53:24.050208 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:53:24.050216 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:53:24.050225 | orchestrator |
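A recurring element in these service definitions is the healthcheck dict (interval/retries/start_period/test/timeout, all string-valued seconds), which maps onto a Docker container healthcheck. A hedged sketch of that mapping using the Docker SDK for Python (the conversion shown is an assumption about how such a dict could be applied; kolla's own container layer may handle it differently):

    # Convert a kolla-style healthcheck dict into a docker-py Healthcheck.
    import docker.types

    hc = {"interval": "30", "retries": "3", "start_period": "5",
          "test": ["CMD-SHELL", "/usr/bin/clustercheck"], "timeout": "30"}

    NS = 1_000_000_000  # the Docker API expects durations in nanoseconds
    healthcheck = docker.types.Healthcheck(
        test=hc["test"],
        interval=int(hc["interval"]) * NS,
        timeout=int(hc["timeout"]) * NS,
        retries=int(hc["retries"]),
        start_period=int(hc["start_period"]) * NS,
    )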
2025-04-17 01:53:24.050234 | orchestrator | TASK [include_role : masakari] *************************************************
2025-04-17 01:53:24.050242 | orchestrator | Thursday 17 April 2025 01:50:12 +0000 (0:00:01.829) 0:03:37.809 ********
2025-04-17 01:53:24.050251 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:53:24.050259 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:53:24.050268 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:53:24.050276 | orchestrator |
2025-04-17 01:53:24.050285 | orchestrator | TASK [include_role : memcached] ************************************************
2025-04-17 01:53:24.050293 | orchestrator | Thursday 17 April 2025 01:50:12 +0000 (0:00:00.301) 0:03:38.110 ********
2025-04-17 01:53:24.050301 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 01:53:24.050310 | orchestrator |
2025-04-17 01:53:24.050318 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-04-17 01:53:24.050327 | orchestrator | Thursday 17 April 2025 01:50:14 +0000 (0:00:01.384) 0:03:39.495 ********
2025-04-17 01:53:24.050336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-04-17 01:53:24.050352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-04-17 01:53:24.050409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-04-17 01:53:24.050421 | orchestrator |
2025-04-17 01:53:24.050429 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-04-17 01:53:24.050437 | orchestrator | Thursday 17 April 2025 01:50:15 +0000 (0:00:01.608) 0:03:41.104 ********
2025-04-17 01:53:24.050445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-04-17 01:53:24.050454 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:53:24.050462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-04-17 01:53:24.050470 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:53:24.050478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-04-17 01:53:24.050494 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:53:24.050502 | orchestrator |
2025-04-17 01:53:24.050510 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-04-17 01:53:24.050518 | orchestrator | Thursday 17 April 2025 01:50:16 +0000 (0:00:00.564) 0:03:41.668 ********
2025-04-17 01:53:24.050526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-04-17 01:53:24.050535 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:53:24.050543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-04-17 01:53:24.050551 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:53:24.050559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-04-17 01:53:24.050567 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:53:24.050575 | orchestrator |
2025-04-17 01:53:24.050627 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-04-17 01:53:24.050639 | orchestrator | Thursday 17 April 2025 01:50:16 +0000 (0:00:00.721) 0:03:42.389 ********
2025-04-17 01:53:24.050647 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:53:24.050655 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:53:24.050663 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:53:24.050671 | orchestrator |
2025-04-17 01:53:24.050679 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-04-17 01:53:24.050686 | orchestrator | Thursday 17 April 2025 01:50:17 +0000 (0:00:00.832) 0:03:43.222 ********
2025-04-17 01:53:24.050694 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:53:24.050703 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:53:24.050711 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:53:24.050718 | orchestrator |
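Note that memcached's 'haproxy' entry above has 'enabled': False (with 'active_passive': True), so no frontend is written for it and clients reach memcached directly; its container health is verified with healthcheck_listen rather than healthcheck_curl. A rough Python analogue of such a listen check, assuming localhost:11211 (the real healthcheck_listen script may inspect listening sockets rather than connect):

    # Toy "is something listening on this port" probe, TCP-connect based.
    import socket

    def is_listening(host: str = "127.0.0.1", port: int = 11211,
                     timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(is_listening())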
2025-04-17 01:53:24.050726 | orchestrator | TASK [include_role : mistral] **************************************************
2025-04-17 01:53:24.050734 | orchestrator | Thursday 17 April 2025 01:50:19 +0000 (0:00:01.478) 0:03:44.700 ********
2025-04-17 01:53:24.050742 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:53:24.050750 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:53:24.050758 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:53:24.050766 | orchestrator |
2025-04-17 01:53:24.050774 | orchestrator | TASK [include_role : neutron] **************************************************
2025-04-17 01:53:24.050782 | orchestrator | Thursday 17 April 2025 01:50:19 +0000 (0:00:00.299) 0:03:45.000 ********
2025-04-17 01:53:24.050790 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 01:53:24.050798 | orchestrator |
2025-04-17 01:53:24.050822 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-04-17 01:53:24.050831 | orchestrator | Thursday 17 April 2025 01:50:21 +0000 (0:00:01.526) 0:03:46.526 ********
2025-04-17 01:53:24.050840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-17 01:53:24.050855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.050864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.050923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.050936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-04-17 01:53:24.050945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-17 01:53:24.050962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.050975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-17 01:53:24.050985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.051025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-17 01:53:24.051037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.051055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.051064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.051073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-17 01:53:24.051081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
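The neutron items listed so far show this is an OVN-based deployment: only neutron-server and neutron-ovn-metadata-agent carry 'enabled': True, while the classic OVS, linuxbridge, DHCP and L3 agents are all disabled. A toy filter over such a services dict, predicting which neutron containers will actually run on a host (values taken from the log; the selection logic itself is an illustrative assumption):

    # Which neutron services are actually enabled in this testbed?
    neutron_services = {
        "neutron-server": {"enabled": True},
        "neutron-ovn-metadata-agent": {"enabled": True},
        "neutron-openvswitch-agent": {"enabled": False},
        "neutron-linuxbridge-agent": {"enabled": False},
        "neutron-dhcp-agent": {"enabled": False},
        "neutron-l3-agent": {"enabled": False},
    }
    running = [name for name, svc in neutron_services.items() if svc["enabled"]]
    print(running)  # ['neutron-server', 'neutron-ovn-metadata-agent']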
2025-04-17 01:53:24.051122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.051132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.051146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-17 01:53:24.051155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-17 01:53:24.051164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-17 01:53:24.051173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-17 01:53:24.051211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.051221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-04-17 01:53:24.051235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-04-17 01:53:24.051244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.051253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.051261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-04-17 01:53:24.051330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-04-17 01:53:24.051343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.051357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-04-17 01:53:24.051366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.051374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.051382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.051437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-17 01:53:24.051455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-17 01:53:24.051472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.051481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.051490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-17 01:53:24.051498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-17 01:53:24.051566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-17 01:53:24.051622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.051631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-17 01:53:24.051640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.051649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-17 01:53:24.051657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.051723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.051741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-17 01:53:24.051750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.051759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-17 01:53:24.051767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-17 01:53:24.051781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.051790 | orchestrator | 2025-04-17 01:53:24.051798 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-04-17 01:53:24.051849 | orchestrator | Thursday 17 April 2025 01:50:26 +0000 (0:00:05.162) 0:03:51.689 ******** 2025-04-17 01:53:24.051917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-17 01:53:24.051930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.051938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.051954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.051963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-17 01:53:24.052023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-17 01:53:24.052045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-17 01:53:24.052053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-17 
01:53:24.052061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-17 01:53:24.052080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-17 01:53:24.052142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.052186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-17 01:53:24.052254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-17 01:53:24.052267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-17 01:53:24.052300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-17 01:53:24.052307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-17 01:53:24.052319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-17 01:53:24.052367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052386 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.052393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-17 01:53:24.052407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-17 01:53:24.052469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.052479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-17 01:53:24.052494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 
'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-17 01:53:24.052581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-17 01:53:24.052589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-17 01:53:24.052602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052622 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.052630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-17 01:53:24.052679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-17 01:53:24.052689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052697 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-17 01:53:24.052711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.052732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-17 01:53:24.052740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-17 01:53:24.052820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-17 01:53:24.052828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.052842 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.052849 | orchestrator | 2025-04-17 01:53:24.052857 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-04-17 01:53:24.052864 | orchestrator | Thursday 17 April 2025 01:50:28 +0000 (0:00:01.893) 0:03:53.583 ******** 2025-04-17 01:53:24.052871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-04-17 01:53:24.052881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-04-17 01:53:24.052888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-04-17 01:53:24.052895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-04-17 01:53:24.052902 | orchestrator | skipping: [testbed-node-2] 
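The long runs of "skipping" above are expected rather than a deploy failure: each loop item printed by the haproxy-config role is one service definition from the Neutron services dict, and in this OVN-based testbed only neutron-server is both enabled and declares an enabled haproxy listener, so it is the only item that reports "changed"; every agent entry (DHCP, L3, metadata, SR-IOV, and so on) is either disabled or carries no haproxy mapping and is skipped on all three nodes. The sketch below approximates that filter in Python; the function names and the to_bool helper are illustrative assumptions, not kolla-ansible source — the real conditions live in the haproxy-config role's tasks and use Ansible's "| bool" cast, which is why the string value enabled: 'no' on neutron-tls-proxy also counts as disabled.

    # Hypothetical sketch of the skip decision visible in the log above
    # (an approximation, not the kolla-ansible implementation).
    def to_bool(v):
        # Loosely mimic Ansible's "| bool" filter: False/'no'/'false' -> False.
        return str(v).strip().lower() in ("1", "true", "yes", "on")

    def wants_haproxy_config(item):
        """Return True if a loop item should get haproxy config rendered."""
        svc = item["value"]
        if not to_bool(svc.get("enabled", False)):
            return False  # disabled services (most Neutron agents here) skip
        listeners = svc.get("haproxy", {})  # agents usually have no such key
        return any(to_bool(l.get("enabled", False)) for l in listeners.values())

    # Values abridged from the loop items printed above:
    neutron_server = {"key": "neutron-server", "value": {
        "enabled": True,
        "haproxy": {
            "neutron_server": {"enabled": True, "port": "9696"},
            "neutron_server_external": {"enabled": True, "port": "9696"},
        },
    }}
    neutron_dhcp_agent = {"key": "neutron-dhcp-agent", "value": {"enabled": False}}
    neutron_tls_proxy = {"key": "neutron-tls-proxy", "value": {
        "enabled": "no",
        "haproxy": {"neutron_tls_proxy": {"enabled": False}},
    }}

    assert wants_haproxy_config(neutron_server)          # -> "changed"
    assert not wants_haproxy_config(neutron_dhcp_agent)  # -> "skipping"
    assert not wants_haproxy_config(neutron_tls_proxy)   # -> "skipping"

The same pattern accounts for the neighboring tasks: "Add configuration for neutron when using single external frontend" and "Configuring firewall for neutron" skip every item on every node in this run, while the ProxySQL users/rules tasks that follow report "changed" on all three nodes because they apply unconditionally to the enabled database-backed service.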
2025-04-17 01:53:24.052909 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.052916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-04-17 01:53:24.052923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-04-17 01:53:24.052930 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.052937 | orchestrator | 2025-04-17 01:53:24.052944 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-04-17 01:53:24.052954 | orchestrator | Thursday 17 April 2025 01:50:30 +0000 (0:00:02.088) 0:03:55.672 ******** 2025-04-17 01:53:24.052961 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.052968 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.052994 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.053006 | orchestrator | 2025-04-17 01:53:24.053013 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-04-17 01:53:24.053020 | orchestrator | Thursday 17 April 2025 01:50:31 +0000 (0:00:01.408) 0:03:57.080 ******** 2025-04-17 01:53:24.053027 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.053034 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.053041 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.053048 | orchestrator | 2025-04-17 01:53:24.053055 | orchestrator | TASK [include_role : placement] ************************************************ 2025-04-17 01:53:24.053062 | orchestrator | Thursday 17 April 2025 01:50:33 +0000 (0:00:02.312) 0:03:59.392 ******** 2025-04-17 01:53:24.053069 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.053076 | orchestrator | 2025-04-17 01:53:24.053083 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-04-17 01:53:24.053090 | orchestrator | Thursday 17 April 2025 01:50:35 +0000 (0:00:01.592) 0:04:00.985 ******** 2025-04-17 01:53:24.053097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.053109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.053116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.053123 | orchestrator | 2025-04-17 01:53:24.053130 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-04-17 01:53:24.053137 | orchestrator | Thursday 17 April 2025 01:50:39 +0000 (0:00:03.674) 0:04:04.659 ******** 2025-04-17 01:53:24.053168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.053177 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.053185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.053196 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.053203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.053210 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.053217 | orchestrator | 2025-04-17 01:53:24.053224 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-04-17 01:53:24.053231 | orchestrator | Thursday 17 April 2025 01:50:39 +0000 (0:00:00.497) 0:04:05.156 ******** 2025-04-17 01:53:24.053238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-17 01:53:24.053245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-17 01:53:24.053253 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.053260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-17 01:53:24.053267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-17 01:53:24.053274 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.053281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-17 01:53:24.053288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-17 01:53:24.053295 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.053302 | orchestrator | 2025-04-17 01:53:24.053309 | orchestrator 
| TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-04-17 01:53:24.053333 | orchestrator | Thursday 17 April 2025 01:50:40 +0000 (0:00:01.159) 0:04:06.316 ******** 2025-04-17 01:53:24.053341 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.053348 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.053355 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.053361 | orchestrator | 2025-04-17 01:53:24.053368 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-04-17 01:53:24.053375 | orchestrator | Thursday 17 April 2025 01:50:42 +0000 (0:00:01.150) 0:04:07.466 ******** 2025-04-17 01:53:24.053382 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.053393 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.053400 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.053407 | orchestrator | 2025-04-17 01:53:24.053414 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-04-17 01:53:24.053421 | orchestrator | Thursday 17 April 2025 01:50:44 +0000 (0:00:02.331) 0:04:09.797 ******** 2025-04-17 01:53:24.053429 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.053437 | orchestrator | 2025-04-17 01:53:24.053445 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-04-17 01:53:24.053453 | orchestrator | Thursday 17 April 2025 01:50:45 +0000 (0:00:01.620) 0:04:11.418 ******** 2025-04-17 01:53:24.053461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.053475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.053483 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.053508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.053521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.053530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.053544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.053553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.053578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.053592 | orchestrator | 2025-04-17 01:53:24.053600 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-04-17 01:53:24.053608 | orchestrator | Thursday 17 April 2025 01:50:50 +0000 (0:00:04.986) 0:04:16.404 ******** 2025-04-17 01:53:24.053616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.053631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.053639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.053647 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.053655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.053686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.053696 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.053704 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.053718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.053726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.053734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.053746 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.053754 | orchestrator | 2025-04-17 01:53:24.053762 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-04-17 01:53:24.053770 | orchestrator | Thursday 17 April 2025 01:50:51 +0000 (0:00:01.012) 
0:04:17.417 ******** 2025-04-17 01:53:24.053778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-17 01:53:24.053815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-17 01:53:24.053824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-17 01:53:24.053831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-17 01:53:24.053838 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.053846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-17 01:53:24.053852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-17 01:53:24.053860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-17 01:53:24.053867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-17 01:53:24.053874 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.053882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-17 01:53:24.053889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-17 01:53:24.053896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-17 01:53:24.053903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-17 01:53:24.053910 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.053916 | orchestrator | 2025-04-17 01:53:24.053923 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-04-17 01:53:24.053930 | orchestrator | Thursday 17 April 2025 01:50:53 +0000 (0:00:01.210) 0:04:18.627 ******** 
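The proxysql-config tasks in this run copy a per-service users file and a rules file for ProxySQL. As a rough illustration of what such a pair carries — the field names follow ProxySQL's own mysql_users and mysql_query_rules admin tables, but the concrete values and the file layout kolla-ansible renders are assumptions, not taken from this job:

    # Conceptual shape only; placeholders, not the rendered kolla-ansible files.
    nova_proxysql_user = {
        "username": "nova",          # service DB account (placeholder)
        "password": "<secret>",      # elided
        "default_hostgroup": 1,      # e.g. the writer hostgroup (assumption)
    }
    nova_proxysql_rule = {
        "rule_id": 100,              # arbitrary example id
        "schemaname": "nova",        # route queries for this schema...
        "destination_hostgroup": 1,  # ...to the writer hostgroup (assumption)
        "apply": 1,                  # stop evaluating further rules on match
    }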
2025-04-17 01:53:24.053937 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.053944 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.053950 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.053957 | orchestrator | 2025-04-17 01:53:24.053964 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-04-17 01:53:24.053977 | orchestrator | Thursday 17 April 2025 01:50:54 +0000 (0:00:01.479) 0:04:20.107 ******** 2025-04-17 01:53:24.053984 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.053991 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.053998 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.054005 | orchestrator | 2025-04-17 01:53:24.054011 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-04-17 01:53:24.054039 | orchestrator | Thursday 17 April 2025 01:50:56 +0000 (0:00:02.348) 0:04:22.455 ******** 2025-04-17 01:53:24.054047 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.054053 | orchestrator | 2025-04-17 01:53:24.054060 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-04-17 01:53:24.054067 | orchestrator | Thursday 17 April 2025 01:50:58 +0000 (0:00:01.628) 0:04:24.084 ******** 2025-04-17 01:53:24.054074 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-04-17 01:53:24.054082 | orchestrator | 2025-04-17 01:53:24.054093 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-04-17 01:53:24.054100 | orchestrator | Thursday 17 April 2025 01:50:59 +0000 (0:00:01.241) 0:04:25.325 ******** 2025-04-17 01:53:24.054125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-17 01:53:24.054134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-17 01:53:24.054142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-17 01:53:24.054149 | orchestrator | 2025-04-17 
01:53:24.054156 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-04-17 01:53:24.054163 | orchestrator | Thursday 17 April 2025 01:51:03 +0000 (0:00:04.110) 0:04:29.435 ******** 2025-04-17 01:53:24.054170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-17 01:53:24.054177 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.054190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-17 01:53:24.054203 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.054210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-17 01:53:24.054217 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.054224 | orchestrator | 2025-04-17 01:53:24.054231 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-04-17 01:53:24.054238 | orchestrator | Thursday 17 April 2025 01:51:05 +0000 (0:00:01.483) 0:04:30.919 ******** 2025-04-17 01:53:24.054245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-17 01:53:24.054252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-17 01:53:24.054260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-17 01:53:24.054267 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.054293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-17 01:53:24.054301 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.054309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-17 01:53:24.054316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-17 01:53:24.054323 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.054330 | orchestrator | 2025-04-17 01:53:24.054337 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-17 01:53:24.054344 | orchestrator | Thursday 17 April 2025 01:51:07 +0000 (0:00:01.932) 0:04:32.852 ******** 2025-04-17 01:53:24.054351 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.054358 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.054365 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.054371 | orchestrator | 2025-04-17 01:53:24.054378 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-17 01:53:24.054385 | orchestrator | Thursday 17 April 2025 01:51:10 +0000 (0:00:02.852) 0:04:35.704 ******** 2025-04-17 01:53:24.054392 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.054399 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.054406 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.054413 | orchestrator | 2025-04-17 01:53:24.054420 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-04-17 01:53:24.054431 | orchestrator | Thursday 17 April 2025 01:51:13 +0000 (0:00:03.589) 0:04:39.294 ******** 2025-04-17 01:53:24.054442 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-04-17 01:53:24.054449 | orchestrator | 2025-04-17 01:53:24.054456 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-04-17 01:53:24.054463 | orchestrator | Thursday 17 April 2025 01:51:15 +0000 (0:00:01.291) 0:04:40.586 ******** 2025-04-17 01:53:24.054470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-17 01:53:24.054478 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.054485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-17 01:53:24.054492 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.054499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-17 01:53:24.054506 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.054513 | orchestrator | 2025-04-17 01:53:24.054520 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-04-17 01:53:24.054527 | orchestrator | Thursday 17 April 2025 01:51:16 +0000 (0:00:01.556) 0:04:42.143 ******** 2025-04-17 01:53:24.054549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-17 01:53:24.054558 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.054567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-17 01:53:24.054575 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.054585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-17 01:53:24.054597 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.054604 | orchestrator | 2025-04-17 01:53:24.054611 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-04-17 01:53:24.054618 | orchestrator | Thursday 17 April 2025 01:51:18 +0000 (0:00:01.646) 0:04:43.789 ******** 2025-04-17 01:53:24.054625 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.054632 | orchestrator | skipping: [testbed-node-1] 2025-04-17 
01:53:24.054639 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.054646 | orchestrator | 2025-04-17 01:53:24.054652 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-17 01:53:24.054659 | orchestrator | Thursday 17 April 2025 01:51:20 +0000 (0:00:02.073) 0:04:45.863 ******** 2025-04-17 01:53:24.054666 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:53:24.054674 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:53:24.054680 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:53:24.054691 | orchestrator | 2025-04-17 01:53:24.054698 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-17 01:53:24.054705 | orchestrator | Thursday 17 April 2025 01:51:23 +0000 (0:00:02.822) 0:04:48.685 ******** 2025-04-17 01:53:24.054711 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:53:24.054718 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:53:24.054725 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:53:24.054732 | orchestrator | 2025-04-17 01:53:24.054739 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-04-17 01:53:24.054746 | orchestrator | Thursday 17 April 2025 01:51:26 +0000 (0:00:03.573) 0:04:52.259 ******** 2025-04-17 01:53:24.054753 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-04-17 01:53:24.054760 | orchestrator | 2025-04-17 01:53:24.054766 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-04-17 01:53:24.054773 | orchestrator | Thursday 17 April 2025 01:51:28 +0000 (0:00:01.452) 0:04:53.711 ******** 2025-04-17 01:53:24.054780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-17 01:53:24.054787 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.054794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-17 01:53:24.054814 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.054839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-17 01:53:24.054853 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.054860 | orchestrator | 2025-04-17 01:53:24.054867 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-04-17 01:53:24.054874 | orchestrator | Thursday 17 April 2025 01:51:30 +0000 (0:00:01.830) 0:04:55.541 ******** 2025-04-17 01:53:24.054881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-17 01:53:24.054889 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.054896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-17 01:53:24.054903 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.054910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-17 01:53:24.054917 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.054924 | orchestrator | 2025-04-17 01:53:24.054931 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-04-17 01:53:24.054938 | orchestrator | Thursday 17 April 2025 01:51:31 +0000 (0:00:01.818) 0:04:57.360 ******** 2025-04-17 01:53:24.054944 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.054951 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.054958 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.054965 | orchestrator | 2025-04-17 01:53:24.054972 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-17 01:53:24.054979 | orchestrator | Thursday 17 April 2025 01:51:33 +0000 (0:00:01.680) 0:04:59.040 ******** 2025-04-17 01:53:24.054985 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:53:24.054992 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:53:24.054999 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:53:24.055006 | orchestrator | 2025-04-17 01:53:24.055013 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-17 
01:53:24.055020 | orchestrator | Thursday 17 April 2025 01:51:35 +0000 (0:00:02.399) 0:05:01.440 ******** 2025-04-17 01:53:24.055026 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:53:24.055033 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:53:24.055040 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:53:24.055047 | orchestrator | 2025-04-17 01:53:24.055054 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-04-17 01:53:24.055065 | orchestrator | Thursday 17 April 2025 01:51:39 +0000 (0:00:03.379) 0:05:04.820 ******** 2025-04-17 01:53:24.055076 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.055083 | orchestrator | 2025-04-17 01:53:24.055090 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-04-17 01:53:24.055097 | orchestrator | Thursday 17 April 2025 01:51:41 +0000 (0:00:01.654) 0:05:06.474 ******** 2025-04-17 01:53:24.055127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.055137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-17 01:53:24.055144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-17 01:53:24.055151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-17 01:53:24.055164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.055172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.055200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.055210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-17 01:53:24.055225 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-17 01:53:24.055233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-17 01:53:24.055240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-17 01:53:24.055272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-17 01:53:24.055280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-17 01:53:24.055307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.055321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.055329 | orchestrator | 2025-04-17 01:53:24.055336 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-04-17 01:53:24.055343 | orchestrator | Thursday 17 April 2025 01:51:45 +0000 (0:00:04.393) 0:05:10.867 ******** 2025-04-17 01:53:24.055350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.055358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-17 01:53:24.055371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-17 01:53:24.055379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-17 01:53:24.055402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.055410 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.055423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.055431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-17 01:53:24.055438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-17 01:53:24.055450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 
'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-17 01:53:24.055458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.055465 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.055494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.055503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-17 01:53:24.055510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-17 01:53:24.055517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-17 01:53:24.055530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-17 01:53:24.055537 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.055544 | orchestrator | 2025-04-17 01:53:24.055551 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-04-17 01:53:24.055558 | orchestrator | Thursday 17 April 2025 01:51:46 +0000 (0:00:00.923) 0:05:11.791 ******** 2025-04-17 01:53:24.055565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-17 01:53:24.055573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-17 01:53:24.055580 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.055587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-17 01:53:24.055594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-17 01:53:24.055602 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.055624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-17 01:53:24.055632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-17 01:53:24.055639 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.055646 | orchestrator | 2025-04-17 01:53:24.055653 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-04-17 01:53:24.055660 | orchestrator | Thursday 17 April 2025 01:51:47 +0000 (0:00:01.293) 0:05:13.085 ******** 2025-04-17 01:53:24.055667 | 
orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.055674 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.055681 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.055688 | orchestrator | 2025-04-17 01:53:24.055695 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-04-17 01:53:24.055702 | orchestrator | Thursday 17 April 2025 01:51:49 +0000 (0:00:01.397) 0:05:14.483 ******** 2025-04-17 01:53:24.055708 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.055715 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.055722 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.055729 | orchestrator | 2025-04-17 01:53:24.055736 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-04-17 01:53:24.055747 | orchestrator | Thursday 17 April 2025 01:51:51 +0000 (0:00:02.329) 0:05:16.812 ******** 2025-04-17 01:53:24.055754 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.055761 | orchestrator | 2025-04-17 01:53:24.055768 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-04-17 01:53:24.055775 | orchestrator | Thursday 17 April 2025 01:51:53 +0000 (0:00:01.668) 0:05:18.481 ******** 2025-04-17 01:53:24.055782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-17 01:53:24.055796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-17 01:53:24.055843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-17 01:53:24.055871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-17 01:53:24.055886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-17 01:53:24.055901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-17 01:53:24.055908 | orchestrator | 2025-04-17 01:53:24.055915 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-04-17 01:53:24.055923 | orchestrator | Thursday 17 April 2025 01:51:59 +0000 (0:00:06.255) 0:05:24.737 ******** 2025-04-17 01:53:24.055946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-17 01:53:24.055955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-17 01:53:24.055972 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.055979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-17 01:53:24.055987 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-17 01:53:24.055994 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.056001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-17 01:53:24.056030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-17 01:53:24.056046 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.056053 | orchestrator | 2025-04-17 01:53:24.056060 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-04-17 01:53:24.056067 | orchestrator | Thursday 17 April 2025 01:52:00 +0000 (0:00:00.870) 0:05:25.607 ******** 2025-04-17 01:53:24.056074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-17 01:53:24.056081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-17 01:53:24.056088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-17 01:53:24.056095 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.056103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-17 01:53:24.056110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-17 01:53:24.056117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-17 01:53:24.056124 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.056134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-17 01:53:24.056141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-17 01:53:24.056148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-17 01:53:24.056155 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.056162 | orchestrator | 2025-04-17 01:53:24.056169 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-04-17 01:53:24.056176 | orchestrator | Thursday 17 April 2025 01:52:01 +0000 (0:00:01.290) 0:05:26.898 ******** 2025-04-17 01:53:24.056183 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.056189 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.056196 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.056203 | orchestrator | 2025-04-17 01:53:24.056210 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-04-17 01:53:24.056217 | orchestrator | Thursday 17 April 2025 01:52:01 +0000 (0:00:00.365) 0:05:27.263 ******** 2025-04-17 01:53:24.056223 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.056251 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.056258 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.056265 | orchestrator | 
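The pattern above repeats for every service in this run: each service role hands its dict of service definitions to the shared haproxy-config and proxysql-config roles. Every 'haproxy' sub-dict becomes one HAProxy frontend/backend pair (entries with 'external': True are additionally published under their 'external_fqdn'), the 'healthcheck' dict becomes the container's Docker healthcheck, and the ProxySQL tasks only render config for services that own database users, which is why they report "changed" for octavia but "skipping" for opensearch. A minimal sketch of the opensearch definition as YAML, reassembled from the loop items above (the variable name opensearch_services and the defaults layout follow kolla-ansible convention; the actual defaults file is not shown in this log):

opensearch_services:
  opensearch:
    container_name: opensearch
    group: opensearch
    enabled: true
    image: registry.osism.tech/kolla/release/opensearch:2.18.0.20241206
    environment:
      OPENSEARCH_JAVA_OPTS: "-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true"
    volumes:
      - "/etc/kolla/opensearch/:/var/lib/kolla/config_files/"
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "opensearch:/var/lib/opensearch/data"
      - "kolla_logs:/var/log/kolla/"
    healthcheck:              # rendered into the container's Docker healthcheck
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"]  # node-0's API address; .11/.12 on the other nodes
      timeout: "30"
    haproxy:                  # consumed by the haproxy-config role above
      opensearch:
        enabled: true
        mode: http
        external: false       # internal VIP only; no *_external entry for this endpoint
        port: "9200"
        frontend_http_extra:
          - "option dontlog-normal"   # keep health probes out of the HAProxy access log

The "Add configuration ... when using single external frontend" tasks skip on every node, presumably because this testbed publishes each external endpoint on its own listen port of api.testbed.osism.xyz instead of multiplexing them through one shared external frontend.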
2025-04-17 01:53:24.056272 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-04-17 01:53:24.056279 | orchestrator | Thursday 17 April 2025 01:52:03 +0000 (0:00:01.295) 0:05:28.559 ******** 2025-04-17 01:53:24.056302 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.056310 | orchestrator | 2025-04-17 01:53:24.056317 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-04-17 01:53:24.056323 | orchestrator | Thursday 17 April 2025 01:52:04 +0000 (0:00:01.550) 0:05:30.109 ******** 2025-04-17 01:53:24.056329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-17 01:53:24.056336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-17 01:53:24.056342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-17 01:53:24.056368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-17 01:53:24.056395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-17 01:53:24.056402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-17 01:53:24.056422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-17 01:53:24.056428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-17 01:53:24.056439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-17 01:53:24.056480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-17 01:53:24.056486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-17 01:53:24.056493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-17 01:53:24.056537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 
'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-17 01:53:24.056550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-17 01:53:24.056557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-17 01:53:24.056588 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-17 01:53:24.056602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-17 01:53:24.056608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-17 01:53:24.056642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056648 | orchestrator | 2025-04-17 01:53:24.056654 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-04-17 01:53:24.056661 | orchestrator | Thursday 17 April 2025 01:52:08 +0000 (0:00:04.103) 0:05:34.213 ******** 2025-04-17 01:53:24.056667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-17 01:53:24.056674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-17 01:53:24.056680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-17 01:53:24.056711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-17 01:53:24.056717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-17 01:53:24.056724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
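Every item in this loop is skipped: the extra per-service frontend configuration is only rendered when a single shared external frontend is in use, which this testbed does not enable. A minimal Python sketch of that gating logic, assuming a kolla-style service map shaped like the loop items above and a boolean flag comparable to kolla-ansible's haproxy_single_external_frontend (names here are illustrative, not kolla code):

def single_frontend_items(services, single_external_frontend):
    """Yield services that would receive extra single-external-frontend config.

    `services` is a dict shaped like the loop items above; with the flag off,
    as in this run, every item is skipped.
    """
    for name, service in services.items():
        if not single_external_frontend:
            continue  # matches the skipping results logged above
        endpoints = service.get('haproxy', {})
        # Only services exposing an enabled external endpoint would qualify.
        if any(ep.get('external') and ep.get('enabled') for ep in endpoints.values()):
            yield name, service
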
2025-04-17 01:53:24.056737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-17 01:53:24.056754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056760 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.056769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-17 01:53:24.056776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-17 01:53:24.056782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-17 01:53:24.056821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-17 01:53:24.056831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-17 01:53:24.056837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-17 01:53:24.056862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056872 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.056879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-17 01:53:24.056885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-17 01:53:24.056891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-17 01:53:24.056918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-17 01:53:24.056928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  
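For the endpoints that are enabled, the "Copying over prometheus haproxy config" task earlier in the play renders each haproxy sub-dict into an HAProxy frontend/backend section. A hand-written sketch of how the prometheus_alertmanager entry above might be rendered; kolla-ansible's real template handles many more options, and the VIP placeholder here is illustrative:

def render_listen_block(name, ep, backend_hosts):
    """Render a minimal HAProxy 'listen' section from a kolla-style haproxy dict."""
    lines = [
        f"listen {name}",
        f"    mode {ep.get('mode', 'http')}",
        f"    bind <kolla_internal_vip>:{ep.get('listen_port', ep['port'])}",
    ]
    if ep.get('auth_user'):
        # auth_user/auth_pass become a userlist with basic auth in the real template
        lines.append(f"    # basic auth for user {ep['auth_user']} would go here")
    for i, host in enumerate(backend_hosts):
        # active_passive marks all but the first backend as 'backup'
        backup = " backup" if ep.get('active_passive') and i > 0 else ""
        lines.append(f"    server {host} {host}:{ep['port']} check{backup}")
    return "\n".join(lines)

print(render_listen_block(
    "prometheus_alertmanager",
    {"mode": "http", "port": "9093", "auth_user": "admin", "active_passive": True},
    ["testbed-node-0", "testbed-node-1", "testbed-node-2"]))
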
2025-04-17 01:53:24.056935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-17 01:53:24.056963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-17 01:53:24.056969 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.056976 | orchestrator | 2025-04-17 01:53:24.056982 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-04-17 01:53:24.056991 | orchestrator | Thursday 17 April 2025 01:52:09 +0000 (0:00:01.118) 0:05:35.331 ******** 2025-04-17 01:53:24.056998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-17 01:53:24.057008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-04-17 01:53:24.057014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-17 01:53:24.057021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-17 01:53:24.057028 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.057035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-17 01:53:24.057041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-04-17 01:53:24.057047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-17 01:53:24.057054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-17 01:53:24.057060 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.057067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-17 01:53:24.057076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-04-17 01:53:24.057083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-17 01:53:24.057091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-17 01:53:24.057098 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.057104 | orchestrator | 2025-04-17 01:53:24.057110 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-04-17 01:53:24.057116 | orchestrator | Thursday 17 April 2025 01:52:11 +0000 (0:00:01.240) 0:05:36.572 ******** 2025-04-17 01:53:24.057122 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.057128 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.057134 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.057143 | orchestrator | 2025-04-17 01:53:24.057149 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-04-17 01:53:24.057155 | orchestrator | Thursday 17 April 2025 01:52:11 +0000 (0:00:00.552) 
0:05:37.125 ******** 2025-04-17 01:53:24.057165 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.057171 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.057177 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.057183 | orchestrator | 2025-04-17 01:53:24.057189 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-04-17 01:53:24.057195 | orchestrator | Thursday 17 April 2025 01:52:13 +0000 (0:00:01.446) 0:05:38.572 ******** 2025-04-17 01:53:24.057201 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.057207 | orchestrator | 2025-04-17 01:53:24.057213 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-04-17 01:53:24.057219 | orchestrator | Thursday 17 April 2025 01:52:14 +0000 (0:00:01.549) 0:05:40.121 ******** 2025-04-17 01:53:24.057226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-17 01:53:24.057232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-17 01:53:24.057246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-17 01:53:24.057253 | orchestrator | 2025-04-17 01:53:24.057259 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-04-17 01:53:24.057271 | orchestrator | Thursday 17 April 2025 01:52:17 +0000 (0:00:02.749) 0:05:42.870 ******** 2025-04-17 01:53:24.057278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-17 01:53:24.057284 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.057290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-17 01:53:24.057297 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.057303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-17 01:53:24.057314 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.057321 | orchestrator | 2025-04-17 01:53:24.057327 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-04-17 01:53:24.057333 | orchestrator | Thursday 17 April 2025 01:52:18 +0000 (0:00:00.673) 0:05:43.543 ******** 2025-04-17 01:53:24.057339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-17 01:53:24.057345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-17 01:53:24.057352 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.057358 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.057370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-17 01:53:24.057377 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.057383 | orchestrator | 2025-04-17 01:53:24.057389 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-04-17 01:53:24.057396 | orchestrator | Thursday 17 April 2025 01:52:19 +0000 (0:00:01.091) 0:05:44.635 ******** 2025-04-17 01:53:24.057402 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.057408 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.057414 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.057420 | orchestrator | 2025-04-17 01:53:24.057426 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-04-17 01:53:24.057432 | orchestrator | Thursday 17 April 2025 01:52:19 +0000 (0:00:00.469) 0:05:45.104 ******** 2025-04-17 01:53:24.057438 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.057444 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.057450 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.057456 | orchestrator | 2025-04-17 01:53:24.057462 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-04-17 01:53:24.057468 | orchestrator | Thursday 17 April 2025 01:52:21 +0000 (0:00:01.707) 0:05:46.812 ******** 2025-04-17 01:53:24.057474 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:53:24.057480 | orchestrator | 2025-04-17 01:53:24.057486 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-04-17 01:53:24.057492 | orchestrator | Thursday 17 April 2025 01:52:23 +0000 (0:00:01.909) 0:05:48.721 ******** 2025-04-17 01:53:24.057499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.057506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.057512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.057525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.057537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.057544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-17 01:53:24.057550 | orchestrator | 2025-04-17 01:53:24.057556 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-04-17 01:53:24.057562 | orchestrator | Thursday 17 April 2025 01:52:30 +0000 (0:00:07.497) 0:05:56.219 ******** 2025-04-17 01:53:24.057569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.057582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.057588 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.057599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.057606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.057612 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.057619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.057634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-17 01:53:24.057640 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.057647 | orchestrator | 2025-04-17 01:53:24.057653 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-04-17 01:53:24.057659 | orchestrator | Thursday 17 April 2025 01:52:32 +0000 (0:00:01.423) 0:05:57.643 ******** 2025-04-17 01:53:24.057665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-17 01:53:24.057672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-17 01:53:24.057678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-17 01:53:24.057684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-17 01:53:24.057690 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.057696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-17 01:53:24.057702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-17 01:53:24.057709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-17 01:53:24.057773 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-17 01:53:24.057780 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.057786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-17 01:53:24.057797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-17 01:53:24.057815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-17 01:53:24.057821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-17 01:53:24.057827 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.057834 | orchestrator | 2025-04-17 01:53:24.057840 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-04-17 01:53:24.057846 | orchestrator | Thursday 17 April 2025 01:52:33 +0000 (0:00:01.387) 0:05:59.030 ******** 2025-04-17 01:53:24.057852 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.057858 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.057864 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.057870 | orchestrator | 2025-04-17 01:53:24.057876 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-04-17 01:53:24.057882 | orchestrator | Thursday 17 April 2025 01:52:35 +0000 (0:00:01.489) 0:06:00.520 ******** 2025-04-17 01:53:24.057888 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.057895 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.057901 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.057907 | orchestrator | 2025-04-17 01:53:24.057913 | orchestrator | TASK [include_role : swift] **************************************************** 2025-04-17 01:53:24.057922 | orchestrator | Thursday 17 April 2025 01:52:37 +0000 (0:00:02.423) 0:06:02.943 ******** 2025-04-17 01:53:24.057928 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.057935 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.057943 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.057950 | orchestrator | 2025-04-17 01:53:24.057956 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-04-17 01:53:24.057962 | orchestrator | Thursday 17 April 2025 01:52:37 +0000 (0:00:00.302) 0:06:03.245 ******** 2025-04-17 01:53:24.057968 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.057974 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.057980 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.057986 | orchestrator | 2025-04-17 01:53:24.057993 | orchestrator | TASK [include_role : trove] **************************************************** 
2025-04-17 01:53:24.057999 | orchestrator | Thursday 17 April 2025 01:52:38 +0000 (0:00:00.559) 0:06:03.804 ******** 2025-04-17 01:53:24.058005 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.058011 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.058040 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.058048 | orchestrator | 2025-04-17 01:53:24.058054 | orchestrator | TASK [include_role : venus] **************************************************** 2025-04-17 01:53:24.058060 | orchestrator | Thursday 17 April 2025 01:52:38 +0000 (0:00:00.556) 0:06:04.361 ******** 2025-04-17 01:53:24.058066 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.058072 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.058078 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.058084 | orchestrator | 2025-04-17 01:53:24.058091 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-04-17 01:53:24.058097 | orchestrator | Thursday 17 April 2025 01:52:39 +0000 (0:00:00.524) 0:06:04.886 ******** 2025-04-17 01:53:24.058103 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.058109 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.058119 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.058125 | orchestrator | 2025-04-17 01:53:24.058132 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-04-17 01:53:24.058138 | orchestrator | Thursday 17 April 2025 01:52:39 +0000 (0:00:00.302) 0:06:05.188 ******** 2025-04-17 01:53:24.058144 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.058150 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.058156 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.058162 | orchestrator | 2025-04-17 01:53:24.058168 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-04-17 01:53:24.058174 | orchestrator | Thursday 17 April 2025 01:52:40 +0000 (0:00:00.989) 0:06:06.177 ******** 2025-04-17 01:53:24.058181 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:53:24.058187 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:53:24.058193 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:53:24.058199 | orchestrator | 2025-04-17 01:53:24.058205 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-04-17 01:53:24.058211 | orchestrator | Thursday 17 April 2025 01:52:41 +0000 (0:00:00.885) 0:06:07.063 ******** 2025-04-17 01:53:24.058217 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:53:24.058223 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:53:24.058229 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:53:24.058235 | orchestrator | 2025-04-17 01:53:24.058242 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-04-17 01:53:24.058248 | orchestrator | Thursday 17 April 2025 01:52:41 +0000 (0:00:00.322) 0:06:07.385 ******** 2025-04-17 01:53:24.058254 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:53:24.058264 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:53:24.058271 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:53:24.058277 | orchestrator | 2025-04-17 01:53:24.058283 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-04-17 01:53:24.058289 | orchestrator | Thursday 17 April 2025 01:52:43 +0000 (0:00:01.203) 0:06:08.588 
******** 2025-04-17 01:53:24.058295 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:53:24.058302 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:53:24.058308 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:53:24.058314 | orchestrator | 2025-04-17 01:53:24.058320 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-04-17 01:53:24.058326 | orchestrator | Thursday 17 April 2025 01:52:44 +0000 (0:00:01.161) 0:06:09.750 ******** 2025-04-17 01:53:24.058332 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:53:24.058338 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:53:24.058344 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:53:24.058350 | orchestrator | 2025-04-17 01:53:24.058356 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-04-17 01:53:24.058362 | orchestrator | Thursday 17 April 2025 01:52:45 +0000 (0:00:00.951) 0:06:10.702 ******** 2025-04-17 01:53:24.058368 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.058374 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.058381 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.058387 | orchestrator | 2025-04-17 01:53:24.058393 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-04-17 01:53:24.058399 | orchestrator | Thursday 17 April 2025 01:52:53 +0000 (0:00:08.542) 0:06:19.244 ******** 2025-04-17 01:53:24.058405 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:53:24.058411 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:53:24.058417 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:53:24.058423 | orchestrator | 2025-04-17 01:53:24.058429 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-04-17 01:53:24.058436 | orchestrator | Thursday 17 April 2025 01:52:54 +0000 (0:00:01.009) 0:06:20.253 ******** 2025-04-17 01:53:24.058442 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.058448 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.058454 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.058460 | orchestrator | 2025-04-17 01:53:24.058466 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-04-17 01:53:24.058476 | orchestrator | Thursday 17 April 2025 01:53:05 +0000 (0:00:10.959) 0:06:31.213 ******** 2025-04-17 01:53:24.058482 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:53:24.058488 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:53:24.058494 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:53:24.058501 | orchestrator | 2025-04-17 01:53:24.058507 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-04-17 01:53:24.058513 | orchestrator | Thursday 17 April 2025 01:53:06 +0000 (0:00:00.739) 0:06:31.952 ******** 2025-04-17 01:53:24.058519 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:53:24.058525 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:53:24.058531 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:53:24.058537 | orchestrator | 2025-04-17 01:53:24.058546 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-04-17 01:53:24.058555 | orchestrator | Thursday 17 April 2025 01:53:15 +0000 (0:00:09.361) 0:06:41.314 ******** 2025-04-17 01:53:24.058562 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.058568 | orchestrator | skipping: 
[testbed-node-1] 2025-04-17 01:53:24.058574 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.058580 | orchestrator | 2025-04-17 01:53:24.058586 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-04-17 01:53:24.058592 | orchestrator | Thursday 17 April 2025 01:53:16 +0000 (0:00:00.589) 0:06:41.904 ******** 2025-04-17 01:53:24.058598 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.058605 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.058611 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.058617 | orchestrator | 2025-04-17 01:53:24.058623 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-04-17 01:53:24.058629 | orchestrator | Thursday 17 April 2025 01:53:17 +0000 (0:00:00.578) 0:06:42.483 ******** 2025-04-17 01:53:24.058635 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.058641 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.058647 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.058653 | orchestrator | 2025-04-17 01:53:24.058660 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-04-17 01:53:24.058666 | orchestrator | Thursday 17 April 2025 01:53:17 +0000 (0:00:00.357) 0:06:42.840 ******** 2025-04-17 01:53:24.058672 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.058678 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.058684 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.058690 | orchestrator | 2025-04-17 01:53:24.058697 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-04-17 01:53:24.058703 | orchestrator | Thursday 17 April 2025 01:53:17 +0000 (0:00:00.575) 0:06:43.415 ******** 2025-04-17 01:53:24.058709 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.058715 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.058721 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.058727 | orchestrator | 2025-04-17 01:53:24.058733 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-04-17 01:53:24.058740 | orchestrator | Thursday 17 April 2025 01:53:18 +0000 (0:00:00.571) 0:06:43.987 ******** 2025-04-17 01:53:24.058746 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:53:24.058752 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:53:24.058758 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:53:24.058764 | orchestrator | 2025-04-17 01:53:24.058770 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-04-17 01:53:24.058776 | orchestrator | Thursday 17 April 2025 01:53:18 +0000 (0:00:00.316) 0:06:44.303 ******** 2025-04-17 01:53:24.058782 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:53:24.058788 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:53:24.058794 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:53:24.058800 | orchestrator | 2025-04-17 01:53:24.058816 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-04-17 01:53:24.058828 | orchestrator | Thursday 17 April 2025 01:53:20 +0000 (0:00:01.203) 0:06:45.507 ******** 2025-04-17 01:53:24.058835 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:53:24.058841 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:53:24.058847 | orchestrator | ok: [testbed-node-2] 
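The last two handlers above gate the rolling restart on the VIP actually accepting connections before the play is allowed to finish. A minimal sketch of that kind of wait, assuming plain TCP-connect semantics (kolla-ansible implements this with Ansible's wait_for module; the address and ports below are illustrative, the real internal VIP is not shown in this log):

import socket
import time

def wait_for_listen(host: str, port: int, timeout: float = 60.0, interval: float = 1.0) -> None:
    """Block until a TCP connection to host:port succeeds, or raise TimeoutError."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            # A successful connect means haproxy/proxysql is bound to the VIP.
            with socket.create_connection((host, port), timeout=interval):
                return
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{host}:{port} not listening after {timeout}s")
            time.sleep(interval)

# Hypothetical usage for the two waits above (VIP address and ports illustrative):
# wait_for_listen("192.168.16.254", 3306)   # proxysql
# wait_for_listen("192.168.16.254", 9200)   # haproxy-fronted opensearch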
2025-04-17 01:53:24.058853 | orchestrator | 2025-04-17 01:53:24.058859 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 01:53:24.058866 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-04-17 01:53:24.058872 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-04-17 01:53:24.058879 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-04-17 01:53:24.058885 | orchestrator | 2025-04-17 01:53:24.058891 | orchestrator | 2025-04-17 01:53:24.058897 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-17 01:53:24.058903 | orchestrator | Thursday 17 April 2025 01:53:21 +0000 (0:00:01.141) 0:06:46.649 ******** 2025-04-17 01:53:24.058909 | orchestrator | =============================================================================== 2025-04-17 01:53:24.058916 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.96s 2025-04-17 01:53:24.058922 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.36s 2025-04-17 01:53:24.058928 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.54s 2025-04-17 01:53:24.058934 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.50s 2025-04-17 01:53:24.058940 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 7.30s 2025-04-17 01:53:24.058946 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.26s 2025-04-17 01:53:24.058952 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.16s 2025-04-17 01:53:24.058959 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.14s 2025-04-17 01:53:24.058965 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.01s 2025-04-17 01:53:24.058971 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.99s 2025-04-17 01:53:24.058977 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.87s 2025-04-17 01:53:24.058983 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.63s 2025-04-17 01:53:24.058989 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.48s 2025-04-17 01:53:24.058998 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.40s 2025-04-17 01:53:24.059007 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.39s 2025-04-17 01:53:27.070943 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.30s 2025-04-17 01:53:27.071076 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.17s 2025-04-17 01:53:27.071096 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.11s 2025-04-17 01:53:27.071111 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.10s 2025-04-17 01:53:27.071126 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.84s 2025-04-17 01:53:27.071140 | orchestrator 
| 2025-04-17 01:53:24 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:53:27.071176 | orchestrator | 2025-04-17 01:53:27 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:53:27.071518 | orchestrator | 2025-04-17 01:53:27 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:53:27.073421 | orchestrator | 2025-04-17 01:53:27 | INFO  | Task d1d95967-8d56-421c-9dbc-ef7a43aacc44 is in state STARTED 2025-04-17 01:53:27.074134 | orchestrator | 2025-04-17 01:53:27 | INFO  | Task c72f7546-109f-4112-a626-0d0d86023410 is in state STARTED 2025-04-17 01:53:30.114410 | orchestrator | 2025-04-17 01:53:27 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:53:30.114566 | orchestrator | 2025-04-17 01:53:30 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:53:30.114888 | orchestrator | 2025-04-17 01:53:30 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:53:30.115781 | orchestrator | 2025-04-17 01:53:30 | INFO  | Task d1d95967-8d56-421c-9dbc-ef7a43aacc44 is in state STARTED 2025-04-17 01:53:30.116560 | orchestrator | 2025-04-17 01:53:30 | INFO  | Task c72f7546-109f-4112-a626-0d0d86023410 is in state STARTED 2025-04-17 01:53:30.116996 | orchestrator | 2025-04-17 01:53:30 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:53:33.141006 | orchestrator | 2025-04-17 01:53:33 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:53:33.141726 | orchestrator | 2025-04-17 01:53:33 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:53:33.141770 | orchestrator | 2025-04-17 01:53:33 | INFO  | Task d1d95967-8d56-421c-9dbc-ef7a43aacc44 is in state STARTED 2025-04-17 01:53:33.142205 | orchestrator | 2025-04-17 01:53:33 | INFO  | Task c72f7546-109f-4112-a626-0d0d86023410 is in state STARTED 2025-04-17 01:53:36.186223 | orchestrator | 2025-04-17 01:53:33 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:53:36.186344 | orchestrator | 2025-04-17 01:53:36 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:53:36.189532 | orchestrator | 2025-04-17 01:53:36 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:53:36.191547 | orchestrator | 2025-04-17 01:53:36 | INFO  | Task d1d95967-8d56-421c-9dbc-ef7a43aacc44 is in state STARTED 2025-04-17 01:53:36.193175 | orchestrator | 2025-04-17 01:53:36 | INFO  | Task c72f7546-109f-4112-a626-0d0d86023410 is in state STARTED 2025-04-17 01:53:39.229414 | orchestrator | 2025-04-17 01:53:36 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:53:39.229537 | orchestrator | 2025-04-17 01:53:39 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:53:39.229717 | orchestrator | 2025-04-17 01:53:39 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:53:39.232912 | orchestrator | 2025-04-17 01:53:39 | INFO  | Task d1d95967-8d56-421c-9dbc-ef7a43aacc44 is in state STARTED 2025-04-17 01:53:39.233399 | orchestrator | 2025-04-17 01:53:39 | INFO  | Task c72f7546-109f-4112-a626-0d0d86023410 is in state STARTED 2025-04-17 01:53:42.268350 | orchestrator | 2025-04-17 01:53:39 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:53:42.268522 | orchestrator | 2025-04-17 01:53:42 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:53:42.268974 | orchestrator | 2025-04-17 01:53:42 | 
INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:53:42.269033 | orchestrator | 2025-04-17 01:53:42 | INFO  | Task d1d95967-8d56-421c-9dbc-ef7a43aacc44 is in state STARTED 2025-04-17 01:53:42.270721 | orchestrator | 2025-04-17 01:53:42 | INFO  | Task c72f7546-109f-4112-a626-0d0d86023410 is in state STARTED 2025-04-17 01:53:45.316866 | orchestrator | 2025-04-17 01:53:42 | INFO  | Wait 1 second(s) until the next check
2025-04-17 01:55:26.150540 | orchestrator | 2025-04-17 01:55:26 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED
2025-04-17 01:55:26.153319 | orchestrator | 2025-04-17 01:55:26 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:55:26.157005 | orchestrator | 2025-04-17 01:55:26 | INFO  | Task d1d95967-8d56-421c-9dbc-ef7a43aacc44 is in state SUCCESS
2025-04-17 01:55:26.159002 | orchestrator |
2025-04-17 01:55:26.159068 | orchestrator |
2025-04-17 01:55:26.159085 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-17 01:55:26.159101 | orchestrator |
2025-04-17 01:55:26.159115 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-17 01:55:26.159129 | orchestrator | Thursday 17 April 2025 01:53:24 +0000 (0:00:00.312) 0:00:00.312 ********
2025-04-17 01:55:26.159143 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:55:26.159184 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:55:26.159198 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:55:26.159222 | orchestrator |
2025-04-17 01:55:26.159237 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-17 01:55:26.159252 | orchestrator | Thursday 17 April 2025 01:53:25 +0000 (0:00:00.381) 0:00:00.693 ********
2025-04-17 01:55:26.159267 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-04-17 01:55:26.159282 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-04-17 01:55:26.159296 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-04-17 01:55:26.159310 | orchestrator |
2025-04-17 01:55:26.159324 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-04-17 01:55:26.159338 | orchestrator |
2025-04-17 01:55:26.159352 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-04-17 01:55:26.159366 | orchestrator | Thursday 17 April 2025 01:53:25 +0000 (0:00:00.289) 0:00:00.983 ********
2025-04-17 01:55:26.159381 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 01:55:26.159395 | orchestrator |
2025-04-17 01:55:26.159409 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-04-17 01:55:26.159423 | orchestrator | Thursday 17 April 2025 01:53:26 +0000 (0:00:00.671) 0:00:01.654 ********
2025-04-17 01:55:26.159437 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
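The run of "Task … is in state STARTED" / "Wait 1 second(s) until the next check" messages above is the OSISM client blocking until the Celery tasks it enqueued reach a terminal state; only d1d95967… has reached SUCCESS at this point. A minimal poll-until-terminal sketch of that loop (the get_state callable and all names are hypothetical stand-ins, not the real osism API):

import time

TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(get_state, task_ids, interval=1.0):
    """Poll each task every `interval` seconds, dropping tasks as they finish."""
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = get_state(task_id)  # stand-in for a Celery AsyncResult lookup
            print(f"Task {task_id} is in state {state}")
            if state not in TERMINAL_STATES:
                still_running.append(task_id)
        pending = still_running
        if pending:
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)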
orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-17 01:55:26.159465 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-17 01:55:26.159479 | orchestrator | 2025-04-17 01:55:26.159493 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-04-17 01:55:26.159506 | orchestrator | Thursday 17 April 2025 01:53:27 +0000 (0:00:00.727) 0:00:02.381 ******** 2025-04-17 01:55:26.159526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-17 01:55:26.159545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-17 01:55:26.159572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-17 01:55:26.159601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-17 01:55:26.159620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-17 01:55:26.159665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-17 01:55:26.159681 | orchestrator | 2025-04-17 01:55:26.159697 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-17 01:55:26.159713 | orchestrator | Thursday 17 April 2025 01:53:28 +0000 (0:00:01.412) 0:00:03.794 ******** 2025-04-17 01:55:26.159736 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:55:26.159752 | orchestrator | 2025-04-17 01:55:26.159767 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 
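Each container definition in the items above carries a healthcheck of the form healthcheck_curl http://<api_address>:<port> together with interval/retries/timeout settings. Functionally it boils down to something like the following probe (a hypothetical re-implementation for illustration; the real healthcheck_curl is a helper script shipped inside the kolla images):

import time
import urllib.request

def healthcheck_curl(url, retries=3, timeout=30, interval=30):
    """Return True if any of `retries` attempts gets an HTTP status < 400."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status < 400:
                    return True
        except OSError:  # URLError/HTTPError are OSError subclasses
            pass
        if attempt + 1 < retries:
            time.sleep(interval)
    return False

# Mirrors the check defined above, e.g.:
# healthcheck_curl("http://192.168.16.10:9200")   # opensearch on testbed-node-0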
2025-04-17 01:55:26.159782 | orchestrator | Thursday 17 April 2025 01:53:29 +0000 (0:00:00.725) 0:00:04.520 ******** 2025-04-17 01:55:26.159822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-17 01:55:26.159840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-17 01:55:26.159857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-17 01:55:26.159873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-17 01:55:26.159905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-17 01:55:26.159923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-17 01:55:26.159939 | orchestrator | 2025-04-17 01:55:26.159955 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-04-17 01:55:26.159971 | orchestrator | Thursday 17 April 2025 01:53:32 +0000 (0:00:03.122) 0:00:07.642 ******** 2025-04-17 01:55:26.159987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-17 01:55:26.160004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-17 01:55:26.160026 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:55:26.160049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-17 01:55:26.160064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-17 01:55:26.160079 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:55:26.160094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-17 01:55:26.160109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-17 01:55:26.160135 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:55:26.160150 | orchestrator | 2025-04-17 01:55:26.160164 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-04-17 01:55:26.160178 | orchestrator | Thursday 17 April 2025 01:53:33 +0000 (0:00:01.027) 0:00:08.669 ******** 2025-04-17 01:55:26.160198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-17 01:55:26.160214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-17 01:55:26.160229 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:55:26.160243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-17 01:55:26.160258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-17 01:55:26.160279 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:55:26.160299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-17 01:55:26.160314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-17 01:55:26.160329 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:55:26.160343 | orchestrator | 2025-04-17 01:55:26.160357 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-04-17 01:55:26.160376 | orchestrator | Thursday 17 April 2025 01:53:34 +0000 (0:00:01.041) 0:00:09.711 ******** 2025-04-17 01:55:26.160391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-17 01:55:26.160406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-17 01:55:26.160427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-17 01:55:26.160449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-17 01:55:26.160465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-17 01:55:26.160479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-17 01:55:26.160501 | orchestrator | 2025-04-17 01:55:26.160515 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-04-17 01:55:26.160529 | orchestrator | Thursday 17 April 2025 01:53:36 +0000 (0:00:02.429) 0:00:12.141 ******** 2025-04-17 01:55:26.160543 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:55:26.160557 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:55:26.160570 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:55:26.160584 | orchestrator | 2025-04-17 01:55:26.160598 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-04-17 01:55:26.160612 | orchestrator | Thursday 17 April 2025 01:53:40 +0000 (0:00:03.552) 0:00:15.694 ******** 2025-04-17 01:55:26.160642 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:55:26.160657 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:55:26.160670 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:55:26.160684 | orchestrator | 2025-04-17 01:55:26.160697 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-04-17 01:55:26.160711 | orchestrator | Thursday 17 April 2025 01:53:42 +0000 (0:00:01.845) 0:00:17.539 ******** 2025-04-17 01:55:26.160734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-17 01:55:26.160750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-17 01:55:26.160765 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-17 01:55:26.160786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-17 01:55:26.160807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-17 01:55:26.160822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-17 01:55:26.160838 | orchestrator | 2025-04-17 01:55:26.160852 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-17 01:55:26.160866 | orchestrator | Thursday 17 April 2025 01:53:45 +0000 (0:00:02.844) 0:00:20.383 ******** 2025-04-17 01:55:26.160880 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:55:26.160900 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:55:26.160914 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:55:26.160928 | orchestrator | 2025-04-17 01:55:26.160942 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-04-17 01:55:26.160955 | orchestrator | Thursday 17 April 2025 01:53:45 +0000 (0:00:00.455) 0:00:20.839 ******** 2025-04-17 01:55:26.160969 | orchestrator | 2025-04-17 01:55:26.160983 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-04-17 01:55:26.160996 | orchestrator | Thursday 17 April 2025 01:53:45 +0000 (0:00:00.199) 0:00:21.038 ******** 2025-04-17 01:55:26.161010 | orchestrator | 2025-04-17 01:55:26.161023 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-04-17 01:55:26.161037 | orchestrator | Thursday 17 April 2025 01:53:45 +0000 (0:00:00.060) 0:00:21.098 ******** 2025-04-17 01:55:26.161051 | orchestrator | 2025-04-17 01:55:26.161065 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-04-17 01:55:26.161078 | orchestrator | Thursday 17 April 2025 01:53:45 +0000 (0:00:00.063) 0:00:21.162 ******** 2025-04-17 01:55:26.161092 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:55:26.161106 | orchestrator | 2025-04-17 01:55:26.161119 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-04-17 01:55:26.161133 | orchestrator | Thursday 17 April 2025 01:53:46 +0000 (0:00:00.210) 0:00:21.372 ******** 2025-04-17 01:55:26.161147 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:55:26.161160 | orchestrator | 2025-04-17 01:55:26.161174 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-04-17 01:55:26.161188 | orchestrator | Thursday 17 April 2025 01:53:46 +0000 (0:00:00.583) 0:00:21.955 ******** 2025-04-17 01:55:26.161201 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:55:26.161215 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:55:26.161229 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:55:26.161242 | orchestrator | 2025-04-17 01:55:26.161256 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-04-17 01:55:26.161270 | orchestrator | Thursday 17 April 2025 01:54:20 +0000 (0:00:34.208) 0:00:56.163 ******** 2025-04-17 01:55:26.161284 | orchestrator | changed: [testbed-node-0] 
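
The service definitions dumped by the tasks above all share the same kolla-ansible shape: a mapping from service name to container_name, image, volumes, an optional healthcheck, and haproxy listener settings. As a minimal sketch of how such a healthcheck block translates into Docker's native health-check options (an illustration only, not kolla-ansible's actual code; it assumes the numeric strings are seconds):

```python
# Sketch: map a kolla-style healthcheck dict (as logged above) onto
# `docker run` health-check flags. Assumes interval/timeout/start_period
# are seconds; healthcheck_curl is the helper script shipped in kolla images.

def healthcheck_to_docker_flags(hc: dict) -> list[str]:
    kind, command = hc["test"]  # e.g. ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200']
    assert kind == "CMD-SHELL"
    return [
        "--health-cmd", command,
        "--health-interval", f"{hc['interval']}s",
        "--health-timeout", f"{hc['timeout']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
    ]

hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"],
    "timeout": "30",
}
print(" ".join(healthcheck_to_docker_flags(hc)))
# --health-cmd healthcheck_curl http://192.168.16.10:9200 --health-interval 30s ...
```
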
2025-04-17 01:55:26.161297 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:55:26.161311 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:55:26.161325 | orchestrator | 2025-04-17 01:55:26.161339 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-17 01:55:26.161352 | orchestrator | Thursday 17 April 2025 01:55:12 +0000 (0:00:52.164) 0:01:48.327 ******** 2025-04-17 01:55:26.161366 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:55:26.161380 | orchestrator | 2025-04-17 01:55:26.161394 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-04-17 01:55:26.161407 | orchestrator | Thursday 17 April 2025 01:55:13 +0000 (0:00:00.705) 0:01:49.033 ******** 2025-04-17 01:55:26.161421 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:55:26.161435 | orchestrator | 2025-04-17 01:55:26.161449 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-04-17 01:55:26.161462 | orchestrator | Thursday 17 April 2025 01:55:16 +0000 (0:00:02.626) 0:01:51.660 ******** 2025-04-17 01:55:26.161476 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:55:26.161489 | orchestrator | 2025-04-17 01:55:26.161503 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-04-17 01:55:26.161517 | orchestrator | Thursday 17 April 2025 01:55:18 +0000 (0:00:02.500) 0:01:54.160 ******** 2025-04-17 01:55:26.161531 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:55:26.161544 | orchestrator | 2025-04-17 01:55:26.161558 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-04-17 01:55:26.161577 | orchestrator | Thursday 17 April 2025 01:55:21 +0000 (0:00:02.847) 0:01:57.008 ******** 2025-04-17 01:55:26.161591 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:55:26.161604 | orchestrator | 2025-04-17 01:55:26.161648 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 01:55:29.213868 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-17 01:55:29.214076 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-17 01:55:29.214099 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-17 01:55:29.214114 | orchestrator | 2025-04-17 01:55:29.214128 | orchestrator | 2025-04-17 01:55:29.214143 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-17 01:55:29.214159 | orchestrator | Thursday 17 April 2025 01:55:24 +0000 (0:00:03.124) 0:02:00.133 ******** 2025-04-17 01:55:29.214173 | orchestrator | =============================================================================== 2025-04-17 01:55:29.214187 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 52.16s 2025-04-17 01:55:29.214201 | orchestrator | opensearch : Restart opensearch container ------------------------------ 34.21s 2025-04-17 01:55:29.214215 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.55s 2025-04-17 01:55:29.214229 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.12s 2025-04-17 01:55:29.214242 | orchestrator 
| service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.12s 2025-04-17 01:55:29.214256 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.85s 2025-04-17 01:55:29.214270 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.84s 2025-04-17 01:55:29.214283 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.63s 2025-04-17 01:55:29.214297 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.50s 2025-04-17 01:55:29.214311 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.43s 2025-04-17 01:55:29.214325 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.85s 2025-04-17 01:55:29.214338 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.41s 2025-04-17 01:55:29.214353 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.04s 2025-04-17 01:55:29.214369 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.03s 2025-04-17 01:55:29.214412 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.73s 2025-04-17 01:55:29.214429 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.73s 2025-04-17 01:55:29.214444 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.71s 2025-04-17 01:55:29.214458 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.67s 2025-04-17 01:55:29.214471 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.58s 2025-04-17 01:55:29.214485 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.46s 2025-04-17 01:55:29.214499 | orchestrator | 2025-04-17 01:55:26 | INFO  | Task c72f7546-109f-4112-a626-0d0d86023410 is in state STARTED 2025-04-17 01:55:29.214514 | orchestrator | 2025-04-17 01:55:26 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:55:29.214548 | orchestrator | 2025-04-17 01:55:29 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:55:29.215158 | orchestrator | 2025-04-17 01:55:29 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:55:29.216715 | orchestrator | 2025-04-17 01:55:29 | INFO  | Task c72f7546-109f-4112-a626-0d0d86023410 is in state STARTED 2025-04-17 01:55:32.271390 | orchestrator | 2025-04-17 01:55:29 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:55:32.271587 | orchestrator | 2025-04-17 01:55:32 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:55:32.274156 | orchestrator | 2025-04-17 01:55:32 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:55:32.275030 | orchestrator | 2025-04-17 01:55:32 | INFO  | Task c72f7546-109f-4112-a626-0d0d86023410 is in state STARTED 2025-04-17 01:55:32.275070 | orchestrator | 2025-04-17 01:55:32 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:55:35.329679 | orchestrator | 2025-04-17 01:55:35 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state STARTED 2025-04-17 01:55:35.333579 | orchestrator | 2025-04-17 01:55:35 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:55:35.335313 | orchestrator 
| 2025-04-17 01:55:35 | INFO  | Task c72f7546-109f-4112-a626-0d0d86023410 is in state STARTED
[... 2025-04-17 01:55:35 through 01:56:36: tasks f18b46f3-1ede-402c-be17-6b8a3a0b04b7, e0b8709f-1bcf-4f73-b727-9acc58049e77 and c72f7546-109f-4112-a626-0d0d86023410 polled every ~3 seconds, all still in state STARTED; identical poll records omitted ...]
2025-04-17 01:56:39.458933 | orchestrator | 2025-04-17 01:56:39 | INFO  | Task f18b46f3-1ede-402c-be17-6b8a3a0b04b7 is in state SUCCESS
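
The condensed poll records above come from the deployment CLI waiting for its background task IDs to finish before the next play starts. A minimal sketch of that wait loop, with a hypothetical `fetch_state` callable standing in for however the real client actually queries its task backend (e.g. a Celery result store):

```python
import time
from typing import Callable

def wait_for_tasks(task_ids: list[str],
                   fetch_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    """Poll every task until it leaves STARTED, mirroring the log pattern above."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = fetch_state(task_id)  # hypothetical, e.g. AsyncResult(task_id).state
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```
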
2025-04-17 01:56:39.461394 | orchestrator | 2025-04-17 01:56:39.461454 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-17 01:56:39.461470 | orchestrator | 2025-04-17 01:56:39.461485 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-04-17 01:56:39.461499 | orchestrator | 2025-04-17 01:56:39.461513 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-04-17 01:56:39.461527 | orchestrator | Thursday 17 April 2025 01:44:34 +0000 (0:00:01.417) 0:00:01.417 ******** 2025-04-17 01:56:39.461569 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.461586 | orchestrator | 2025-04-17 01:56:39.461599 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-04-17 01:56:39.461613 | orchestrator | Thursday 17 April 2025 01:44:35 +0000 (0:00:01.067) 0:00:02.484 ******** 2025-04-17 01:56:39.461663 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-04-17 01:56:39.461678 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-04-17 01:56:39.461691 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-04-17 01:56:39.461705 | orchestrator | 2025-04-17 01:56:39.461719 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-04-17 01:56:39.461794 | orchestrator | Thursday 17 April 2025 01:44:36 +0000 (0:00:00.530) 0:00:03.015 ******** 2025-04-17 01:56:39.461841 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.461858 | orchestrator | 2025-04-17 01:56:39.461873 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-04-17 01:56:39.461889 | orchestrator | Thursday 17 April 2025 01:44:37 +0000 (0:00:01.327) 0:00:04.343 ******** 2025-04-17 01:56:39.461905 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.461921 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.461964 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.461978 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.461992 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.462006 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.462080 | orchestrator | 2025-04-17 01:56:39.462098 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-04-17 01:56:39.462112 | orchestrator | Thursday 17 April 2025 01:44:38 +0000 (0:00:01.305) 0:00:05.648 ******** 2025-04-17 01:56:39.462126 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.462140 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.462153 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.462167 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.462181 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.462194 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.462208 | orchestrator | 2025-04-17 01:56:39.462222 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-04-17 01:56:39.462236 | orchestrator | 
Thursday 17 April 2025 01:44:39 +0000 (0:00:00.743) 0:00:06.391 ******** 2025-04-17 01:56:39.462250 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.462264 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.462278 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.462292 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.462328 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.462342 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.462356 | orchestrator | 2025-04-17 01:56:39.462370 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-04-17 01:56:39.462412 | orchestrator | Thursday 17 April 2025 01:44:40 +0000 (0:00:01.070) 0:00:07.462 ******** 2025-04-17 01:56:39.462427 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.462440 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.462455 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.462469 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.462483 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.462497 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.462517 | orchestrator | 2025-04-17 01:56:39.462531 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-04-17 01:56:39.462577 | orchestrator | Thursday 17 April 2025 01:44:41 +0000 (0:00:01.019) 0:00:08.481 ******** 2025-04-17 01:56:39.462592 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.462606 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.462619 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.462633 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.462646 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.462660 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.462674 | orchestrator | 2025-04-17 01:56:39.462688 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-04-17 01:56:39.462701 | orchestrator | Thursday 17 April 2025 01:44:42 +0000 (0:00:00.763) 0:00:09.245 ******** 2025-04-17 01:56:39.462715 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.462729 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.462742 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.462756 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.462770 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.462783 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.462797 | orchestrator | 2025-04-17 01:56:39.462811 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-04-17 01:56:39.462824 | orchestrator | Thursday 17 April 2025 01:44:43 +0000 (0:00:01.480) 0:00:10.725 ******** 2025-04-17 01:56:39.462838 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.462853 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.462867 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.462881 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.462894 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.462908 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.462921 | orchestrator | 2025-04-17 01:56:39.462935 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-04-17 01:56:39.462949 | orchestrator | Thursday 17 April 2025 01:44:44 +0000 (0:00:00.871) 0:00:11.597 ******** 2025-04-17 01:56:39.462962 | orchestrator | ok: 
[testbed-node-0] 2025-04-17 01:56:39.462976 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.462989 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.463003 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.463016 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.463232 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.463246 | orchestrator | 2025-04-17 01:56:39.463272 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-04-17 01:56:39.463286 | orchestrator | Thursday 17 April 2025 01:44:45 +0000 (0:00:00.947) 0:00:12.546 ******** 2025-04-17 01:56:39.463301 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-17 01:56:39.463315 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-17 01:56:39.463329 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-17 01:56:39.463343 | orchestrator | 2025-04-17 01:56:39.463357 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-04-17 01:56:39.463370 | orchestrator | Thursday 17 April 2025 01:44:46 +0000 (0:00:00.950) 0:00:13.497 ******** 2025-04-17 01:56:39.463384 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.463398 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.463412 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.463425 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.463451 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.463465 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.463479 | orchestrator | 2025-04-17 01:56:39.463501 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-04-17 01:56:39.463524 | orchestrator | Thursday 17 April 2025 01:44:48 +0000 (0:00:01.564) 0:00:15.061 ******** 2025-04-17 01:56:39.463614 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-04-17 01:56:39.463668 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-17 01:56:39.463711 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-17 01:56:39.463737 | orchestrator | 2025-04-17 01:56:39.463751 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-04-17 01:56:39.463765 | orchestrator | Thursday 17 April 2025 01:44:51 +0000 (0:00:02.860) 0:00:17.922 ******** 2025-04-17 01:56:39.463779 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-17 01:56:39.463792 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-17 01:56:39.463806 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-17 01:56:39.463820 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.463834 | orchestrator | 2025-04-17 01:56:39.463848 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-04-17 01:56:39.463861 | orchestrator | Thursday 17 April 2025 01:44:51 +0000 (0:00:00.428) 0:00:18.351 ******** 2025-04-17 01:56:39.463876 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-17 01:56:39.463892 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-17 01:56:39.463907 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-17 01:56:39.463921 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.463933 | orchestrator | 2025-04-17 01:56:39.463946 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-04-17 01:56:39.463966 | orchestrator | Thursday 17 April 2025 01:44:52 +0000 (0:00:00.694) 0:00:19.045 ******** 2025-04-17 01:56:39.463980 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-17 01:56:39.463994 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-17 01:56:39.464007 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-17 01:56:39.464019 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.464041 | orchestrator | 2025-04-17 01:56:39.464053 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-04-17 01:56:39.464220 | orchestrator | Thursday 17 April 2025 01:44:52 +0000 (0:00:00.178) 0:00:19.223 ******** 2025-04-17 01:56:39.464250 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-04-17 01:44:48.930739', 'end': '2025-04-17 01:44:49.204672', 'delta': '0:00:00.273933', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-17 01:56:39.464281 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-04-17 
01:44:49.772459', 'end': '2025-04-17 01:44:50.068507', 'delta': '0:00:00.296048', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-17 01:56:39.464305 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-04-17 01:44:50.664225', 'end': '2025-04-17 01:44:50.950423', 'delta': '0:00:00.286198', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-17 01:56:39.464327 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.464344 | orchestrator | 2025-04-17 01:56:39.464357 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-04-17 01:56:39.464369 | orchestrator | Thursday 17 April 2025 01:44:52 +0000 (0:00:00.225) 0:00:19.448 ******** 2025-04-17 01:56:39.464381 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.464393 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.464405 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.464417 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.464429 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.464441 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.464453 | orchestrator | 2025-04-17 01:56:39.464465 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-04-17 01:56:39.464477 | orchestrator | Thursday 17 April 2025 01:44:54 +0000 (0:00:01.445) 0:00:20.893 ******** 2025-04-17 01:56:39.464489 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.464501 | orchestrator | 2025-04-17 01:56:39.464513 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-04-17 01:56:39.464525 | orchestrator | Thursday 17 April 2025 01:44:54 +0000 (0:00:00.654) 0:00:21.548 ******** 2025-04-17 01:56:39.464555 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.464568 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.464581 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.464592 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.464605 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.464617 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.464639 | orchestrator | 2025-04-17 01:56:39.464652 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-04-17 01:56:39.464664 | orchestrator | Thursday 17 April 2025 01:44:55 +0000 (0:00:00.727) 0:00:22.276 ******** 2025-04-17 01:56:39.464676 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.464688 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.464715 | orchestrator | skipping: 
[testbed-node-2] 2025-04-17 01:56:39.464728 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.464747 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.464759 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.464772 | orchestrator | 2025-04-17 01:56:39.464784 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-17 01:56:39.464796 | orchestrator | Thursday 17 April 2025 01:44:56 +0000 (0:00:01.058) 0:00:23.334 ******** 2025-04-17 01:56:39.464809 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.464821 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.464833 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.464845 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.464857 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.464869 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.464881 | orchestrator | 2025-04-17 01:56:39.464894 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-04-17 01:56:39.464906 | orchestrator | Thursday 17 April 2025 01:44:57 +0000 (0:00:00.562) 0:00:23.896 ******** 2025-04-17 01:56:39.464925 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.464938 | orchestrator | 2025-04-17 01:56:39.464951 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-04-17 01:56:39.464963 | orchestrator | Thursday 17 April 2025 01:44:57 +0000 (0:00:00.128) 0:00:24.025 ******** 2025-04-17 01:56:39.464975 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.464987 | orchestrator | 2025-04-17 01:56:39.464999 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-17 01:56:39.465012 | orchestrator | Thursday 17 April 2025 01:44:57 +0000 (0:00:00.672) 0:00:24.697 ******** 2025-04-17 01:56:39.465024 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.465036 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.465048 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.465060 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.465072 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.465084 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.465096 | orchestrator | 2025-04-17 01:56:39.465108 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-04-17 01:56:39.465120 | orchestrator | Thursday 17 April 2025 01:44:58 +0000 (0:00:00.634) 0:00:25.331 ******** 2025-04-17 01:56:39.465133 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.465145 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.465157 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.465169 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.465181 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.465193 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.465205 | orchestrator | 2025-04-17 01:56:39.465217 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-04-17 01:56:39.465229 | orchestrator | Thursday 17 April 2025 01:44:59 +0000 (0:00:00.851) 0:00:26.183 ******** 2025-04-17 01:56:39.465242 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.465253 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.465387 | orchestrator | 
skipping: [testbed-node-2] 2025-04-17 01:56:39.465401 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.465413 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.465425 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.465438 | orchestrator | 2025-04-17 01:56:39.465450 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-04-17 01:56:39.465463 | orchestrator | Thursday 17 April 2025 01:45:00 +0000 (0:00:00.874) 0:00:27.058 ******** 2025-04-17 01:56:39.465483 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.465496 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.465508 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.465520 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.465532 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.465659 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.465684 | orchestrator | 2025-04-17 01:56:39.465705 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-04-17 01:56:39.465722 | orchestrator | Thursday 17 April 2025 01:45:01 +0000 (0:00:01.060) 0:00:28.118 ******** 2025-04-17 01:56:39.465734 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.465746 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.465758 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.465770 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.465782 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.465794 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.465806 | orchestrator | 2025-04-17 01:56:39.465818 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-04-17 01:56:39.465830 | orchestrator | Thursday 17 April 2025 01:45:02 +0000 (0:00:00.740) 0:00:28.859 ******** 2025-04-17 01:56:39.465842 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.465855 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.465867 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.465879 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.465891 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.465903 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.465915 | orchestrator | 2025-04-17 01:56:39.465927 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-04-17 01:56:39.465940 | orchestrator | Thursday 17 April 2025 01:45:03 +0000 (0:00:01.146) 0:00:30.005 ******** 2025-04-17 01:56:39.465952 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.465964 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.465977 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.465989 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.466001 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.466014 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.466102 | orchestrator | 2025-04-17 01:56:39.466131 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-04-17 01:56:39.466155 | orchestrator | Thursday 17 April 2025 01:45:03 +0000 (0:00:00.695) 0:00:30.701 ******** 2025-04-17 01:56:39.466176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466355 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca', 'scsi-SQEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca-part1', 'scsi-SQEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca-part14', 'scsi-SQEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca-part15', 'scsi-SQEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca-part16', 'scsi-SQEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.466381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e4fe9eb-5e43-4aa2-9b37-d2398fe01f7b', 'scsi-SQEMU_QEMU_HARDDISK_6e4fe9eb-5e43-4aa2-9b37-d2398fe01f7b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.466395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29eb77c3-a4eb-47de-bcfc-90cea0292ee8', 'scsi-SQEMU_QEMU_HARDDISK_29eb77c3-a4eb-47de-bcfc-90cea0292ee8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.466409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d7eac16-cb9a-452c-8088-f21cbc7102b1', 'scsi-SQEMU_QEMU_HARDDISK_7d7eac16-cb9a-452c-8088-f21cbc7102b1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.466423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-17-00-02-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.466437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_711aec66-68ef-4548-aa3b-8fad97a96fa9', 'scsi-SQEMU_QEMU_HARDDISK_711aec66-68ef-4548-aa3b-8fad97a96fa9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_711aec66-68ef-4548-aa3b-8fad97a96fa9-part1', 'scsi-SQEMU_QEMU_HARDDISK_711aec66-68ef-4548-aa3b-8fad97a96fa9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_711aec66-68ef-4548-aa3b-8fad97a96fa9-part14', 'scsi-SQEMU_QEMU_HARDDISK_711aec66-68ef-4548-aa3b-8fad97a96fa9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_711aec66-68ef-4548-aa3b-8fad97a96fa9-part15', 'scsi-SQEMU_QEMU_HARDDISK_711aec66-68ef-4548-aa3b-8fad97a96fa9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_711aec66-68ef-4548-aa3b-8fad97a96fa9-part16', 'scsi-SQEMU_QEMU_HARDDISK_711aec66-68ef-4548-aa3b-8fad97a96fa9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.466665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0102145-e326-42a8-9189-9b289697f2f1', 'scsi-SQEMU_QEMU_HARDDISK_d0102145-e326-42a8-9189-9b289697f2f1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.466678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70c2d06b-89ef-4a1b-882c-e0d752f0d1e2', 'scsi-SQEMU_QEMU_HARDDISK_70c2d06b-89ef-4a1b-882c-e0d752f0d1e2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.466692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_25683219-5c0a-4b96-92c9-99d674025eb1', 'scsi-SQEMU_QEMU_HARDDISK_25683219-5c0a-4b96-92c9-99d674025eb1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.466705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-17-00-02-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.466718 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.466732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466827 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.466840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bc17d26-642e-48ef-be74-33669ffc4589', 'scsi-SQEMU_QEMU_HARDDISK_5bc17d26-642e-48ef-be74-33669ffc4589'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bc17d26-642e-48ef-be74-33669ffc4589-part1', 'scsi-SQEMU_QEMU_HARDDISK_5bc17d26-642e-48ef-be74-33669ffc4589-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bc17d26-642e-48ef-be74-33669ffc4589-part14', 'scsi-SQEMU_QEMU_HARDDISK_5bc17d26-642e-48ef-be74-33669ffc4589-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bc17d26-642e-48ef-be74-33669ffc4589-part15', 'scsi-SQEMU_QEMU_HARDDISK_5bc17d26-642e-48ef-be74-33669ffc4589-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bc17d26-642e-48ef-be74-33669ffc4589-part16', 'scsi-SQEMU_QEMU_HARDDISK_5bc17d26-642e-48ef-be74-33669ffc4589-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.466893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19544825-ba43-4bb1-8c25-64db59cc98e2', 'scsi-SQEMU_QEMU_HARDDISK_19544825-ba43-4bb1-8c25-64db59cc98e2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.466907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2310849-ed6c-49c2-b9e0-f9c06c6339c9', 'scsi-SQEMU_QEMU_HARDDISK_d2310849-ed6c-49c2-b9e0-f9c06c6339c9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.466920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cc5c4f7-7927-43eb-bfd2-3f01b9eb04d9', 'scsi-SQEMU_QEMU_HARDDISK_2cc5c4f7-7927-43eb-bfd2-3f01b9eb04d9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.466933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-17-00-02-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.466945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--567181ad--d304--5248--b248--9710ecf6a56a-osd--block--567181ad--d304--5248--b248--9710ecf6a56a', 'dm-uuid-LVM-bYe2GR47CfdRAuGUgOfMJCJDLRAXMyAJ5b9vnqrLZL2VXm8ZnPhXXCnNOwWB1dXc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6e7c2b16--a1dd--5b5d--909e--4c9aed3e0c7e-osd--block--6e7c2b16--a1dd--5b5d--909e--4c9aed3e0c7e', 'dm-uuid-LVM-i3z8oLrfZebl406dTMAr1ZlExlhhAWvWdVpDizjp8HwCqxskpQwu46wNtWrRFIVT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.466991 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.467009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467022 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467035 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467077 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ebc25b0--9278--5fc8--8be4--afb201f0a343-osd--block--7ebc25b0--9278--5fc8--8be4--afb201f0a343', 'dm-uuid-LVM-UzeZzPzorXp8KV3DW3WIidSfgxphPTp03MVWvQM3d7Kpc9aah093ulKgJtTf1OuG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b69f2859--f86c--57c9--a956--28222694e166-osd--block--b69f2859--f86c--57c9--a956--28222694e166', 'dm-uuid-LVM-ruRqaKQFK07FwWdyfnJTHETcjjDvSVQQYZx1CjjAL9oVE1uSAQer6T9LEEzxFBKW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6', 'scsi-SQEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6-part15', 'scsi-SQEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.467159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--567181ad--d304--5248--b248--9710ecf6a56a-osd--block--567181ad--d304--5248--b248--9710ecf6a56a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2JyHJ1-wMiA-Ed3U-WaLw-D2q0-v5tm-57x2LE', 'scsi-0QEMU_QEMU_HARDDISK_8bcc068e-17b6-4e9f-accd-8ac12579d6f0', 'scsi-SQEMU_QEMU_HARDDISK_8bcc068e-17b6-4e9f-accd-8ac12579d6f0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.467193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6e7c2b16--a1dd--5b5d--909e--4c9aed3e0c7e-osd--block--6e7c2b16--a1dd--5b5d--909e--4c9aed3e0c7e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rc3fAq-eYxC-mU36-Y9MG-CY16-CbYv-DNVptp', 'scsi-0QEMU_QEMU_HARDDISK_e9224846-b1ba-4847-a73a-6715887089fb', 'scsi-SQEMU_QEMU_HARDDISK_e9224846-b1ba-4847-a73a-6715887089fb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.467214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0367f9b0-3a71-47a7-a8bd-9e2816c4d242', 'scsi-SQEMU_QEMU_HARDDISK_0367f9b0-3a71-47a7-a8bd-9e2816c4d242'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.467235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-17-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.467265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a', 'scsi-SQEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a-part1', 'scsi-SQEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a-part14', 'scsi-SQEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a-part15', 'scsi-SQEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a-part16', 'scsi-SQEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.467320 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.467331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7ebc25b0--9278--5fc8--8be4--afb201f0a343-osd--block--7ebc25b0--9278--5fc8--8be4--afb201f0a343'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-h5ki4h-UqrN-BD4C-TJfA-l3w0-eYmg-VdJYdZ', 'scsi-0QEMU_QEMU_HARDDISK_bef8d693-736b-4549-b698-ce9e87082908', 'scsi-SQEMU_QEMU_HARDDISK_bef8d693-736b-4549-b698-ce9e87082908'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.467347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b69f2859--f86c--57c9--a956--28222694e166-osd--block--b69f2859--f86c--57c9--a956--28222694e166'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MTQK23-nB2k-fmqi-vyOH-BZsi-PrZU-1UUzbJ', 'scsi-0QEMU_QEMU_HARDDISK_c189cae0-1e0d-4eb8-9970-e970e21b9a89', 'scsi-SQEMU_QEMU_HARDDISK_c189cae0-1e0d-4eb8-9970-e970e21b9a89'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.467363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95e37f14-95e8-4165-b353-fd53fdf52cdb', 'scsi-SQEMU_QEMU_HARDDISK_95e37f14-95e8-4165-b353-fd53fdf52cdb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.467374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-17-00-02-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.467385 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.467395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a9d35e4b--2444--59e0--b6b9--5664c21b8a9c-osd--block--a9d35e4b--2444--59e0--b6b9--5664c21b8a9c', 'dm-uuid-LVM-hN8j80rAYArPOQzJtmZTfMCcfU0wqlndR6bBlKMNPZwYURJvSpmTzjUWOUUOBu34'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467406 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--af980f31--aa48--52cf--851d--a23b8b791ab9-osd--block--af980f31--aa48--52cf--851d--a23b8b791ab9', 'dm-uuid-LVM-2y1cF3yEf7AALsIhQz3m8JX59uQhbdUdsZdK4rydxBeHaLxZvXVM591c9REdgBjE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467478 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:56:39.467517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96', 'scsi-SQEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96-part1', 'scsi-SQEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96-part14', 'scsi-SQEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96-part15', 'scsi-SQEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96-part16', 'scsi-SQEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.467553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a9d35e4b--2444--59e0--b6b9--5664c21b8a9c-osd--block--a9d35e4b--2444--59e0--b6b9--5664c21b8a9c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cKWHeS-W6Sl-A21i-85J5-4nrh-yTq8-iUzTQb', 'scsi-0QEMU_QEMU_HARDDISK_c4c813ed-e09b-49ac-b96f-625695efceb2', 'scsi-SQEMU_QEMU_HARDDISK_c4c813ed-e09b-49ac-b96f-625695efceb2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.468416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--af980f31--aa48--52cf--851d--a23b8b791ab9-osd--block--af980f31--aa48--52cf--851d--a23b8b791ab9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yig9HS-7KiD-1N40-fu03-cWxu-V2Qc-WlWhcg', 'scsi-0QEMU_QEMU_HARDDISK_6309ce49-a4ed-4da7-82b1-29aa79f26650', 'scsi-SQEMU_QEMU_HARDDISK_6309ce49-a4ed-4da7-82b1-29aa79f26650'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.468445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42d2e0a2-f124-4e98-b4f2-6b7948e65700', 'scsi-SQEMU_QEMU_HARDDISK_42d2e0a2-f124-4e98-b4f2-6b7948e65700'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.468456 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-17-00-02-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:56:39.468483 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.468494 | orchestrator | 2025-04-17 01:56:39.468504 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-04-17 01:56:39.468515 | orchestrator | Thursday 17 April 2025 01:45:05 +0000 (0:00:01.908) 0:00:32.609 ******** 2025-04-17 01:56:39.468525 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.468535 | orchestrator | 2025-04-17 01:56:39.468563 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-04-17 01:56:39.468573 | orchestrator | Thursday 17 April 2025 01:45:06 +0000 (0:00:00.335) 0:00:32.944 ******** 2025-04-17 01:56:39.468583 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.468593 | orchestrator | 2025-04-17 01:56:39.468603 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-04-17 01:56:39.468613 | orchestrator | Thursday 17 April 2025 01:45:06 +0000 (0:00:00.163) 0:00:33.108 ******** 2025-04-17 01:56:39.468623 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.468633 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.468644 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.468654 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.468664 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.468674 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.468683 | orchestrator | 2025-04-17 01:56:39.468694 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-04-17 01:56:39.468704 | orchestrator | Thursday 17 April 2025 01:45:07 +0000 
(0:00:00.772) 0:00:33.880 ******** 2025-04-17 01:56:39.468713 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.468723 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.468733 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.468743 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.468753 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.468763 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.468772 | orchestrator | 2025-04-17 01:56:39.468782 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-04-17 01:56:39.468792 | orchestrator | Thursday 17 April 2025 01:45:08 +0000 (0:00:01.444) 0:00:35.325 ******** 2025-04-17 01:56:39.468802 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.468812 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.468822 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.468832 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.468841 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.468851 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.468861 | orchestrator | 2025-04-17 01:56:39.468871 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-17 01:56:39.468881 | orchestrator | Thursday 17 April 2025 01:45:09 +0000 (0:00:00.965) 0:00:36.291 ******** 2025-04-17 01:56:39.468891 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.468901 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.468911 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.468921 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.468931 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.469011 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.469026 | orchestrator | 2025-04-17 01:56:39.469037 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-17 01:56:39.469047 | orchestrator | Thursday 17 April 2025 01:45:10 +0000 (0:00:01.049) 0:00:37.340 ******** 2025-04-17 01:56:39.469057 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.469067 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.469076 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.469086 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.469096 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.469113 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.469123 | orchestrator | 2025-04-17 01:56:39.469133 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-17 01:56:39.469143 | orchestrator | Thursday 17 April 2025 01:45:11 +0000 (0:00:00.841) 0:00:38.182 ******** 2025-04-17 01:56:39.469153 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.469163 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.469173 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.469183 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.469193 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.469202 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.469212 | orchestrator | 2025-04-17 01:56:39.469222 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-17 01:56:39.469232 | orchestrator | Thursday 17 April 2025 01:45:12 +0000 (0:00:01.141) 0:00:39.324 ******** 2025-04-17 01:56:39.469242 | 
orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.469252 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.469262 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.469272 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.469282 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.469292 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.469307 | orchestrator | 2025-04-17 01:56:39.469318 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-04-17 01:56:39.469328 | orchestrator | Thursday 17 April 2025 01:45:13 +0000 (0:00:00.708) 0:00:40.032 ******** 2025-04-17 01:56:39.469355 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-17 01:56:39.469366 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-17 01:56:39.469376 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-17 01:56:39.469386 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-17 01:56:39.469396 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-17 01:56:39.469406 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-17 01:56:39.469416 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-17 01:56:39.469426 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.469436 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-17 01:56:39.469450 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-17 01:56:39.469460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-17 01:56:39.469470 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.469480 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.469490 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-17 01:56:39.469500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-17 01:56:39.469510 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-17 01:56:39.469520 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-17 01:56:39.469530 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-17 01:56:39.469589 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-17 01:56:39.469600 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.469610 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-17 01:56:39.469620 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.469631 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-17 01:56:39.469643 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.469654 | orchestrator | 2025-04-17 01:56:39.469665 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-04-17 01:56:39.469676 | orchestrator | Thursday 17 April 2025 01:45:15 +0000 (0:00:02.626) 0:00:42.659 ******** 2025-04-17 01:56:39.469688 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-17 01:56:39.469699 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-17 01:56:39.469717 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-17 01:56:39.469728 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-17 
01:56:39.469739 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-17 01:56:39.469750 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-17 01:56:39.469761 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-17 01:56:39.469772 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.469783 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-17 01:56:39.469794 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-17 01:56:39.469805 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-17 01:56:39.469816 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-17 01:56:39.469827 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.469839 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.469850 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-17 01:56:39.469861 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.469872 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-17 01:56:39.469883 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-17 01:56:39.469894 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-17 01:56:39.469905 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-17 01:56:39.469984 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-17 01:56:39.470001 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.470012 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-17 01:56:39.470047 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.470057 | orchestrator | 2025-04-17 01:56:39.470067 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-04-17 01:56:39.470077 | orchestrator | Thursday 17 April 2025 01:45:17 +0000 (0:00:01.863) 0:00:44.522 ******** 2025-04-17 01:56:39.470087 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-17 01:56:39.470097 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-04-17 01:56:39.470108 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-04-17 01:56:39.470116 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-04-17 01:56:39.470125 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-04-17 01:56:39.470133 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-04-17 01:56:39.470141 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-04-17 01:56:39.470150 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-04-17 01:56:39.470158 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-04-17 01:56:39.470166 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-04-17 01:56:39.470175 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-04-17 01:56:39.470183 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-04-17 01:56:39.470191 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-04-17 01:56:39.470200 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-04-17 01:56:39.470208 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-04-17 01:56:39.470216 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-04-17 01:56:39.470225 | orchestrator | ok: 
[testbed-node-4] => (item=testbed-node-2) 2025-04-17 01:56:39.470233 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-04-17 01:56:39.470241 | orchestrator | 2025-04-17 01:56:39.470250 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-04-17 01:56:39.470258 | orchestrator | Thursday 17 April 2025 01:45:21 +0000 (0:00:04.150) 0:00:48.672 ******** 2025-04-17 01:56:39.470267 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-17 01:56:39.470275 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-17 01:56:39.470289 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-17 01:56:39.470298 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-17 01:56:39.470306 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-17 01:56:39.470314 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.470323 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-17 01:56:39.470331 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-17 01:56:39.470340 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-17 01:56:39.470348 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-17 01:56:39.470356 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.470373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-17 01:56:39.470381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-17 01:56:39.470390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-17 01:56:39.470398 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.470407 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.470415 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-17 01:56:39.470423 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-17 01:56:39.470431 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-17 01:56:39.470440 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.470448 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-17 01:56:39.470457 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-17 01:56:39.470465 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-17 01:56:39.470473 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.470482 | orchestrator | 2025-04-17 01:56:39.470490 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-04-17 01:56:39.470498 | orchestrator | Thursday 17 April 2025 01:45:22 +0000 (0:00:01.085) 0:00:49.758 ******** 2025-04-17 01:56:39.470507 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-17 01:56:39.470515 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-17 01:56:39.470523 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-17 01:56:39.470532 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.470554 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-17 01:56:39.470562 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-17 01:56:39.470571 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-17 
01:56:39.470579 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.470588 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-17 01:56:39.470596 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-17 01:56:39.470604 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-17 01:56:39.470613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-17 01:56:39.470621 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-17 01:56:39.470629 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-17 01:56:39.470638 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.470646 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-17 01:56:39.470655 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.470717 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-17 01:56:39.470730 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-17 01:56:39.470739 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.470747 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-17 01:56:39.470756 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-17 01:56:39.470770 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-17 01:56:39.470779 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.470787 | orchestrator | 2025-04-17 01:56:39.470795 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-04-17 01:56:39.470808 | orchestrator | Thursday 17 April 2025 01:45:23 +0000 (0:00:00.878) 0:00:50.636 ******** 2025-04-17 01:56:39.470829 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-04-17 01:56:39.470844 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-17 01:56:39.470860 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-17 01:56:39.470875 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-17 01:56:39.470889 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-04-17 01:56:39.470905 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-17 01:56:39.470921 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-17 01:56:39.470937 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-17 01:56:39.470952 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-17 01:56:39.470966 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-17 01:56:39.470975 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-04-17 01:56:39.470984 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-17 01:56:39.470993 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-17 01:56:39.471002 | 
orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-17 01:56:39.471010 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-17 01:56:39.471019 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.471027 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.471040 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-17 01:56:39.471049 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-17 01:56:39.471057 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-17 01:56:39.471066 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.471074 | orchestrator | 2025-04-17 01:56:39.471083 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-04-17 01:56:39.471091 | orchestrator | Thursday 17 April 2025 01:45:24 +0000 (0:00:00.802) 0:00:51.439 ******** 2025-04-17 01:56:39.471100 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.471108 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.471117 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.471125 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.471134 | orchestrator | 2025-04-17 01:56:39.471142 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-17 01:56:39.471151 | orchestrator | Thursday 17 April 2025 01:45:25 +0000 (0:00:00.973) 0:00:52.413 ******** 2025-04-17 01:56:39.471160 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.471168 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.471177 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.471185 | orchestrator | 2025-04-17 01:56:39.471215 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-17 01:56:39.471224 | orchestrator | Thursday 17 April 2025 01:45:26 +0000 (0:00:00.466) 0:00:52.879 ******** 2025-04-17 01:56:39.471232 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.471241 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.471249 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.471257 | orchestrator | 2025-04-17 01:56:39.471266 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-17 01:56:39.471274 | orchestrator | Thursday 17 April 2025 01:45:26 +0000 (0:00:00.830) 0:00:53.710 ******** 2025-04-17 01:56:39.471283 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.471291 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.471300 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.471308 | orchestrator | 2025-04-17 01:56:39.471316 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-17 01:56:39.471324 | orchestrator | Thursday 17 April 2025 01:45:27 +0000 (0:00:00.553) 0:00:54.264 ******** 2025-04-17 01:56:39.471333 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.471342 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.471352 | orchestrator | ok: [testbed-node-5] 
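Note on the address-selection pattern above: ceph-facts tries a chain of sources in order — monitor_address_block (ipv4, then ipv6), an explicit monitor_address, and finally the address on monitor_interface — and repeats the same chain for the radosgw address. Only one branch applies per run, which is why most of these tasks report skipping; here the explicit address branches win (ok for all monitors and for testbed-node-3/4/5). A minimal sketch of that fallback pattern, assuming hypothetical placeholder defaults ('x.x.x.x', 'dummy') and not the literal ceph-ansible task file:

    # Sketch of the fallback chain: take the explicit address if it was
    # overridden, otherwise look the address up on the configured interface.
    # The sentinel values are assumptions for illustration only.
    - name: set_fact _radosgw_address to radosgw_address
      ansible.builtin.set_fact:
        _radosgw_address: "{{ radosgw_address }}"
      when: radosgw_address != 'x.x.x.x'

    - name: set_fact _radosgw_address to radosgw_interface - ipv4
      ansible.builtin.set_fact:
        _radosgw_address: "{{ ansible_facts[radosgw_interface | replace('-', '_')]['ipv4']['address'] }}"
      when: radosgw_interface != 'dummy'

Because the explicit-address branch sets the fact first, the interface branches below it are skipped, matching the ok/skipping pattern in this log.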
2025-04-17 01:56:39.471362 | orchestrator | 2025-04-17 01:56:39.471372 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-17 01:56:39.471459 | orchestrator | Thursday 17 April 2025 01:45:28 +0000 (0:00:00.983) 0:00:55.248 ******** 2025-04-17 01:56:39.471482 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-17 01:56:39.471497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-17 01:56:39.471508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-17 01:56:39.471517 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.471527 | orchestrator | 2025-04-17 01:56:39.471550 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-17 01:56:39.471560 | orchestrator | Thursday 17 April 2025 01:45:29 +0000 (0:00:00.584) 0:00:55.832 ******** 2025-04-17 01:56:39.471570 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-17 01:56:39.471580 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-17 01:56:39.471589 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-17 01:56:39.471598 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.471606 | orchestrator | 2025-04-17 01:56:39.471615 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-17 01:56:39.471623 | orchestrator | Thursday 17 April 2025 01:45:29 +0000 (0:00:00.614) 0:00:56.447 ******** 2025-04-17 01:56:39.471631 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-17 01:56:39.471640 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-17 01:56:39.471648 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-17 01:56:39.471657 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.471671 | orchestrator | 2025-04-17 01:56:39.471680 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-17 01:56:39.471688 | orchestrator | Thursday 17 April 2025 01:45:30 +0000 (0:00:01.112) 0:00:57.560 ******** 2025-04-17 01:56:39.471697 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.471705 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.471713 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.471722 | orchestrator | 2025-04-17 01:56:39.471730 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-17 01:56:39.471739 | orchestrator | Thursday 17 April 2025 01:45:31 +0000 (0:00:00.650) 0:00:58.210 ******** 2025-04-17 01:56:39.471747 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-17 01:56:39.471756 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-04-17 01:56:39.471764 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-04-17 01:56:39.471777 | orchestrator | 2025-04-17 01:56:39.471786 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-17 01:56:39.471800 | orchestrator | Thursday 17 April 2025 01:45:32 +0000 (0:00:01.116) 0:00:59.327 ******** 2025-04-17 01:56:39.471809 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.471822 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.471831 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.471839 | orchestrator | 2025-04-17 01:56:39.471848 | orchestrator | TASK [ceph-facts : reset 
rgw_instances (workaround)] *************************** 2025-04-17 01:56:39.471856 | orchestrator | Thursday 17 April 2025 01:45:33 +0000 (0:00:00.668) 0:00:59.995 ******** 2025-04-17 01:56:39.471865 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.471873 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.471881 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.471890 | orchestrator | 2025-04-17 01:56:39.471898 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-17 01:56:39.471907 | orchestrator | Thursday 17 April 2025 01:45:33 +0000 (0:00:00.738) 0:01:00.734 ******** 2025-04-17 01:56:39.471915 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-17 01:56:39.471923 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.471932 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-17 01:56:39.471940 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.471949 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-17 01:56:39.471957 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.471966 | orchestrator | 2025-04-17 01:56:39.471974 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-17 01:56:39.471983 | orchestrator | Thursday 17 April 2025 01:45:34 +0000 (0:00:00.915) 0:01:01.649 ******** 2025-04-17 01:56:39.471991 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-17 01:56:39.472000 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.472009 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-17 01:56:39.472017 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.472026 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-17 01:56:39.472035 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.472043 | orchestrator | 2025-04-17 01:56:39.472051 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-17 01:56:39.472060 | orchestrator | Thursday 17 April 2025 01:45:35 +0000 (0:00:00.659) 0:01:02.308 ******** 2025-04-17 01:56:39.472068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-17 01:56:39.472080 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-17 01:56:39.472089 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-17 01:56:39.472097 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-17 01:56:39.472106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-17 01:56:39.472114 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.472122 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-17 01:56:39.472131 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-17 01:56:39.472139 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-17 01:56:39.472148 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.472214 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-17 01:56:39.472227 | orchestrator | skipping: [testbed-node-5] 2025-04-17 
01:56:39.472236 | orchestrator | 2025-04-17 01:56:39.472244 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-04-17 01:56:39.472253 | orchestrator | Thursday 17 April 2025 01:45:36 +0000 (0:00:00.524) 0:01:02.833 ******** 2025-04-17 01:56:39.472262 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.472270 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.472284 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.472300 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.472316 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.472331 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.472347 | orchestrator | 2025-04-17 01:56:39.472363 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-04-17 01:56:39.472379 | orchestrator | Thursday 17 April 2025 01:45:36 +0000 (0:00:00.621) 0:01:03.455 ******** 2025-04-17 01:56:39.472393 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-17 01:56:39.472403 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-17 01:56:39.472411 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-17 01:56:39.472420 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-04-17 01:56:39.472428 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-17 01:56:39.472437 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-17 01:56:39.472445 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-17 01:56:39.472453 | orchestrator | 2025-04-17 01:56:39.472462 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-04-17 01:56:39.472470 | orchestrator | Thursday 17 April 2025 01:45:37 +0000 (0:00:01.070) 0:01:04.525 ******** 2025-04-17 01:56:39.472479 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-17 01:56:39.472488 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-17 01:56:39.472496 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-17 01:56:39.472504 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-04-17 01:56:39.472513 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-17 01:56:39.472521 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-17 01:56:39.472529 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-17 01:56:39.472578 | orchestrator | 2025-04-17 01:56:39.472590 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-17 01:56:39.472599 | orchestrator | Thursday 17 April 2025 01:45:39 +0000 (0:00:01.560) 0:01:06.086 ******** 2025-04-17 01:56:39.472609 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.472620 | orchestrator | 2025-04-17 01:56:39.472645 | orchestrator | TASK [ceph-handler : 
check for a mon container] ******************************** 2025-04-17 01:56:39.472655 | orchestrator | Thursday 17 April 2025 01:45:40 +0000 (0:00:01.208) 0:01:07.294 ******** 2025-04-17 01:56:39.472663 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.472671 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.472680 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.472688 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.472696 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.472705 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.472713 | orchestrator | 2025-04-17 01:56:39.472722 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-17 01:56:39.472730 | orchestrator | Thursday 17 April 2025 01:45:41 +0000 (0:00:00.616) 0:01:07.911 ******** 2025-04-17 01:56:39.472739 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.472747 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.472755 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.472764 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.472772 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.472788 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.472796 | orchestrator | 2025-04-17 01:56:39.472805 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-17 01:56:39.472813 | orchestrator | Thursday 17 April 2025 01:45:42 +0000 (0:00:01.153) 0:01:09.064 ******** 2025-04-17 01:56:39.472821 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.472830 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.472838 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.472848 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.472857 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.472867 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.472876 | orchestrator | 2025-04-17 01:56:39.472885 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-17 01:56:39.472895 | orchestrator | Thursday 17 April 2025 01:45:43 +0000 (0:00:01.427) 0:01:10.492 ******** 2025-04-17 01:56:39.472904 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.472913 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.472922 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.472932 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.472941 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.472951 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.472961 | orchestrator | 2025-04-17 01:56:39.472970 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-17 01:56:39.472980 | orchestrator | Thursday 17 April 2025 01:45:44 +0000 (0:00:01.214) 0:01:11.707 ******** 2025-04-17 01:56:39.472989 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.473001 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.473095 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.473111 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.473121 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.473131 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.473146 | orchestrator | 2025-04-17 01:56:39.473156 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-17 
01:56:39.473165 | orchestrator | Thursday 17 April 2025 01:45:45 +0000 (0:00:01.012) 0:01:12.719 ******** 2025-04-17 01:56:39.473174 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.473183 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.473191 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.473200 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.473209 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.473217 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.473226 | orchestrator | 2025-04-17 01:56:39.473239 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-17 01:56:39.473248 | orchestrator | Thursday 17 April 2025 01:45:46 +0000 (0:00:00.725) 0:01:13.444 ******** 2025-04-17 01:56:39.473256 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.473265 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.473274 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.473283 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.473292 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.473299 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.473307 | orchestrator | 2025-04-17 01:56:39.473315 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-17 01:56:39.473323 | orchestrator | Thursday 17 April 2025 01:45:47 +0000 (0:00:00.817) 0:01:14.261 ******** 2025-04-17 01:56:39.473331 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.473339 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.473346 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.473354 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.473362 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.473370 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.473378 | orchestrator | 2025-04-17 01:56:39.473385 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-17 01:56:39.473393 | orchestrator | Thursday 17 April 2025 01:45:48 +0000 (0:00:00.699) 0:01:14.961 ******** 2025-04-17 01:56:39.473406 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.473414 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.473422 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.473430 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.473438 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.473445 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.473453 | orchestrator | 2025-04-17 01:56:39.473461 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-17 01:56:39.473469 | orchestrator | Thursday 17 April 2025 01:45:49 +0000 (0:00:00.896) 0:01:15.858 ******** 2025-04-17 01:56:39.473477 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.473484 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.473492 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.473500 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.473508 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.473515 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.473523 | orchestrator | 2025-04-17 01:56:39.473531 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 
2025-04-17 01:56:39.473552 | orchestrator | Thursday 17 April 2025 01:45:49 +0000 (0:00:00.801) 0:01:16.659 ******** 2025-04-17 01:56:39.473560 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.473568 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.473576 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.473584 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.473592 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.473600 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.473607 | orchestrator | 2025-04-17 01:56:39.473615 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-17 01:56:39.473623 | orchestrator | Thursday 17 April 2025 01:45:51 +0000 (0:00:01.186) 0:01:17.846 ******** 2025-04-17 01:56:39.473631 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.473639 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.473647 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.473655 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.473662 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.473670 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.473678 | orchestrator | 2025-04-17 01:56:39.473686 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-17 01:56:39.473694 | orchestrator | Thursday 17 April 2025 01:45:51 +0000 (0:00:00.503) 0:01:18.350 ******** 2025-04-17 01:56:39.473701 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.473709 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.473717 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.473725 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.473733 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.473740 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.473748 | orchestrator | 2025-04-17 01:56:39.473756 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-17 01:56:39.473764 | orchestrator | Thursday 17 April 2025 01:45:52 +0000 (0:00:00.597) 0:01:18.947 ******** 2025-04-17 01:56:39.473772 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.473780 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.473787 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.473795 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.473811 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.473820 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.473828 | orchestrator | 2025-04-17 01:56:39.473836 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-17 01:56:39.473844 | orchestrator | Thursday 17 April 2025 01:45:52 +0000 (0:00:00.452) 0:01:19.399 ******** 2025-04-17 01:56:39.473852 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.473859 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.473867 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.473879 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.473887 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.473895 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.473903 | orchestrator | 2025-04-17 01:56:39.473911 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-17 01:56:39.473970 | orchestrator | Thursday 17 April 2025 01:45:53 +0000 
(0:00:00.673) 0:01:20.072 ******** 2025-04-17 01:56:39.473982 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.473990 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.473998 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.474006 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.474031 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.474041 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.474048 | orchestrator | 2025-04-17 01:56:39.474056 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-17 01:56:39.474064 | orchestrator | Thursday 17 April 2025 01:45:53 +0000 (0:00:00.531) 0:01:20.604 ******** 2025-04-17 01:56:39.474072 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.474080 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.474087 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.474095 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.474103 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.474111 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.474119 | orchestrator | 2025-04-17 01:56:39.474126 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-17 01:56:39.474134 | orchestrator | Thursday 17 April 2025 01:45:54 +0000 (0:00:00.657) 0:01:21.262 ******** 2025-04-17 01:56:39.474142 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.474150 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.474157 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.474165 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.474173 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.474181 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.474188 | orchestrator | 2025-04-17 01:56:39.474196 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-17 01:56:39.474204 | orchestrator | Thursday 17 April 2025 01:45:54 +0000 (0:00:00.503) 0:01:21.765 ******** 2025-04-17 01:56:39.474211 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.474219 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.474227 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.474235 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.474242 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.474250 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.474258 | orchestrator | 2025-04-17 01:56:39.474266 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-17 01:56:39.474273 | orchestrator | Thursday 17 April 2025 01:45:56 +0000 (0:00:01.007) 0:01:22.772 ******** 2025-04-17 01:56:39.474281 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.474289 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.474296 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.474304 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.474312 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.474319 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.474327 | orchestrator | 2025-04-17 01:56:39.474335 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-17 01:56:39.474343 | orchestrator | Thursday 17 April 2025 01:45:56 +0000 (0:00:00.959) 0:01:23.731 ******** 2025-04-17 01:56:39.474350 | 
orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.474358 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.474366 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.474373 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.474381 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.474389 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.474396 | orchestrator | 2025-04-17 01:56:39.474404 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-17 01:56:39.474422 | orchestrator | Thursday 17 April 2025 01:45:57 +0000 (0:00:00.870) 0:01:24.602 ******** 2025-04-17 01:56:39.474430 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.474438 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.474463 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.474472 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.474480 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.474487 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.474499 | orchestrator | 2025-04-17 01:56:39.474507 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-17 01:56:39.474515 | orchestrator | Thursday 17 April 2025 01:45:58 +0000 (0:00:00.583) 0:01:25.186 ******** 2025-04-17 01:56:39.474522 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.474530 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.474578 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.474587 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.474595 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.474605 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.474613 | orchestrator | 2025-04-17 01:56:39.474622 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-17 01:56:39.474631 | orchestrator | Thursday 17 April 2025 01:45:59 +0000 (0:00:00.816) 0:01:26.002 ******** 2025-04-17 01:56:39.474640 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.474648 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.474657 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.474665 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.474673 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.474682 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.474690 | orchestrator | 2025-04-17 01:56:39.474699 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-17 01:56:39.474708 | orchestrator | Thursday 17 April 2025 01:45:59 +0000 (0:00:00.588) 0:01:26.590 ******** 2025-04-17 01:56:39.474717 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.474726 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.474734 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.474743 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.474751 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.474760 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.474768 | orchestrator | 2025-04-17 01:56:39.474777 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-17 01:56:39.474785 | orchestrator | Thursday 17 April 2025 01:46:00 +0000 (0:00:00.873) 0:01:27.464 ******** 2025-04-17 
01:56:39.474794 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.474803 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.474811 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.474819 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.474828 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.474837 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.474845 | orchestrator | 2025-04-17 01:56:39.474920 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-17 01:56:39.474940 | orchestrator | Thursday 17 April 2025 01:46:01 +0000 (0:00:00.581) 0:01:28.046 ******** 2025-04-17 01:56:39.474955 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.474968 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.474982 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.474992 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.474999 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.475007 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.475015 | orchestrator | 2025-04-17 01:56:39.475023 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-17 01:56:39.475032 | orchestrator | Thursday 17 April 2025 01:46:02 +0000 (0:00:00.782) 0:01:28.828 ******** 2025-04-17 01:56:39.475047 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.475055 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.475062 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.475070 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.475078 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.475086 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.475093 | orchestrator | 2025-04-17 01:56:39.475101 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-17 01:56:39.475109 | orchestrator | Thursday 17 April 2025 01:46:02 +0000 (0:00:00.595) 0:01:29.424 ******** 2025-04-17 01:56:39.475118 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.475125 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.475133 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.475141 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.475149 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.475156 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.475164 | orchestrator | 2025-04-17 01:56:39.475171 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-17 01:56:39.475178 | orchestrator | Thursday 17 April 2025 01:46:03 +0000 (0:00:00.883) 0:01:30.307 ******** 2025-04-17 01:56:39.475185 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.475192 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.475198 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.475205 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.475212 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.475219 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.475225 | orchestrator | 2025-04-17 01:56:39.475232 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-17 01:56:39.475240 
| orchestrator | Thursday 17 April 2025 01:46:04 +0000 (0:00:00.798) 0:01:31.106 ******** 2025-04-17 01:56:39.475247 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.475253 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.475260 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.475271 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.475278 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.475285 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.475292 | orchestrator | 2025-04-17 01:56:39.475299 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-17 01:56:39.475306 | orchestrator | Thursday 17 April 2025 01:46:05 +0000 (0:00:01.040) 0:01:32.146 ******** 2025-04-17 01:56:39.475312 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.475319 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.475326 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.475333 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.475340 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.475346 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.475353 | orchestrator | 2025-04-17 01:56:39.475360 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-17 01:56:39.475368 | orchestrator | Thursday 17 April 2025 01:46:05 +0000 (0:00:00.595) 0:01:32.742 ******** 2025-04-17 01:56:39.475374 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-17 01:56:39.475382 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-17 01:56:39.475388 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.475395 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-17 01:56:39.475403 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-17 01:56:39.475409 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.475416 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-17 01:56:39.475423 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-17 01:56:39.475431 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.475442 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-17 01:56:39.475458 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-17 01:56:39.475470 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-17 01:56:39.475482 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-17 01:56:39.475490 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.475497 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.475504 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-17 01:56:39.475514 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-17 01:56:39.475521 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.475528 | orchestrator | 2025-04-17 01:56:39.475535 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-17 01:56:39.475556 | orchestrator | Thursday 17 April 2025 01:46:07 +0000 (0:00:01.102) 0:01:33.844 ******** 2025-04-17 01:56:39.475563 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-17 01:56:39.475570 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-17 01:56:39.475577 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.475584 | orchestrator | skipping: 
[testbed-node-1] => (item=osd memory target)  2025-04-17 01:56:39.475591 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-17 01:56:39.475598 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.475605 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-17 01:56:39.475612 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-17 01:56:39.475669 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.475680 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-17 01:56:39.475687 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-17 01:56:39.475693 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.475700 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-17 01:56:39.475707 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-17 01:56:39.475714 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.475721 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-17 01:56:39.475728 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-17 01:56:39.475735 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.475742 | orchestrator | 2025-04-17 01:56:39.475748 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-17 01:56:39.475755 | orchestrator | Thursday 17 April 2025 01:46:07 +0000 (0:00:00.695) 0:01:34.541 ******** 2025-04-17 01:56:39.475762 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.475769 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.475775 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.475782 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.475789 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.475795 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.475802 | orchestrator | 2025-04-17 01:56:39.475809 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-17 01:56:39.475816 | orchestrator | Thursday 17 April 2025 01:46:08 +0000 (0:00:00.806) 0:01:35.347 ******** 2025-04-17 01:56:39.475823 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.475830 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.475836 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.475843 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.475850 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.475857 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.475863 | orchestrator | 2025-04-17 01:56:39.475870 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-17 01:56:39.475878 | orchestrator | Thursday 17 April 2025 01:46:09 +0000 (0:00:00.612) 0:01:35.960 ******** 2025-04-17 01:56:39.475885 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.475897 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.475904 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.475911 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.475918 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.475924 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.475931 | orchestrator | 2025-04-17 
01:56:39.475938 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-17 01:56:39.475945 | orchestrator | Thursday 17 April 2025 01:46:09 +0000 (0:00:00.702) 0:01:36.662 ******** 2025-04-17 01:56:39.475952 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.475958 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.475965 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.475972 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.475979 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.475986 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.475992 | orchestrator | 2025-04-17 01:56:39.475999 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-17 01:56:39.476006 | orchestrator | Thursday 17 April 2025 01:46:10 +0000 (0:00:00.689) 0:01:37.352 ******** 2025-04-17 01:56:39.476013 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.476019 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.476026 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.476033 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.476040 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.476047 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.476057 | orchestrator | 2025-04-17 01:56:39.476077 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-17 01:56:39.476085 | orchestrator | Thursday 17 April 2025 01:46:11 +0000 (0:00:01.091) 0:01:38.443 ******** 2025-04-17 01:56:39.476091 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.476098 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.476105 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.476112 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.476118 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.476125 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.476132 | orchestrator | 2025-04-17 01:56:39.476141 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-17 01:56:39.476148 | orchestrator | Thursday 17 April 2025 01:46:12 +0000 (0:00:00.617) 0:01:39.061 ******** 2025-04-17 01:56:39.476155 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-17 01:56:39.476162 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-17 01:56:39.476169 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-17 01:56:39.476175 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.476182 | orchestrator | 2025-04-17 01:56:39.476189 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-17 01:56:39.476196 | orchestrator | Thursday 17 April 2025 01:46:13 +0000 (0:00:00.844) 0:01:39.906 ******** 2025-04-17 01:56:39.476203 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-17 01:56:39.476209 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-17 01:56:39.476216 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-17 01:56:39.476223 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.476230 | orchestrator | 2025-04-17 01:56:39.476237 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 
2025-04-17 01:56:39.476243 | orchestrator | Thursday 17 April 2025 01:46:13 +0000 (0:00:00.391) 0:01:40.297 ******** 2025-04-17 01:56:39.476250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-17 01:56:39.476257 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-17 01:56:39.476264 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-17 01:56:39.476314 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.476330 | orchestrator | 2025-04-17 01:56:39.476337 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-17 01:56:39.476344 | orchestrator | Thursday 17 April 2025 01:46:13 +0000 (0:00:00.390) 0:01:40.688 ******** 2025-04-17 01:56:39.476351 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.476357 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.476364 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.476371 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.476378 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.476385 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.476392 | orchestrator | 2025-04-17 01:56:39.476399 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-17 01:56:39.476406 | orchestrator | Thursday 17 April 2025 01:46:14 +0000 (0:00:00.601) 0:01:41.289 ******** 2025-04-17 01:56:39.476412 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-17 01:56:39.476419 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.476426 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-17 01:56:39.476433 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.476440 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-17 01:56:39.476447 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.476453 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-17 01:56:39.476460 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.476467 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-17 01:56:39.476474 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.476481 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-17 01:56:39.476487 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.476494 | orchestrator | 2025-04-17 01:56:39.476501 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-17 01:56:39.476508 | orchestrator | Thursday 17 April 2025 01:46:15 +0000 (0:00:00.984) 0:01:42.273 ******** 2025-04-17 01:56:39.476515 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.476522 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.476528 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.476535 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.476556 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.476563 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.476570 | orchestrator | 2025-04-17 01:56:39.476577 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-17 01:56:39.476584 | orchestrator | Thursday 17 April 2025 01:46:16 +0000 (0:00:00.578) 0:01:42.851 ******** 2025-04-17 01:56:39.476591 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.476597 | orchestrator | skipping: [testbed-node-1] 
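Annotation: the ceph-facts tasks in this stretch pick the address each RADOS gateway binds to, trying radosgw_address_block first, then the explicit radosgw_address, then radosgw_interface. A minimal sketch of that selection chain in Ansible YAML; the variable names follow the task titles above, but the filter details are an assumption, not the exact ceph-ansible implementation:

    - name: set_fact _radosgw_address to radosgw_address_block ipv4
      ansible.builtin.set_fact:
        # pick the host IPv4 address that falls inside the configured CIDR block
        _radosgw_address: "{{ ansible_facts['all_ipv4_addresses'] | ansible.utils.ipaddr(radosgw_address_block) | first }}"
      when: radosgw_address_block is defined and radosgw_address_block | length > 0

    - name: set_fact _radosgw_address to radosgw_address
      ansible.builtin.set_fact:
        # fall back to an explicitly configured address
        _radosgw_address: "{{ radosgw_address }}"
      when: radosgw_address is defined and radosgw_address != 'x.x.x.x'

In the pass logged above every branch is skipped on every node; in the later pass further down, the radosgw_address branch is the one that returns ok on testbed-node-3/4/5, consistent with the 192.168.16.13-15 addresses that appear in the rgw_instances loop items.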
2025-04-17 01:56:39.476604 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.476611 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.476618 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.476625 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.476631 | orchestrator | 2025-04-17 01:56:39.476638 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-17 01:56:39.476645 | orchestrator | Thursday 17 April 2025 01:46:16 +0000 (0:00:00.812) 0:01:43.664 ******** 2025-04-17 01:56:39.476652 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-17 01:56:39.476659 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.476666 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-17 01:56:39.476673 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-17 01:56:39.476679 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.476686 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-17 01:56:39.476693 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.476700 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.476707 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-17 01:56:39.476714 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.476720 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-17 01:56:39.476732 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.476739 | orchestrator | 2025-04-17 01:56:39.476746 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-17 01:56:39.476752 | orchestrator | Thursday 17 April 2025 01:46:17 +0000 (0:00:00.751) 0:01:44.415 ******** 2025-04-17 01:56:39.476759 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.476766 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.476773 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.476780 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-17 01:56:39.476787 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.476794 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-17 01:56:39.476800 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.476811 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-17 01:56:39.476818 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.476825 | orchestrator | 2025-04-17 01:56:39.476832 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-17 01:56:39.476839 | orchestrator | Thursday 17 April 2025 01:46:18 +0000 (0:00:00.830) 0:01:45.245 ******** 2025-04-17 01:56:39.476845 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-17 01:56:39.476852 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-17 01:56:39.476859 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-17 01:56:39.476866 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.476873 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-17 01:56:39.476879 | orchestrator | skipping: [testbed-node-1] 
=> (item=testbed-node-4)  2025-04-17 01:56:39.476886 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-17 01:56:39.476893 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.476906 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-17 01:56:39.476953 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-17 01:56:39.476963 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-17 01:56:39.476970 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.476977 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-17 01:56:39.476984 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-17 01:56:39.476991 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-17 01:56:39.476997 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.477004 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-17 01:56:39.477011 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-17 01:56:39.477018 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-17 01:56:39.477025 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.477031 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-17 01:56:39.477038 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-17 01:56:39.477045 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-17 01:56:39.477052 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.477059 | orchestrator | 2025-04-17 01:56:39.477065 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-17 01:56:39.477072 | orchestrator | Thursday 17 April 2025 01:46:20 +0000 (0:00:01.522) 0:01:46.768 ******** 2025-04-17 01:56:39.477079 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.477091 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.477098 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.477105 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.477117 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.477124 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.477131 | orchestrator | 2025-04-17 01:56:39.477137 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-17 01:56:39.477144 | orchestrator | Thursday 17 April 2025 01:46:21 +0000 (0:00:01.204) 0:01:47.973 ******** 2025-04-17 01:56:39.477151 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.477158 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.477165 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.477172 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-17 01:56:39.477179 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.477186 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-17 01:56:39.477192 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.477199 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-17 01:56:39.477206 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.477213 | orchestrator | 2025-04-17 01:56:39.477220 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-17 01:56:39.477227 | orchestrator | Thursday 17 
April 2025 01:46:22 +0000 (0:00:01.316) 0:01:49.289 ******** 2025-04-17 01:56:39.477233 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.477240 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.477247 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.477254 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.477261 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.477267 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.477274 | orchestrator | 2025-04-17 01:56:39.477281 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-17 01:56:39.477288 | orchestrator | Thursday 17 April 2025 01:46:23 +0000 (0:00:01.236) 0:01:50.526 ******** 2025-04-17 01:56:39.477295 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.477301 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.477319 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.477326 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.477332 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.477339 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.477346 | orchestrator | 2025-04-17 01:56:39.477353 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-04-17 01:56:39.477360 | orchestrator | Thursday 17 April 2025 01:46:24 +0000 (0:00:01.053) 0:01:51.579 ******** 2025-04-17 01:56:39.477367 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.477374 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.477380 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.477387 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:56:39.477394 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:56:39.477401 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:56:39.477407 | orchestrator | 2025-04-17 01:56:39.477414 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-04-17 01:56:39.477421 | orchestrator | Thursday 17 April 2025 01:46:26 +0000 (0:00:01.465) 0:01:53.045 ******** 2025-04-17 01:56:39.477428 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.477435 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:56:39.477442 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.477448 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.477455 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:56:39.477462 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:56:39.477469 | orchestrator | 2025-04-17 01:56:39.477480 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] *********************** 2025-04-17 01:56:39.477487 | orchestrator | Thursday 17 April 2025 01:46:28 +0000 (0:00:02.068) 0:01:55.113 ******** 2025-04-17 01:56:39.477494 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.477506 | orchestrator | 2025-04-17 01:56:39.477514 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-04-17 01:56:39.477520 | orchestrator | Thursday 17 April 2025 01:46:29 +0000 (0:00:01.047) 0:01:56.161 ******** 2025-04-17 01:56:39.477527 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.477534 | orchestrator | skipping: [testbed-node-1] 2025-04-17 
01:56:39.477555 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.477562 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.477569 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.477576 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.477582 | orchestrator | 2025-04-17 01:56:39.477633 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-04-17 01:56:39.477643 | orchestrator | Thursday 17 April 2025 01:46:30 +0000 (0:00:00.640) 0:01:56.802 ******** 2025-04-17 01:56:39.477650 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.477657 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.477664 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.477671 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.477678 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.477685 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.477692 | orchestrator | 2025-04-17 01:56:39.477699 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-04-17 01:56:39.477706 | orchestrator | Thursday 17 April 2025 01:46:30 +0000 (0:00:00.955) 0:01:57.757 ******** 2025-04-17 01:56:39.477713 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-17 01:56:39.477720 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-17 01:56:39.477727 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-17 01:56:39.477734 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-17 01:56:39.477740 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-17 01:56:39.477747 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-17 01:56:39.477755 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-17 01:56:39.477762 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-17 01:56:39.477769 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-17 01:56:39.477776 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-17 01:56:39.477783 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-17 01:56:39.477789 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-17 01:56:39.477796 | orchestrator | 2025-04-17 01:56:39.477803 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-04-17 01:56:39.477810 | orchestrator | Thursday 17 April 2025 01:46:32 +0000 (0:00:01.699) 0:01:59.457 ******** 2025-04-17 01:56:39.477817 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.477824 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.477831 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.477842 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:56:39.477849 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:56:39.477856 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:56:39.477863 | orchestrator | 2025-04-17 01:56:39.477870 | orchestrator | TASK [ceph-container-common : restore 
certificates selinux context] ************ 2025-04-17 01:56:39.477876 | orchestrator | Thursday 17 April 2025 01:46:34 +0000 (0:00:01.317) 0:02:00.774 ******** 2025-04-17 01:56:39.477883 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.477977 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.477985 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.477992 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.478004 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.478011 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.478039 | orchestrator | 2025-04-17 01:56:39.478046 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-04-17 01:56:39.478053 | orchestrator | Thursday 17 April 2025 01:46:34 +0000 (0:00:00.797) 0:02:01.572 ******** 2025-04-17 01:56:39.478060 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.478067 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.478074 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.478080 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.478087 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.478094 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.478101 | orchestrator | 2025-04-17 01:56:39.478108 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-04-17 01:56:39.478115 | orchestrator | Thursday 17 April 2025 01:46:35 +0000 (0:00:00.449) 0:02:02.022 ******** 2025-04-17 01:56:39.478122 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.478129 | orchestrator | 2025-04-17 01:56:39.478136 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] *** 2025-04-17 01:56:39.478142 | orchestrator | Thursday 17 April 2025 01:46:36 +0000 (0:00:01.089) 0:02:03.111 ******** 2025-04-17 01:56:39.478149 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.478156 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.478163 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.478170 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.478177 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.478183 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.478190 | orchestrator | 2025-04-17 01:56:39.478197 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-04-17 01:56:39.478204 | orchestrator | Thursday 17 April 2025 01:47:06 +0000 (0:00:29.673) 0:02:32.785 ******** 2025-04-17 01:56:39.478215 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-17 01:56:39.478222 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-17 01:56:39.478229 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-17 01:56:39.478236 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.478242 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-17 01:56:39.478250 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-17 01:56:39.478301 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  
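Annotation: the ceph-daemon image pull above (registry.osism.tech/osism/ceph-daemon:17.2.7, roughly 30 seconds of wall time) reports ok rather than changed, the usual pattern when a pull task pins changed_when: false. A rough sketch of such a task, assuming the common ceph-ansible variable names (container_binary, ceph_docker_registry, ceph_docker_image, ceph_docker_image_tag); treat it as an approximation of fetch_image.yml, not a verbatim copy:

    - name: pulling {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }} image
      ansible.builtin.command: "{{ container_binary }} pull {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}"
      changed_when: false          # a pull is idempotent; never report a change
      register: docker_image
      retries: 3                   # transient registry failures are retried
      delay: 10
      until: docker_image.rc == 0

The alertmanager/prometheus/grafana pulls skipped around this point follow the same pattern but are gated behind the monitoring/dashboard stack, which is presumably not enabled in this job.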
2025-04-17 01:56:39.478312 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.478319 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-17 01:56:39.478325 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-17 01:56:39.478332 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-17 01:56:39.478339 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.478346 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-17 01:56:39.478353 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-17 01:56:39.478360 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-17 01:56:39.478366 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.478373 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-17 01:56:39.478380 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-17 01:56:39.478387 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-17 01:56:39.478399 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.478406 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-17 01:56:39.478413 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-17 01:56:39.478419 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-17 01:56:39.478426 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.478433 | orchestrator | 2025-04-17 01:56:39.478440 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-04-17 01:56:39.478446 | orchestrator | Thursday 17 April 2025 01:47:07 +0000 (0:00:01.027) 0:02:33.813 ******** 2025-04-17 01:56:39.478453 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.478460 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.478467 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.478474 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.478481 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.478488 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.478495 | orchestrator | 2025-04-17 01:56:39.478502 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-04-17 01:56:39.478508 | orchestrator | Thursday 17 April 2025 01:47:07 +0000 (0:00:00.686) 0:02:34.499 ******** 2025-04-17 01:56:39.478515 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.478522 | orchestrator | 2025-04-17 01:56:39.478529 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-04-17 01:56:39.478536 | orchestrator | Thursday 17 April 2025 01:47:07 +0000 (0:00:00.165) 0:02:34.665 ******** 2025-04-17 01:56:39.478580 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.478587 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.478594 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.478601 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.478608 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.478614 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.478621 | 
orchestrator | 2025-04-17 01:56:39.478628 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-04-17 01:56:39.478635 | orchestrator | Thursday 17 April 2025 01:47:08 +0000 (0:00:01.011) 0:02:35.676 ******** 2025-04-17 01:56:39.478642 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.478648 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.478655 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.478662 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.478669 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.478675 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.478682 | orchestrator | 2025-04-17 01:56:39.478689 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-04-17 01:56:39.478696 | orchestrator | Thursday 17 April 2025 01:47:09 +0000 (0:00:00.872) 0:02:36.549 ******** 2025-04-17 01:56:39.478703 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.478710 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.478716 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.478723 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.478730 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.478741 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.478748 | orchestrator | 2025-04-17 01:56:39.478755 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-04-17 01:56:39.478762 | orchestrator | Thursday 17 April 2025 01:47:10 +0000 (0:00:00.839) 0:02:37.389 ******** 2025-04-17 01:56:39.478768 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.478775 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.478782 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.478789 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.478796 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.478803 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.478809 | orchestrator | 2025-04-17 01:56:39.478816 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-04-17 01:56:39.478845 | orchestrator | Thursday 17 April 2025 01:47:12 +0000 (0:00:01.751) 0:02:39.141 ******** 2025-04-17 01:56:39.478853 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.478859 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.478866 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.478873 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.478879 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.478886 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.478893 | orchestrator | 2025-04-17 01:56:39.478899 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-04-17 01:56:39.478905 | orchestrator | Thursday 17 April 2025 01:47:13 +0000 (0:00:00.752) 0:02:39.894 ******** 2025-04-17 01:56:39.478911 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.478919 | orchestrator | 2025-04-17 01:56:39.478970 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-04-17 01:56:39.478980 | orchestrator | Thursday 17 April 2025 01:47:14 +0000 (0:00:01.053) 0:02:40.947 ******** 2025-04-17 
01:56:39.478987 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.478993 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.479000 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.479007 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.479014 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.479021 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.479028 | orchestrator | 2025-04-17 01:56:39.479035 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-04-17 01:56:39.479042 | orchestrator | Thursday 17 April 2025 01:47:14 +0000 (0:00:00.617) 0:02:41.565 ******** 2025-04-17 01:56:39.479048 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.479054 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.479060 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.479066 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.479072 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.479078 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.479084 | orchestrator | 2025-04-17 01:56:39.479090 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-04-17 01:56:39.479096 | orchestrator | Thursday 17 April 2025 01:47:15 +0000 (0:00:00.643) 0:02:42.208 ******** 2025-04-17 01:56:39.479102 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.479108 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.479114 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.479120 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.479126 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.479132 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.479138 | orchestrator | 2025-04-17 01:56:39.479144 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-04-17 01:56:39.479151 | orchestrator | Thursday 17 April 2025 01:47:15 +0000 (0:00:00.466) 0:02:42.675 ******** 2025-04-17 01:56:39.479156 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.479163 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.479169 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.479175 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.479181 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.479187 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.479193 | orchestrator | 2025-04-17 01:56:39.479199 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-04-17 01:56:39.479205 | orchestrator | Thursday 17 April 2025 01:47:16 +0000 (0:00:00.637) 0:02:43.312 ******** 2025-04-17 01:56:39.479211 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.479217 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.479223 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.479229 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.479241 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.479247 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.479254 | orchestrator | 2025-04-17 01:56:39.479260 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-04-17 01:56:39.479266 | orchestrator | Thursday 17 April 2025 01:47:17 +0000 (0:00:00.635) 0:02:43.948 ******** 
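Annotation: this release ladder (jewel, kraken, luminous, mimic, nautilus, octopus, pacific, quincy) runs one set_fact per known release, and only the branch matching the major version obtained by "get ceph version" fires; with ceph-daemon 17.2.7 that is quincy, as the ok results a little further down confirm. A plausible shape for one rung of the ladder, assuming ceph_version holds the bare version string (e.g. "17.2.7") split out of the command output:

    - name: set_fact ceph_release quincy
      ansible.builtin.set_fact:
        ceph_release: quincy
      when: ceph_version.split('.')[0] | int == 17   # Ceph 17.x == Quincy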
2025-04-17 01:56:39.479272 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.479278 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.479284 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.479290 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.479296 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.479302 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.479308 | orchestrator | 2025-04-17 01:56:39.479315 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-04-17 01:56:39.479321 | orchestrator | Thursday 17 April 2025 01:47:18 +0000 (0:00:00.849) 0:02:44.797 ******** 2025-04-17 01:56:39.479327 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.479333 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.479342 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.479348 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.479354 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.479360 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.479366 | orchestrator | 2025-04-17 01:56:39.479373 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-04-17 01:56:39.479379 | orchestrator | Thursday 17 April 2025 01:47:18 +0000 (0:00:00.603) 0:02:45.400 ******** 2025-04-17 01:56:39.479385 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.479391 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.479397 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.479403 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.479409 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.479415 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.479421 | orchestrator | 2025-04-17 01:56:39.479427 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-17 01:56:39.479433 | orchestrator | Thursday 17 April 2025 01:47:19 +0000 (0:00:01.150) 0:02:46.551 ******** 2025-04-17 01:56:39.479440 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.479446 | orchestrator | 2025-04-17 01:56:39.479452 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-04-17 01:56:39.479458 | orchestrator | Thursday 17 April 2025 01:47:20 +0000 (0:00:01.157) 0:02:47.708 ******** 2025-04-17 01:56:39.479464 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-04-17 01:56:39.479470 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-04-17 01:56:39.479476 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-04-17 01:56:39.479482 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-04-17 01:56:39.479488 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-04-17 01:56:39.479498 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-04-17 01:56:39.479504 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-04-17 01:56:39.479510 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-04-17 01:56:39.479568 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-04-17 01:56:39.479577 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-04-17 01:56:39.479583 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-04-17 01:56:39.479590 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-04-17 01:56:39.479596 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-04-17 01:56:39.479602 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-04-17 01:56:39.479608 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-04-17 01:56:39.479619 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-04-17 01:56:39.479625 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-04-17 01:56:39.479631 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-04-17 01:56:39.479638 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-04-17 01:56:39.479644 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-04-17 01:56:39.479650 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-04-17 01:56:39.479656 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-04-17 01:56:39.479662 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-04-17 01:56:39.479668 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-04-17 01:56:39.479674 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-04-17 01:56:39.479680 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-04-17 01:56:39.479686 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-04-17 01:56:39.479692 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-04-17 01:56:39.479698 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-04-17 01:56:39.479705 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-04-17 01:56:39.479711 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-04-17 01:56:39.479717 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-04-17 01:56:39.479723 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-04-17 01:56:39.479729 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-04-17 01:56:39.479735 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-04-17 01:56:39.479741 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-04-17 01:56:39.479750 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-04-17 01:56:39.479757 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-04-17 01:56:39.479763 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-04-17 01:56:39.479769 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-17 01:56:39.479775 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-04-17 01:56:39.479781 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-04-17 01:56:39.479787 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-04-17 01:56:39.479793 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-17 01:56:39.479799 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-17 01:56:39.479805 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/ceph/bootstrap-mgr) 2025-04-17 01:56:39.479811 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-17 01:56:39.479817 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-17 01:56:39.479824 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-17 01:56:39.479830 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-17 01:56:39.479836 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-17 01:56:39.479842 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-17 01:56:39.479848 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-17 01:56:39.479854 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-17 01:56:39.479860 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-17 01:56:39.479866 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-17 01:56:39.479872 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-17 01:56:39.479882 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-17 01:56:39.479889 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-17 01:56:39.479895 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-17 01:56:39.479901 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-17 01:56:39.479907 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-17 01:56:39.479913 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-17 01:56:39.479920 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-17 01:56:39.479926 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-17 01:56:39.479932 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-17 01:56:39.479938 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-17 01:56:39.479989 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-17 01:56:39.480000 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-17 01:56:39.480007 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-17 01:56:39.480013 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-17 01:56:39.480020 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-17 01:56:39.480027 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-17 01:56:39.480033 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-17 01:56:39.480040 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-17 01:56:39.480047 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-17 01:56:39.480053 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-04-17 01:56:39.480060 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-17 01:56:39.480066 | orchestrator 
| changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-17 01:56:39.480073 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-04-17 01:56:39.480079 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-04-17 01:56:39.480086 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-04-17 01:56:39.480093 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-04-17 01:56:39.480099 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-04-17 01:56:39.480106 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-04-17 01:56:39.480112 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-04-17 01:56:39.480119 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-04-17 01:56:39.480125 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-04-17 01:56:39.480132 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-04-17 01:56:39.480138 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-04-17 01:56:39.480145 | orchestrator | 2025-04-17 01:56:39.480152 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-17 01:56:39.480158 | orchestrator | Thursday 17 April 2025 01:47:26 +0000 (0:00:05.658) 0:02:53.366 ******** 2025-04-17 01:56:39.480165 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.480171 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.480178 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.480185 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.480193 | orchestrator | 2025-04-17 01:56:39.480199 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-04-17 01:56:39.480217 | orchestrator | Thursday 17 April 2025 01:47:27 +0000 (0:00:01.281) 0:02:54.648 ******** 2025-04-17 01:56:39.480224 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-04-17 01:56:39.480231 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-04-17 01:56:39.480237 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-04-17 01:56:39.480244 | orchestrator | 2025-04-17 01:56:39.480251 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-04-17 01:56:39.480257 | orchestrator | Thursday 17 April 2025 01:47:28 +0000 (0:00:01.071) 0:02:55.719 ******** 2025-04-17 01:56:39.480264 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-04-17 01:56:39.480271 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-04-17 01:56:39.480277 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-04-17 01:56:39.480284 | orchestrator | 2025-04-17 01:56:39.480291 | orchestrator | TASK [ceph-config : reset num_osds] 
******************************************** 2025-04-17 01:56:39.480297 | orchestrator | Thursday 17 April 2025 01:47:30 +0000 (0:00:01.187) 0:02:56.906 ******** 2025-04-17 01:56:39.480304 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.480310 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.480317 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.480324 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.480331 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.480337 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.480344 | orchestrator | 2025-04-17 01:56:39.480351 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-17 01:56:39.480357 | orchestrator | Thursday 17 April 2025 01:47:31 +0000 (0:00:00.972) 0:02:57.878 ******** 2025-04-17 01:56:39.480364 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.480370 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.480377 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.480384 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.480390 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.480397 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.480403 | orchestrator | 2025-04-17 01:56:39.480410 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-17 01:56:39.480417 | orchestrator | Thursday 17 April 2025 01:47:31 +0000 (0:00:00.742) 0:02:58.621 ******** 2025-04-17 01:56:39.480424 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.480468 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.480478 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.480485 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.480492 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.480499 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.480505 | orchestrator | 2025-04-17 01:56:39.480512 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-17 01:56:39.480519 | orchestrator | Thursday 17 April 2025 01:47:32 +0000 (0:00:00.823) 0:02:59.445 ******** 2025-04-17 01:56:39.480526 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.480532 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.480551 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.480558 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.480564 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.480571 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.480577 | orchestrator | 2025-04-17 01:56:39.480583 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-17 01:56:39.480595 | orchestrator | Thursday 17 April 2025 01:47:33 +0000 (0:00:00.766) 0:03:00.211 ******** 2025-04-17 01:56:39.480601 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.480607 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.480613 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.480619 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.480626 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.480632 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.480638 | orchestrator | 2025-04-17 01:56:39.480644 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see 
how many osds are to be created] *** 2025-04-17 01:56:39.480650 | orchestrator | Thursday 17 April 2025 01:47:34 +0000 (0:00:00.932) 0:03:01.144 ******** 2025-04-17 01:56:39.480657 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.480663 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.480669 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.480675 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.480681 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.480687 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.480693 | orchestrator | 2025-04-17 01:56:39.480700 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-17 01:56:39.480706 | orchestrator | Thursday 17 April 2025 01:47:35 +0000 (0:00:00.677) 0:03:01.822 ******** 2025-04-17 01:56:39.480712 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.480718 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.480724 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.480734 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.480741 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.480747 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.480753 | orchestrator | 2025-04-17 01:56:39.480760 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-17 01:56:39.480766 | orchestrator | Thursday 17 April 2025 01:47:36 +0000 (0:00:01.088) 0:03:02.911 ******** 2025-04-17 01:56:39.480772 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.480778 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.480784 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.480790 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.480796 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.480803 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.480809 | orchestrator | 2025-04-17 01:56:39.480815 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-17 01:56:39.480821 | orchestrator | Thursday 17 April 2025 01:47:36 +0000 (0:00:00.641) 0:03:03.552 ******** 2025-04-17 01:56:39.480827 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.480833 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.480839 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.480845 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.480851 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.480857 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.480863 | orchestrator | 2025-04-17 01:56:39.480870 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-17 01:56:39.480876 | orchestrator | Thursday 17 April 2025 01:47:38 +0000 (0:00:01.906) 0:03:05.458 ******** 2025-04-17 01:56:39.480882 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.480888 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.480894 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.480900 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.480906 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.480912 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.480928 | orchestrator | 2025-04-17 01:56:39.480934 | orchestrator | 
TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-17 01:56:39.480940 | orchestrator | Thursday 17 April 2025 01:47:39 +0000 (0:00:00.546) 0:03:06.005 ******** 2025-04-17 01:56:39.480951 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-17 01:56:39.480958 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-17 01:56:39.480964 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.480970 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-17 01:56:39.480979 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-17 01:56:39.480985 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.480991 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-17 01:56:39.480998 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-17 01:56:39.481004 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.481010 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-17 01:56:39.481016 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-17 01:56:39.481022 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.481029 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-17 01:56:39.481035 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-17 01:56:39.481041 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.481047 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-17 01:56:39.481053 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-17 01:56:39.481059 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.481066 | orchestrator | 2025-04-17 01:56:39.481072 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-17 01:56:39.481117 | orchestrator | Thursday 17 April 2025 01:47:39 +0000 (0:00:00.717) 0:03:06.722 ******** 2025-04-17 01:56:39.481126 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-17 01:56:39.481135 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-17 01:56:39.481141 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.481147 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-17 01:56:39.481153 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-17 01:56:39.481159 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.481166 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-17 01:56:39.481172 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-17 01:56:39.481178 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.481184 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-04-17 01:56:39.481190 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-04-17 01:56:39.481196 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-04-17 01:56:39.481202 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-04-17 01:56:39.481208 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-04-17 01:56:39.481214 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-04-17 01:56:39.481220 | orchestrator | 2025-04-17 01:56:39.481226 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-17 01:56:39.481233 | orchestrator | Thursday 17 April 2025 01:47:40 +0000 (0:00:00.648) 
0:03:07.371 ******** 2025-04-17 01:56:39.481239 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.481245 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.481251 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.481257 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.481263 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.481269 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.481275 | orchestrator | 2025-04-17 01:56:39.481281 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-17 01:56:39.481287 | orchestrator | Thursday 17 April 2025 01:47:41 +0000 (0:00:00.767) 0:03:08.139 ******** 2025-04-17 01:56:39.481293 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.481300 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.481306 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.481312 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.481322 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.481328 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.481334 | orchestrator | 2025-04-17 01:56:39.481340 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-17 01:56:39.481346 | orchestrator | Thursday 17 April 2025 01:47:41 +0000 (0:00:00.552) 0:03:08.691 ******** 2025-04-17 01:56:39.481352 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.481358 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.481364 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.481370 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.481376 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.481382 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.481388 | orchestrator | 2025-04-17 01:56:39.481394 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-17 01:56:39.481401 | orchestrator | Thursday 17 April 2025 01:47:42 +0000 (0:00:00.701) 0:03:09.393 ******** 2025-04-17 01:56:39.481407 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.481413 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.481422 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.481428 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.481434 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.481440 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.481446 | orchestrator | 2025-04-17 01:56:39.481453 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-17 01:56:39.481459 | orchestrator | Thursday 17 April 2025 01:47:43 +0000 (0:00:00.607) 0:03:10.001 ******** 2025-04-17 01:56:39.481465 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.481471 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.481477 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.481483 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.481489 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.481495 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.481501 | orchestrator | 2025-04-17 01:56:39.481512 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-17 01:56:39.481518 | orchestrator | Thursday 
2025-04-17 01:56:39.481512 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-04-17 01:56:39.481518 | orchestrator | Thursday 17 April 2025 01:47:43 +0000 (0:00:00.707) 0:03:10.708 ********
2025-04-17 01:56:39.481524 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.481530 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.481536 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.481557 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.481564 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.481570 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.481576 | orchestrator |
2025-04-17 01:56:39.481582 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-04-17 01:56:39.481588 | orchestrator | Thursday 17 April 2025 01:47:44 +0000 (0:00:00.640) 0:03:11.349 ********
2025-04-17 01:56:39.481594 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-17 01:56:39.481600 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-17 01:56:39.481607 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-17 01:56:39.481613 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.481619 | orchestrator |
2025-04-17 01:56:39.481625 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-04-17 01:56:39.481631 | orchestrator | Thursday 17 April 2025 01:47:45 +0000 (0:00:00.506) 0:03:11.855 ********
2025-04-17 01:56:39.481637 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-17 01:56:39.481643 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-17 01:56:39.481649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-17 01:56:39.481655 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.481661 | orchestrator |
2025-04-17 01:56:39.481705 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-04-17 01:56:39.481719 | orchestrator | Thursday 17 April 2025 01:47:45 +0000 (0:00:00.693) 0:03:12.549 ********
2025-04-17 01:56:39.481725 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-17 01:56:39.481731 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-17 01:56:39.481737 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-17 01:56:39.481743 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.481749 | orchestrator |
2025-04-17 01:56:39.481755 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-17 01:56:39.481761 | orchestrator | Thursday 17 April 2025 01:47:46 +0000 (0:00:00.369) 0:03:12.918 ********
2025-04-17 01:56:39.481767 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.481774 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.481780 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.481786 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.481792 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.481798 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.481804 | orchestrator |
2025-04-17 01:56:39.481810 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-04-17 01:56:39.481816 | orchestrator | Thursday 17 April 2025 01:47:46 +0000 (0:00:00.602) 0:03:13.521 ********
2025-04-17 01:56:39.481822 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-04-17 01:56:39.481829 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.481835 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-04-17 01:56:39.481841 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.481847 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-04-17 01:56:39.481853 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.481859 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-04-17 01:56:39.481865 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-04-17 01:56:39.481871 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-04-17 01:56:39.481877 | orchestrator |
2025-04-17 01:56:39.481883 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-04-17 01:56:39.481889 | orchestrator | Thursday 17 April 2025 01:47:47 +0000 (0:00:01.195) 0:03:14.716 ********
2025-04-17 01:56:39.481895 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.481901 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.481907 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.481913 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.481919 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.481925 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.481931 | orchestrator |
2025-04-17 01:56:39.481938 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-17 01:56:39.481944 | orchestrator | Thursday 17 April 2025 01:47:48 +0000 (0:00:00.665) 0:03:15.381 ********
2025-04-17 01:56:39.481950 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.481956 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.481962 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.481968 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.481974 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.481980 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.481986 | orchestrator |
2025-04-17 01:56:39.481992 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-04-17 01:56:39.481998 | orchestrator | Thursday 17 April 2025 01:47:49 +0000 (0:00:00.918) 0:03:16.299 ********
2025-04-17 01:56:39.482004 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-04-17 01:56:39.482011 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.482033 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-04-17 01:56:39.482040 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.482046 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-04-17 01:56:39.482052 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.482058 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-04-17 01:56:39.482068 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.482074 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-04-17 01:56:39.482080 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.482095 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-04-17 01:56:39.482101 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.482107 | orchestrator |
2025-04-17 01:56:39.482114 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-04-17 01:56:39.482120 | orchestrator | Thursday 17 April 2025 01:47:50 +0000 (0:00:00.967) 0:03:17.267 ********
2025-04-17 01:56:39.482126 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.482132 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.482141 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.482147 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-04-17 01:56:39.482153 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.482160 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-04-17 01:56:39.482166 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.482172 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-04-17 01:56:39.482178 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.482184 | orchestrator |
2025-04-17 01:56:39.482190 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-04-17 01:56:39.482196 | orchestrator | Thursday 17 April 2025 01:47:51 +0000 (0:00:00.723) 0:03:17.990 ********
2025-04-17 01:56:39.482202 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-17 01:56:39.482208 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-17 01:56:39.482214 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-17 01:56:39.482220 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.482226 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-04-17 01:56:39.482272 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-04-17 01:56:39.482281 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-04-17 01:56:39.482287 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.482294 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-04-17 01:56:39.482300 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-04-17 01:56:39.482306 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-04-17 01:56:39.482312 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.482318 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-17 01:56:39.482324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-17 01:56:39.482330 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-04-17 01:56:39.482336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-17 01:56:39.482342 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.482348 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-04-17 01:56:39.482354 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-04-17 01:56:39.482360 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-04-17 01:56:39.482366 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-04-17 01:56:39.482372 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.482379 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-04-17 01:56:39.482385 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.482391 | orchestrator |
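For reference, the rgw_instances entries assembled by the tasks above have exactly the shape shown in the skipped items: one dict per RGW instance on a host. As an Ansible sketch (values copied from this run's log output):

    # Shape of a single rgw_instances entry as seen in the skipped
    # items above; one list element per RGW instance on the host.
    - name: set_fact rgw_instances (sketch)
      ansible.builtin.set_fact:
        rgw_instances:
          - instance_name: rgw0
            radosgw_address: 192.168.16.13
            radosgw_frontend_port: 8081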
2025-04-17 01:56:39.482397 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-04-17 01:56:39.482403 | orchestrator | Thursday 17 April 2025 01:47:52 +0000 (0:00:01.194) 0:03:19.185 ********
2025-04-17 01:56:39.482413 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:56:39.482420 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:56:39.482426 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.482432 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:56:39.482438 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.482444 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.482449 | orchestrator |
2025-04-17 01:56:39.482455 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-04-17 01:56:39.482462 | orchestrator | Thursday 17 April 2025 01:47:56 +0000 (0:00:03.832) 0:03:23.017 ********
2025-04-17 01:56:39.482467 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:56:39.482473 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:56:39.482479 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:56:39.482485 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.482491 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.482497 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.482503 | orchestrator |
2025-04-17 01:56:39.482509 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] **********************************
2025-04-17 01:56:39.482515 | orchestrator | Thursday 17 April 2025 01:47:57 +0000 (0:00:00.968) 0:03:23.986 ********
2025-04-17 01:56:39.482521 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.482527 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.482533 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.482576 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 01:56:39.482583 | orchestrator |
2025-04-17 01:56:39.482590 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ********
2025-04-17 01:56:39.482596 | orchestrator | Thursday 17 April 2025 01:47:58 +0000 (0:00:00.886) 0:03:24.873 ********
2025-04-17 01:56:39.482602 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:56:39.482608 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:56:39.482614 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:56:39.482620 | orchestrator |
2025-04-17 01:56:39.482626 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] *******************
2025-04-17 01:56:39.482633 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-17 01:56:39.482639 | orchestrator |
2025-04-17 01:56:39.482645 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] ***********************
2025-04-17 01:56:39.482654 | orchestrator | Thursday 17 April 2025 01:47:58 +0000 (0:00:00.849) 0:03:25.722 ********
2025-04-17 01:56:39.482661 | orchestrator |
2025-04-17 01:56:39.482667 | orchestrator | TASK [ceph-handler : copy mon restart script] **********************************
2025-04-17 01:56:39.482673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-17 01:56:39.482679 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-17 01:56:39.482685 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-17 01:56:39.482691 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.482697 | orchestrator |
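The "generate ceph.conf configuration file" task above templated /etc/ceph/ceph.conf on all six nodes. A minimal sketch of what such a rendered file can contain; the fsid and monitor addresses below are illustrative placeholders, not values from this run:

    # Illustrative only: a minimal rendered ceph.conf written via a
    # copy task; the real role renders a far richer Jinja2 template.
    - name: write a minimal ceph.conf (sketch)
      ansible.builtin.copy:
        dest: /etc/ceph/ceph.conf
        content: |
          [global]
          fsid = 00000000-0000-0000-0000-000000000000
          mon host = 192.168.16.10,192.168.16.11,192.168.16.12
          public network = 192.168.16.0/20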
2025-04-17 01:56:39.482703 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] ***********************
2025-04-17 01:56:39.482709 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:56:39.482715 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:56:39.482721 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:56:39.482727 | orchestrator |
2025-04-17 01:56:39.482733 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ********************
2025-04-17 01:56:39.482739 | orchestrator | Thursday 17 April 2025 01:48:00 +0000 (0:00:01.329) 0:03:27.051 ********
2025-04-17 01:56:39.482745 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-17 01:56:39.482754 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-17 01:56:39.482761 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-17 01:56:39.482767 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.482772 | orchestrator |
2025-04-17 01:56:39.482778 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] *********
2025-04-17 01:56:39.482788 | orchestrator | Thursday 17 April 2025 01:48:01 +0000 (0:00:00.907) 0:03:27.959 ********
2025-04-17 01:56:39.482794 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:56:39.482800 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:56:39.482805 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:56:39.482811 | orchestrator |
2025-04-17 01:56:39.482817 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ********************
2025-04-17 01:56:39.482862 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.482870 | orchestrator |
2025-04-17 01:56:39.482876 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] **********************************
2025-04-17 01:56:39.482882 | orchestrator | Thursday 17 April 2025 01:48:01 +0000 (0:00:00.604) 0:03:28.564 ********
2025-04-17 01:56:39.482888 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.482894 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.482899 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.482905 | orchestrator |
2025-04-17 01:56:39.482911 | orchestrator | TASK [ceph-handler : osds handler] *********************************************
2025-04-17 01:56:39.482917 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.482922 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.482928 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.482934 | orchestrator |
2025-04-17 01:56:39.482939 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] **********************************
2025-04-17 01:56:39.482945 | orchestrator | Thursday 17 April 2025 01:48:02 +0000 (0:00:00.531) 0:03:29.095 ********
2025-04-17 01:56:39.482951 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.482957 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.482963 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.482968 | orchestrator |
2025-04-17 01:56:39.482974 | orchestrator | TASK [ceph-handler : mdss handler] *********************************************
2025-04-17 01:56:39.482980 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.482985 | orchestrator |
2025-04-17 01:56:39.482991 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] **********************************
2025-04-17 01:56:39.482997 | orchestrator | Thursday 17 April 2025 01:48:03 +0000 (0:00:00.823) 0:03:29.918 ********
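The handler blocks above all follow one pattern: record a "called" fact, copy a restart script into the tempdir, run it only when the daemon is up and the play actually changed something, then reset the fact. A condensed sketch of that pattern, with the template name taken from the log's mon handler and the when-conditions simplified:

    # Condensed sketch of the ceph-handler restart pattern; variable
    # names mirror the log, the guard conditions are simplified.
    - name: copy mon restart script (sketch)
      ansible.builtin.template:
        src: restart_mon_daemon.sh.j2
        dest: "{{ tmpdirpath.path }}/restart_mon_daemon.sh"
        mode: "0750"

    - name: restart ceph mon daemon(s) (sketch)
      ansible.builtin.command: "{{ tmpdirpath.path }}/restart_mon_daemon.sh"
      when:
        - handler_mon_status | default(false) | bool
        - _mon_handler_called | default(false) | bool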
2025-04-17 01:56:39.483003 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.483008 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.483014 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.483020 | orchestrator |
2025-04-17 01:56:39.483026 | orchestrator | TASK [ceph-handler : rgws handler] *********************************************
2025-04-17 01:56:39.483031 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.483041 | orchestrator |
2025-04-17 01:56:39.483046 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] **************
2025-04-17 01:56:39.483052 | orchestrator | Thursday 17 April 2025 01:48:03 +0000 (0:00:00.727) 0:03:30.645 ********
2025-04-17 01:56:39.483058 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.483064 | orchestrator |
2025-04-17 01:56:39.483069 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] ****************************
2025-04-17 01:56:39.483075 | orchestrator | Thursday 17 April 2025 01:48:04 +0000 (0:00:00.129) 0:03:30.775 ********
2025-04-17 01:56:39.483081 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.483087 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.483092 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.483098 | orchestrator |
2025-04-17 01:56:39.483104 | orchestrator | TASK [ceph-handler : rbdmirrors handler] ***************************************
2025-04-17 01:56:39.483110 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.483115 | orchestrator |
2025-04-17 01:56:39.483121 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] **********************************
2025-04-17 01:56:39.483127 | orchestrator | Thursday 17 April 2025 01:48:04 +0000 (0:00:00.975) 0:03:31.750 ********
2025-04-17 01:56:39.483133 | orchestrator |
2025-04-17 01:56:39.483138 | orchestrator | TASK [ceph-handler : mgrs handler] *********************************************
2025-04-17 01:56:39.483153 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.483164 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 01:56:39.483170 | orchestrator |
2025-04-17 01:56:39.483175 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ********
2025-04-17 01:56:39.483181 | orchestrator | Thursday 17 April 2025 01:48:05 +0000 (0:00:00.765) 0:03:32.516 ********
2025-04-17 01:56:39.483187 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:56:39.483193 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:56:39.483199 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:56:39.483204 | orchestrator |
2025-04-17 01:56:39.483210 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] *******************
2025-04-17 01:56:39.483216 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-17 01:56:39.483222 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-17 01:56:39.483227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-17 01:56:39.483233 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.483239 | orchestrator |
2025-04-17 01:56:39.483245 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
2025-04-17 01:56:39.483250 | orchestrator | Thursday 17 April 2025 01:48:06 +0000 (0:00:01.117) 0:03:33.633 ********
2025-04-17 01:56:39.483256 | orchestrator |
2025-04-17 01:56:39.483262 | orchestrator | TASK [ceph-handler : copy mgr restart script] **********************************
2025-04-17 01:56:39.483267 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.483273 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.483279 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.483285 | orchestrator |
2025-04-17 01:56:39.483293 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
2025-04-17 01:56:39.483299 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:56:39.483305 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:56:39.483310 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:56:39.483316 | orchestrator |
2025-04-17 01:56:39.483322 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ********************
2025-04-17 01:56:39.483328 | orchestrator | Thursday 17 April 2025 01:48:08 +0000 (0:00:01.211) 0:03:34.845 ********
2025-04-17 01:56:39.483333 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-17 01:56:39.483339 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-17 01:56:39.483345 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-17 01:56:39.483350 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.483356 | orchestrator |
2025-04-17 01:56:39.483362 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] *********
2025-04-17 01:56:39.483368 | orchestrator | Thursday 17 April 2025 01:48:08 +0000 (0:00:00.852) 0:03:35.698 ********
2025-04-17 01:56:39.483373 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:56:39.483379 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:56:39.483385 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:56:39.483391 | orchestrator |
2025-04-17 01:56:39.483434 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ********************
2025-04-17 01:56:39.483443 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.483449 | orchestrator |
2025-04-17 01:56:39.483454 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] **********************************
2025-04-17 01:56:39.483460 | orchestrator | Thursday 17 April 2025 01:48:10 +0000 (0:00:01.302) 0:03:37.000 ********
2025-04-17 01:56:39.483466 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-17 01:56:39.483472 | orchestrator |
2025-04-17 01:56:39.483477 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ******
2025-04-17 01:56:39.483483 | orchestrator | Thursday 17 April 2025 01:48:10 +0000 (0:00:00.586) 0:03:37.586 ********
2025-04-17 01:56:39.483489 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.483494 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.483500 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.483513 | orchestrator |
2025-04-17 01:56:39.483519 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] *****************
2025-04-17 01:56:39.483525 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.483531 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.483536 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.483557 | orchestrator |
2025-04-17 01:56:39.483563 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] ***********************
2025-04-17 01:56:39.483569 | orchestrator | Thursday 17 April 2025 01:48:11 +0000 (0:00:00.940) 0:03:38.526 ********
2025-04-17 01:56:39.483575 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.483581 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.483586 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.483592 | orchestrator |
2025-04-17 01:56:39.483598 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-04-17 01:56:39.483603 | orchestrator | Thursday 17 April 2025 01:48:12 +0000 (0:00:01.124) 0:03:39.651 ********
2025-04-17 01:56:39.483609 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:56:39.483615 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:56:39.483621 | orchestrator |
2025-04-17 01:56:39.483626 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] *******************************
2025-04-17 01:56:39.483632 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-17 01:56:39.483638 | orchestrator |
2025-04-17 01:56:39.483644 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-04-17 01:56:39.483649 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:56:39.483655 | orchestrator |
2025-04-17 01:56:39.483661 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] *******************************
2025-04-17 01:56:39.483666 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-17 01:56:39.483672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-17 01:56:39.483678 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.483683 | orchestrator |
2025-04-17 01:56:39.483689 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] *********
2025-04-17 01:56:39.483695 | orchestrator | Thursday 17 April 2025 01:48:14 +0000 (0:00:01.218) 0:03:40.870 ********
2025-04-17 01:56:39.483701 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.483706 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.483712 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.483718 | orchestrator |
2025-04-17 01:56:39.483723 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] **********************************
2025-04-17 01:56:39.483729 | orchestrator | Thursday 17 April 2025 01:48:14 +0000 (0:00:00.817) 0:03:41.688 ********
2025-04-17 01:56:39.483735 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-17 01:56:39.483741 | orchestrator |
2025-04-17 01:56:39.483747 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ********
2025-04-17 01:56:39.483753 | orchestrator | Thursday 17 April 2025 01:48:15 +0000 (0:00:00.480) 0:03:42.168 ********
2025-04-17 01:56:39.483758 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.483764 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.483770 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.483775 | orchestrator |
2025-04-17 01:56:39.483781 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] ***********************
2025-04-17 01:56:39.483787 | orchestrator | Thursday 17 April 2025 01:48:15 +0000 (0:00:00.421) 0:03:42.589 ********
2025-04-17 01:56:39.483793 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.483798 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.483804 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.483810 | orchestrator |
2025-04-17 01:56:39.483815 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ********************
2025-04-17 01:56:39.483821 | orchestrator | Thursday 17 April 2025 01:48:16 +0000 (0:00:01.100) 0:03:43.689 ********
2025-04-17 01:56:39.483827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-17 01:56:39.483833 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-17 01:56:39.483843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-17 01:56:39.483849 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.483854 | orchestrator |
2025-04-17 01:56:39.483860 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] *********
2025-04-17 01:56:39.483866 | orchestrator | Thursday 17 April 2025 01:48:17 +0000 (0:00:00.546) 0:03:44.236 ********
2025-04-17 01:56:39.483872 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.483877 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.483883 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.483889 | orchestrator |
2025-04-17 01:56:39.483894 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] ****************************
2025-04-17 01:56:39.483900 | orchestrator | Thursday 17 April 2025 01:48:17 +0000 (0:00:00.291) 0:03:44.528 ********
2025-04-17 01:56:39.483906 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.483911 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.483917 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.483923 | orchestrator |
2025-04-17 01:56:39.483932 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] **********************************
2025-04-17 01:56:39.483937 | orchestrator | Thursday 17 April 2025 01:48:18 +0000 (0:00:00.323) 0:03:44.852 ********
2025-04-17 01:56:39.483943 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.483949 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.483996 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.484005 | orchestrator |
2025-04-17 01:56:39.484011 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ******
2025-04-17 01:56:39.484017 | orchestrator | Thursday 17 April 2025 01:48:18 +0000 (0:00:00.567) 0:03:45.419 ********
2025-04-17 01:56:39.484023 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.484029 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.484035 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.484040 | orchestrator |
2025-04-17 01:56:39.484046 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-04-17 01:56:39.484052 | orchestrator | Thursday 17 April 2025 01:48:19 +0000 (0:00:00.358) 0:03:45.778 ********
2025-04-17 01:56:39.484058 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.484063 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.484069 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.484075 | orchestrator |
2025-04-17 01:56:39.484081 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-04-17 01:56:39.484086 | orchestrator |
2025-04-17 01:56:39.484092 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-04-17 01:56:39.484098 | orchestrator | Thursday 17 April 2025 01:48:21 +0000 (0:00:02.157) 0:03:47.935 ********
2025-04-17 01:56:39.484104 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 01:56:39.484110 | orchestrator |
2025-04-17 01:56:39.484116 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-04-17 01:56:39.484122 | orchestrator | Thursday 17 April 2025 01:48:21 +0000 (0:00:00.579) 0:03:48.514 ********
2025-04-17 01:56:39.484128 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:56:39.484133 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:56:39.484139 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:56:39.484145 | orchestrator |
2025-04-17 01:56:39.484151 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-04-17 01:56:39.484156 | orchestrator | Thursday 17 April 2025 01:48:22 +0000 (0:00:00.702) 0:03:49.217 ********
2025-04-17 01:56:39.484162 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.484168 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.484174 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.484179 | orchestrator |
2025-04-17 01:56:39.484185 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-04-17 01:56:39.484191 | orchestrator | Thursday 17 April 2025 01:48:22 +0000 (0:00:00.242) 0:03:49.459 ********
2025-04-17 01:56:39.484202 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.484208 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.484214 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.484220 | orchestrator |
2025-04-17 01:56:39.484226 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-04-17 01:56:39.484231 | orchestrator | Thursday 17 April 2025 01:48:23 +0000 (0:00:00.362) 0:03:49.822 ********
2025-04-17 01:56:39.484237 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.484243 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.484248 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.484254 | orchestrator |
2025-04-17 01:56:39.484260 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-04-17 01:56:39.484266 | orchestrator | Thursday 17 April 2025 01:48:23 +0000 (0:00:00.265) 0:03:50.088 ********
2025-04-17 01:56:39.484272 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:56:39.484277 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:56:39.484283 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:56:39.484289 | orchestrator |
2025-04-17 01:56:39.484295 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-04-17 01:56:39.484300 | orchestrator | Thursday 17 April 2025 01:48:24 +0000 (0:00:00.747) 0:03:50.835 ********
2025-04-17 01:56:39.484306 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.484312 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.484318 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.484324 | orchestrator |
2025-04-17 01:56:39.484329 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-04-17 01:56:39.484335 | orchestrator | Thursday 17 April 2025 01:48:24 +0000 (0:00:00.468) 0:03:51.303 ********
2025-04-17 01:56:39.484341 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.484346 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.484352 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.484368 | orchestrator |
2025-04-17 01:56:39.484374 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-04-17 01:56:39.484379 | orchestrator | Thursday 17 April 2025 01:48:24 +0000 (0:00:00.330) 0:03:51.634 ********
2025-04-17 01:56:39.484385 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.484391 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.484396 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.484402 | orchestrator |
2025-04-17 01:56:39.484408 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-04-17 01:56:39.484414 | orchestrator | Thursday 17 April 2025 01:48:25 +0000 (0:00:00.363) 0:03:51.997 ********
2025-04-17 01:56:39.484419 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.484425 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.484431 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.484436 | orchestrator |
2025-04-17 01:56:39.484442 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-04-17 01:56:39.484448 | orchestrator | Thursday 17 April 2025 01:48:25 +0000 (0:00:00.305) 0:03:52.303 ********
2025-04-17 01:56:39.484454 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.484459 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.484465 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.484471 | orchestrator |
2025-04-17 01:56:39.484476 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-04-17 01:56:39.484482 | orchestrator | Thursday 17 April 2025 01:48:26 +0000 (0:00:00.557) 0:03:52.861 ********
2025-04-17 01:56:39.484488 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:56:39.484494 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:56:39.484499 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:56:39.484505 | orchestrator |
2025-04-17 01:56:39.484511 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-04-17 01:56:39.484569 | orchestrator | Thursday 17 April 2025 01:48:26 +0000 (0:00:00.719) 0:03:53.581 ********
2025-04-17 01:56:39.484579 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.484589 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.484595 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.484601 | orchestrator |
2025-04-17 01:56:39.484607 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-04-17 01:56:39.484613 | orchestrator | Thursday 17 April 2025 01:48:27 +0000 (0:00:00.274) 0:03:53.855 ********
2025-04-17 01:56:39.484618 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:56:39.484624 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:56:39.484630 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:56:39.484636 | orchestrator |
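The "check for a ... container" tasks above feed the handler_*_status facts that later gate the restart handlers. A sketch of that pairing, assuming a podman/docker binary behind a container_binary variable; the filter string and register name are illustrative, not verified against the role:

    # Illustrative pairing of a container liveness probe with the
    # fact that gates the mon restart handler.
    - name: check for a mon container (sketch)
      ansible.builtin.command: >-
        {{ container_binary }} ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}
      register: ceph_mon_container_stat
      changed_when: false
      failed_when: false

    - name: set_fact handler_mon_status (sketch)
      ansible.builtin.set_fact:
        handler_mon_status: "{{ (ceph_mon_container_stat.stdout | default('')) | length > 0 }}"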
2025-04-17 01:56:39.484642 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-04-17 01:56:39.484647 | orchestrator | Thursday 17 April 2025 01:48:27 +0000 (0:00:00.300) 0:03:54.156 ********
2025-04-17 01:56:39.484653 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.484659 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.484665 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.484671 | orchestrator |
2025-04-17 01:56:39.484676 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-04-17 01:56:39.484682 | orchestrator | Thursday 17 April 2025 01:48:27 +0000 (0:00:00.537) 0:03:54.693 ********
2025-04-17 01:56:39.484688 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.484697 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.484703 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.484709 | orchestrator |
2025-04-17 01:56:39.484715 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-04-17 01:56:39.484721 | orchestrator | Thursday 17 April 2025 01:48:28 +0000 (0:00:00.334) 0:03:55.028 ********
2025-04-17 01:56:39.484726 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.484732 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.484738 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.484744 | orchestrator |
2025-04-17 01:56:39.484750 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-04-17 01:56:39.484755 | orchestrator | Thursday 17 April 2025 01:48:28 +0000 (0:00:00.317) 0:03:55.345 ********
2025-04-17 01:56:39.484761 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.484767 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.484773 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.484779 | orchestrator |
2025-04-17 01:56:39.484785 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-04-17 01:56:39.484790 | orchestrator | Thursday 17 April 2025 01:48:28 +0000 (0:00:00.309) 0:03:55.655 ********
2025-04-17 01:56:39.484796 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.484802 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.484808 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.484814 | orchestrator |
2025-04-17 01:56:39.484819 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-04-17 01:56:39.484825 | orchestrator | Thursday 17 April 2025 01:48:29 +0000 (0:00:00.575) 0:03:56.231 ********
2025-04-17 01:56:39.484831 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:56:39.484837 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:56:39.484843 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:56:39.484848 | orchestrator |
2025-04-17 01:56:39.484854 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-04-17 01:56:39.484860 | orchestrator | Thursday 17 April 2025 01:48:29 +0000 (0:00:00.373) 0:03:56.604 ********
2025-04-17 01:56:39.484866 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:56:39.484872 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:56:39.484878 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:56:39.484883 | orchestrator |
2025-04-17 01:56:39.484889 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-04-17 01:56:39.484895 | orchestrator | Thursday 17 April 2025 01:48:30 +0000 (0:00:00.388) 0:03:56.993 ********
2025-04-17 01:56:39.484901 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.484907 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.484916 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.484922 | orchestrator |
2025-04-17 01:56:39.484928 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-04-17 01:56:39.484934 | orchestrator | Thursday 17 April 2025 01:48:30 +0000 (0:00:00.534) 0:03:57.528 ********
2025-04-17 01:56:39.484939 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.484945 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.484951 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.484957 | orchestrator |
2025-04-17 01:56:39.484962 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-04-17 01:56:39.484968 | orchestrator | Thursday 17 April 2025 01:48:31 +0000 (0:00:00.380) 0:03:58.193 ********
2025-04-17 01:56:39.484974 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.484980 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.484985 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.484991 | orchestrator |
2025-04-17 01:56:39.484997 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-04-17 01:56:39.485003 | orchestrator | Thursday 17 April 2025 01:48:31 +0000 (0:00:00.369) 0:03:58.574 ********
2025-04-17 01:56:39.485008 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485014 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485020 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485026 | orchestrator |
2025-04-17 01:56:39.485031 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-04-17 01:56:39.485037 | orchestrator | Thursday 17 April 2025 01:48:32 +0000 (0:00:00.369) 0:03:58.943 ********
2025-04-17 01:56:39.485043 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485049 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485055 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485060 | orchestrator |
2025-04-17 01:56:39.485066 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-04-17 01:56:39.485072 | orchestrator | Thursday 17 April 2025 01:48:32 +0000 (0:00:00.374) 0:03:59.318 ********
2025-04-17 01:56:39.485078 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485084 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485089 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485095 | orchestrator |
2025-04-17 01:56:39.485101 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-04-17 01:56:39.485140 | orchestrator | Thursday 17 April 2025 01:48:33 +0000 (0:00:00.577) 0:03:59.896 ********
2025-04-17 01:56:39.485148 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485154 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485160 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485166 | orchestrator |
2025-04-17 01:56:39.485172 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-04-17 01:56:39.485178 | orchestrator | Thursday 17 April 2025 01:48:33 +0000 (0:00:00.310) 0:04:00.374 ********
2025-04-17 01:56:39.485184 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485189 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485195 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485201 | orchestrator |
2025-04-17 01:56:39.485207 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-04-17 01:56:39.485216 | orchestrator | Thursday 17 April 2025 01:48:33 +0000 (0:00:00.310) 0:04:00.684 ********
2025-04-17 01:56:39.485222 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485228 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485233 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485239 | orchestrator |
2025-04-17 01:56:39.485245 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-04-17 01:56:39.485251 | orchestrator | Thursday 17 April 2025 01:48:34 +0000 (0:00:00.323) 0:04:01.008 ********
2025-04-17 01:56:39.485256 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485266 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485272 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485278 | orchestrator |
2025-04-17 01:56:39.485284 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-04-17 01:56:39.485290 | orchestrator | Thursday 17 April 2025 01:48:34 +0000 (0:00:00.444) 0:04:01.452 ********
2025-04-17 01:56:39.485295 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485301 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485307 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485312 | orchestrator |
2025-04-17 01:56:39.485318 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-04-17 01:56:39.485324 | orchestrator | Thursday 17 April 2025 01:48:34 +0000 (0:00:00.268) 0:04:01.721 ********
2025-04-17 01:56:39.485330 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485339 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485345 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485350 | orchestrator |
2025-04-17 01:56:39.485356 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-04-17 01:56:39.485362 | orchestrator | Thursday 17 April 2025 01:48:35 +0000 (0:00:00.241) 0:04:01.963 ********
2025-04-17 01:56:39.485368 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-04-17 01:56:39.485374 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-04-17 01:56:39.485379 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-04-17 01:56:39.485385 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-04-17 01:56:39.485391 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485397 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485403 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-04-17 01:56:39.485408 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-04-17 01:56:39.485414 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485420 | orchestrator |
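On OSD hosts, the two num_osds tasks above parse the JSON report from ceph-volume; here they are skipped because testbed-node-0..2 carry no OSDs. A sketch of that counting step, with hypothetical device names; the exact JSON structure varies between ceph-volume releases, so the length-based count is an assumption:

    # Illustrative only: ask ceph-volume how many OSDs a batch run
    # would create, then count the entries of the JSON report.
    - name: run 'ceph-volume lvm batch --report' (sketch)
      ansible.builtin.command: >-
        ceph-volume lvm batch --report --format json /dev/vdb /dev/vdc
      register: lvm_batch_report
      changed_when: false

    - name: set_fact num_osds from the report (sketch)
      ansible.builtin.set_fact:
        num_osds: "{{ (lvm_batch_report.stdout | from_json) | length }}"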
2025-04-17 01:56:39.485426 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-04-17 01:56:39.485431 | orchestrator | Thursday 17 April 2025 01:48:35 +0000 (0:00:00.487) 0:04:02.283 ********
2025-04-17 01:56:39.485437 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
2025-04-17 01:56:39.485443 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
2025-04-17 01:56:39.485449 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485455 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
2025-04-17 01:56:39.485460 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
2025-04-17 01:56:39.485466 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485472 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
2025-04-17 01:56:39.485478 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
2025-04-17 01:56:39.485483 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485489 | orchestrator |
2025-04-17 01:56:39.485495 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-04-17 01:56:39.485501 | orchestrator | Thursday 17 April 2025 01:48:36 +0000 (0:00:00.347) 0:04:02.771 ********
2025-04-17 01:56:39.485506 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485512 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485518 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485524 | orchestrator |
2025-04-17 01:56:39.485529 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-04-17 01:56:39.485535 | orchestrator | Thursday 17 April 2025 01:48:36 +0000 (0:00:00.291) 0:04:03.119 ********
2025-04-17 01:56:39.485572 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485579 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485584 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485599 | orchestrator |
2025-04-17 01:56:39.485605 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-04-17 01:56:39.485615 | orchestrator | Thursday 17 April 2025 01:48:36 +0000 (0:00:00.284) 0:04:03.410 ********
2025-04-17 01:56:39.485621 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485627 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485632 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485638 | orchestrator |
2025-04-17 01:56:39.485644 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-04-17 01:56:39.485650 | orchestrator | Thursday 17 April 2025 01:48:36 +0000 (0:00:00.284) 0:04:03.695 ********
2025-04-17 01:56:39.485656 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485662 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485667 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485673 | orchestrator |
2025-04-17 01:56:39.485719 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-04-17 01:56:39.485728 | orchestrator | Thursday 17 April 2025 01:48:37 +0000 (0:00:00.449) 0:04:04.144 ********
2025-04-17 01:56:39.485733 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485739 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485745 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485751 | orchestrator |
2025-04-17 01:56:39.485756 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-04-17 01:56:39.485762 | orchestrator | Thursday 17 April 2025 01:48:37 +0000 (0:00:00.266) 0:04:04.410 ********
2025-04-17 01:56:39.485768 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485774 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485780 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485785 | orchestrator |
2025-04-17 01:56:39.485791 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-04-17 01:56:39.485797 | orchestrator | Thursday 17 April 2025 01:48:37 +0000 (0:00:00.311) 0:04:04.722 ********
2025-04-17 01:56:39.485803 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-17 01:56:39.485809 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-17 01:56:39.485814 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-17 01:56:39.485820 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485826 | orchestrator |
2025-04-17 01:56:39.485831 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-04-17 01:56:39.485837 | orchestrator | Thursday 17 April 2025 01:48:38 +0000 (0:00:00.371) 0:04:05.093 ********
2025-04-17 01:56:39.485843 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-17 01:56:39.485849 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-17 01:56:39.485854 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-17 01:56:39.485860 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485866 | orchestrator |
2025-04-17 01:56:39.485872 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-04-17 01:56:39.485877 | orchestrator | Thursday 17 April 2025 01:48:38 +0000 (0:00:00.551) 0:04:05.645 ********
2025-04-17 01:56:39.485883 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-17 01:56:39.485889 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-17 01:56:39.485895 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-17 01:56:39.485901 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485906 | orchestrator |
2025-04-17 01:56:39.485912 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-17 01:56:39.485918 | orchestrator | Thursday 17 April 2025 01:48:39 +0000 (0:00:00.762) 0:04:06.407 ********
2025-04-17 01:56:39.485924 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485930 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485935 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485941 | orchestrator |
2025-04-17 01:56:39.485947 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-04-17 01:56:39.485953 | orchestrator | Thursday 17 April 2025 01:48:39 +0000 (0:00:00.355) 0:04:06.762 ********
2025-04-17 01:56:39.485963 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-04-17 01:56:39.485969 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.485975 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-04-17 01:56:39.485981 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.485986 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-04-17 01:56:39.485992 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.485998 | orchestrator |
2025-04-17 01:56:39.486004 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-04-17 01:56:39.486027 | orchestrator | Thursday 17 April 2025 01:48:40 +0000 (0:00:00.551) 0:04:07.314 ********
2025-04-17 01:56:39.486035 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.486040 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.486046 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.486052 | orchestrator |
2025-04-17 01:56:39.486057 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-17 01:56:39.486062 | orchestrator | Thursday 17 April 2025 01:48:40 +0000 (0:00:00.272) 0:04:07.587 ********
2025-04-17 01:56:39.486068 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.486073 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.486078 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.486083 | orchestrator |
2025-04-17 01:56:39.486088 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-04-17 01:56:39.486094 | orchestrator | Thursday 17 April 2025 01:48:41 +0000 (0:00:00.450) 0:04:08.037 ********
2025-04-17 01:56:39.486099 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-04-17 01:56:39.486104 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.486110 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-04-17 01:56:39.486115 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.486120 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-04-17 01:56:39.486125 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.486130 | orchestrator |
2025-04-17 01:56:39.486136 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-04-17 01:56:39.486141 | orchestrator | Thursday 17 April 2025 01:48:41 +0000 (0:00:00.385) 0:04:08.423 ********
2025-04-17 01:56:39.486146 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.486151 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.486157 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.486162 | orchestrator |
2025-04-17 01:56:39.486167 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-04-17 01:56:39.486172 | orchestrator | Thursday 17 April 2025 01:48:41 +0000 (0:00:00.303) 0:04:08.726 ********
2025-04-17 01:56:39.486177 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-17 01:56:39.486183 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-17 01:56:39.486188 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-17 01:56:39.486193 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.486212 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-04-17 01:56:39.486218 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-04-17 01:56:39.486223 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-04-17 01:56:39.486228 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-04-17 01:56:39.486236 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-04-17 01:56:39.486242 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.486247 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-04-17 01:56:39.486252 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.486258 | orchestrator |
2025-04-17 01:56:39.486280 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
01:56:39.486286 | orchestrator | Thursday 17 April 2025 01:48:42 +0000 (0:00:00.636) 0:04:09.363 ******** 2025-04-17 01:56:39.486296 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.486306 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.486311 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.486316 | orchestrator | 2025-04-17 01:56:39.486322 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-17 01:56:39.486327 | orchestrator | Thursday 17 April 2025 01:48:43 +0000 (0:00:00.458) 0:04:09.822 ******** 2025-04-17 01:56:39.486332 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.486338 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.486343 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.486348 | orchestrator | 2025-04-17 01:56:39.486354 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-17 01:56:39.486360 | orchestrator | Thursday 17 April 2025 01:48:43 +0000 (0:00:00.615) 0:04:10.437 ******** 2025-04-17 01:56:39.486366 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.486372 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.486377 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.486383 | orchestrator | 2025-04-17 01:56:39.486389 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-17 01:56:39.486395 | orchestrator | Thursday 17 April 2025 01:48:44 +0000 (0:00:00.508) 0:04:10.946 ******** 2025-04-17 01:56:39.486401 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.486407 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.486413 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.486418 | orchestrator | 2025-04-17 01:56:39.486424 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-04-17 01:56:39.486430 | orchestrator | Thursday 17 April 2025 01:48:44 +0000 (0:00:00.625) 0:04:11.572 ******** 2025-04-17 01:56:39.486436 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.486442 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.486448 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.486454 | orchestrator | 2025-04-17 01:56:39.486460 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-04-17 01:56:39.486465 | orchestrator | Thursday 17 April 2025 01:48:45 +0000 (0:00:00.303) 0:04:11.875 ******** 2025-04-17 01:56:39.486471 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:56:39.486477 | orchestrator | 2025-04-17 01:56:39.486483 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-04-17 01:56:39.486489 | orchestrator | Thursday 17 April 2025 01:48:45 +0000 (0:00:00.525) 0:04:12.401 ******** 2025-04-17 01:56:39.486497 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.486503 | orchestrator | 2025-04-17 01:56:39.486509 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-04-17 01:56:39.486515 | orchestrator | Thursday 17 April 2025 01:48:45 +0000 (0:00:00.136) 0:04:12.537 ******** 2025-04-17 01:56:39.486521 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-04-17 01:56:39.486527 | orchestrator | 2025-04-17 01:56:39.486532 | orchestrator | 
TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-04-17 01:56:39.486550 | orchestrator | Thursday 17 April 2025 01:48:46 +0000 (0:00:00.708) 0:04:13.245 ******** 2025-04-17 01:56:39.486556 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.486562 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.486568 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.486574 | orchestrator | 2025-04-17 01:56:39.486580 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-04-17 01:56:39.486586 | orchestrator | Thursday 17 April 2025 01:48:46 +0000 (0:00:00.250) 0:04:13.495 ******** 2025-04-17 01:56:39.486591 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.486597 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.486603 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.486609 | orchestrator | 2025-04-17 01:56:39.486615 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-04-17 01:56:39.486620 | orchestrator | Thursday 17 April 2025 01:48:46 +0000 (0:00:00.243) 0:04:13.739 ******** 2025-04-17 01:56:39.486630 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.486635 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.486641 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.486647 | orchestrator | 2025-04-17 01:56:39.486653 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-04-17 01:56:39.486661 | orchestrator | Thursday 17 April 2025 01:48:48 +0000 (0:00:01.102) 0:04:14.842 ******** 2025-04-17 01:56:39.486667 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.486673 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.486679 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.486685 | orchestrator | 2025-04-17 01:56:39.486691 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-04-17 01:56:39.486697 | orchestrator | Thursday 17 April 2025 01:48:49 +0000 (0:00:00.970) 0:04:15.812 ******** 2025-04-17 01:56:39.486703 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.486709 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.486714 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.486720 | orchestrator | 2025-04-17 01:56:39.486725 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-04-17 01:56:39.486730 | orchestrator | Thursday 17 April 2025 01:48:49 +0000 (0:00:00.596) 0:04:16.409 ******** 2025-04-17 01:56:39.486736 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.486741 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.486746 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.486751 | orchestrator | 2025-04-17 01:56:39.486771 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-04-17 01:56:39.486777 | orchestrator | Thursday 17 April 2025 01:48:50 +0000 (0:00:00.617) 0:04:17.026 ******** 2025-04-17 01:56:39.486782 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.486788 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.486793 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.486798 | orchestrator | 2025-04-17 01:56:39.486803 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-04-17 01:56:39.486809 | orchestrator | 
Thursday 17 April 2025 01:48:50 +0000 (0:00:00.285) 0:04:17.312 ******** 2025-04-17 01:56:39.486814 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.486819 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.486824 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.486829 | orchestrator | 2025-04-17 01:56:39.486834 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-04-17 01:56:39.486840 | orchestrator | Thursday 17 April 2025 01:48:50 +0000 (0:00:00.454) 0:04:17.766 ******** 2025-04-17 01:56:39.486845 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.486850 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.486855 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.486860 | orchestrator | 2025-04-17 01:56:39.486866 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-04-17 01:56:39.486871 | orchestrator | Thursday 17 April 2025 01:48:51 +0000 (0:00:00.305) 0:04:18.072 ******** 2025-04-17 01:56:39.486876 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.486881 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.486886 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.486892 | orchestrator | 2025-04-17 01:56:39.486897 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-04-17 01:56:39.486902 | orchestrator | Thursday 17 April 2025 01:48:51 +0000 (0:00:00.274) 0:04:18.346 ******** 2025-04-17 01:56:39.486907 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.486912 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.486917 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.486922 | orchestrator | 2025-04-17 01:56:39.486928 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-04-17 01:56:39.486933 | orchestrator | Thursday 17 April 2025 01:48:52 +0000 (0:00:01.313) 0:04:19.660 ******** 2025-04-17 01:56:39.486938 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.486948 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.486953 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.486958 | orchestrator | 2025-04-17 01:56:39.486964 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-04-17 01:56:39.486969 | orchestrator | Thursday 17 April 2025 01:48:53 +0000 (0:00:00.296) 0:04:19.956 ******** 2025-04-17 01:56:39.486977 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:56:39.486982 | orchestrator | 2025-04-17 01:56:39.486987 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] ************* 2025-04-17 01:56:39.486993 | orchestrator | Thursday 17 April 2025 01:48:53 +0000 (0:00:00.501) 0:04:20.458 ******** 2025-04-17 01:56:39.486998 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.487006 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.487012 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.487017 | orchestrator | 2025-04-17 01:56:39.487022 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-04-17 01:56:39.487028 | orchestrator | Thursday 17 April 2025 01:48:54 +0000 (0:00:00.431) 0:04:20.889 ******** 2025-04-17 01:56:39.487033 | orchestrator | skipping: [testbed-node-0] 2025-04-17 
01:56:39.487038 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.487043 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.487049 | orchestrator | 2025-04-17 01:56:39.487054 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-04-17 01:56:39.487059 | orchestrator | Thursday 17 April 2025 01:48:54 +0000 (0:00:00.285) 0:04:21.175 ******** 2025-04-17 01:56:39.487064 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:56:39.487069 | orchestrator | 2025-04-17 01:56:39.487075 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-04-17 01:56:39.487080 | orchestrator | Thursday 17 April 2025 01:48:54 +0000 (0:00:00.536) 0:04:21.711 ******** 2025-04-17 01:56:39.487085 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.487090 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.487095 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.487101 | orchestrator | 2025-04-17 01:56:39.487106 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-04-17 01:56:39.487111 | orchestrator | Thursday 17 April 2025 01:48:56 +0000 (0:00:01.236) 0:04:22.948 ******** 2025-04-17 01:56:39.487116 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.487121 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.487126 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.487132 | orchestrator | 2025-04-17 01:56:39.487137 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-04-17 01:56:39.487142 | orchestrator | Thursday 17 April 2025 01:48:57 +0000 (0:00:01.193) 0:04:24.142 ******** 2025-04-17 01:56:39.487147 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.487152 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.487158 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.487163 | orchestrator | 2025-04-17 01:56:39.487168 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-04-17 01:56:39.487176 | orchestrator | Thursday 17 April 2025 01:48:59 +0000 (0:00:01.745) 0:04:25.887 ******** 2025-04-17 01:56:39.487181 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.487186 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.487191 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.487197 | orchestrator | 2025-04-17 01:56:39.487202 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-04-17 01:56:39.487207 | orchestrator | Thursday 17 April 2025 01:49:01 +0000 (0:00:02.010) 0:04:27.898 ******** 2025-04-17 01:56:39.487213 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:56:39.487218 | orchestrator | 2025-04-17 01:56:39.487235 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-04-17 01:56:39.487245 | orchestrator | Thursday 17 April 2025 01:49:01 +0000 (0:00:00.636) 0:04:28.534 ******** 2025-04-17 01:56:39.487250 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 
2025-04-17 01:56:39.487255 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.487260 | orchestrator | 2025-04-17 01:56:39.487266 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-04-17 01:56:39.487271 | orchestrator | Thursday 17 April 2025 01:49:23 +0000 (0:00:21.483) 0:04:50.018 ******** 2025-04-17 01:56:39.487276 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.487281 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.487287 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.487292 | orchestrator | 2025-04-17 01:56:39.487297 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-04-17 01:56:39.487302 | orchestrator | Thursday 17 April 2025 01:49:30 +0000 (0:00:07.350) 0:04:57.368 ******** 2025-04-17 01:56:39.487308 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.487313 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.487318 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.487324 | orchestrator | 2025-04-17 01:56:39.487329 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-17 01:56:39.487334 | orchestrator | Thursday 17 April 2025 01:49:31 +0000 (0:00:01.194) 0:04:58.563 ******** 2025-04-17 01:56:39.487339 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.487345 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.487350 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.487355 | orchestrator | 2025-04-17 01:56:39.487360 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-04-17 01:56:39.487365 | orchestrator | Thursday 17 April 2025 01:49:32 +0000 (0:00:00.719) 0:04:59.282 ******** 2025-04-17 01:56:39.487371 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:56:39.487376 | orchestrator | 2025-04-17 01:56:39.487381 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-04-17 01:56:39.487386 | orchestrator | Thursday 17 April 2025 01:49:33 +0000 (0:00:00.720) 0:05:00.003 ******** 2025-04-17 01:56:39.487392 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.487397 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.487402 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.487407 | orchestrator | 2025-04-17 01:56:39.487412 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-04-17 01:56:39.487418 | orchestrator | Thursday 17 April 2025 01:49:33 +0000 (0:00:00.354) 0:05:00.358 ******** 2025-04-17 01:56:39.487423 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.487428 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.487433 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.487439 | orchestrator | 2025-04-17 01:56:39.487444 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-04-17 01:56:39.487449 | orchestrator | Thursday 17 April 2025 01:49:34 +0000 (0:00:01.209) 0:05:01.567 ******** 2025-04-17 01:56:39.487454 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-17 01:56:39.487460 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-17 01:56:39.487465 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-17 
01:56:39.487470 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.487476 | orchestrator | 2025-04-17 01:56:39.487481 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-04-17 01:56:39.487486 | orchestrator | Thursday 17 April 2025 01:49:35 +0000 (0:00:01.109) 0:05:02.677 ******** 2025-04-17 01:56:39.487491 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.487496 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.487502 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.487507 | orchestrator | 2025-04-17 01:56:39.487512 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-17 01:56:39.487521 | orchestrator | Thursday 17 April 2025 01:49:36 +0000 (0:00:00.396) 0:05:03.073 ******** 2025-04-17 01:56:39.487526 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.487532 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.487548 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.487554 | orchestrator | 2025-04-17 01:56:39.487559 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-04-17 01:56:39.487564 | orchestrator | 2025-04-17 01:56:39.487569 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-17 01:56:39.487575 | orchestrator | Thursday 17 April 2025 01:49:38 +0000 (0:00:02.125) 0:05:05.198 ******** 2025-04-17 01:56:39.487580 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:56:39.487585 | orchestrator | 2025-04-17 01:56:39.487590 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-17 01:56:39.487596 | orchestrator | Thursday 17 April 2025 01:49:39 +0000 (0:00:00.680) 0:05:05.878 ******** 2025-04-17 01:56:39.487601 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.487606 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.487611 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.487617 | orchestrator | 2025-04-17 01:56:39.487622 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-17 01:56:39.487627 | orchestrator | Thursday 17 April 2025 01:49:39 +0000 (0:00:00.702) 0:05:06.581 ******** 2025-04-17 01:56:39.487632 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.487637 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.487643 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.487648 | orchestrator | 2025-04-17 01:56:39.487653 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-17 01:56:39.487658 | orchestrator | Thursday 17 April 2025 01:49:40 +0000 (0:00:00.324) 0:05:06.906 ******** 2025-04-17 01:56:39.487664 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.487669 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.487674 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.487679 | orchestrator | 2025-04-17 01:56:39.487701 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-17 01:56:39.487708 | orchestrator | Thursday 17 April 2025 01:49:40 +0000 (0:00:00.610) 0:05:07.516 ******** 2025-04-17 01:56:39.487713 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.487721 | orchestrator | skipping: 
[testbed-node-1] 2025-04-17 01:56:39.487727 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.487732 | orchestrator | 2025-04-17 01:56:39.487737 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-17 01:56:39.487743 | orchestrator | Thursday 17 April 2025 01:49:41 +0000 (0:00:00.340) 0:05:07.857 ******** 2025-04-17 01:56:39.487748 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.487753 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.487758 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.487763 | orchestrator | 2025-04-17 01:56:39.487769 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-17 01:56:39.487774 | orchestrator | Thursday 17 April 2025 01:49:41 +0000 (0:00:00.718) 0:05:08.576 ******** 2025-04-17 01:56:39.487779 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.487785 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.487790 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.487795 | orchestrator | 2025-04-17 01:56:39.487800 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-17 01:56:39.487805 | orchestrator | Thursday 17 April 2025 01:49:42 +0000 (0:00:00.306) 0:05:08.883 ******** 2025-04-17 01:56:39.487811 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.487816 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.487821 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.487826 | orchestrator | 2025-04-17 01:56:39.487835 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-17 01:56:39.487840 | orchestrator | Thursday 17 April 2025 01:49:42 +0000 (0:00:00.569) 0:05:09.452 ******** 2025-04-17 01:56:39.487845 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.487850 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.487856 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.487861 | orchestrator | 2025-04-17 01:56:39.487866 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-17 01:56:39.487871 | orchestrator | Thursday 17 April 2025 01:49:43 +0000 (0:00:00.341) 0:05:09.794 ******** 2025-04-17 01:56:39.487877 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.487882 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.487887 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.487892 | orchestrator | 2025-04-17 01:56:39.487898 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-17 01:56:39.487903 | orchestrator | Thursday 17 April 2025 01:49:43 +0000 (0:00:00.338) 0:05:10.133 ******** 2025-04-17 01:56:39.487908 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.487913 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.487918 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.487923 | orchestrator | 2025-04-17 01:56:39.487929 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-17 01:56:39.487934 | orchestrator | Thursday 17 April 2025 01:49:43 +0000 (0:00:00.342) 0:05:10.475 ******** 2025-04-17 01:56:39.487939 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.487945 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.487950 | orchestrator | ok: [testbed-node-2] 
2025-04-17 01:56:39.487955 | orchestrator | 2025-04-17 01:56:39.487960 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-17 01:56:39.487966 | orchestrator | Thursday 17 April 2025 01:49:44 +0000 (0:00:01.062) 0:05:11.538 ******** 2025-04-17 01:56:39.487971 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.487976 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.487981 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.487986 | orchestrator | 2025-04-17 01:56:39.487992 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-17 01:56:39.487997 | orchestrator | Thursday 17 April 2025 01:49:45 +0000 (0:00:00.324) 0:05:11.863 ******** 2025-04-17 01:56:39.488002 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.488007 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.488012 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.488018 | orchestrator | 2025-04-17 01:56:39.488023 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-17 01:56:39.488028 | orchestrator | Thursday 17 April 2025 01:49:45 +0000 (0:00:00.335) 0:05:12.198 ******** 2025-04-17 01:56:39.488033 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488038 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488044 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488049 | orchestrator | 2025-04-17 01:56:39.488054 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-17 01:56:39.488059 | orchestrator | Thursday 17 April 2025 01:49:45 +0000 (0:00:00.343) 0:05:12.541 ******** 2025-04-17 01:56:39.488064 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488070 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488075 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488080 | orchestrator | 2025-04-17 01:56:39.488085 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-17 01:56:39.488091 | orchestrator | Thursday 17 April 2025 01:49:46 +0000 (0:00:00.685) 0:05:13.227 ******** 2025-04-17 01:56:39.488096 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488101 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488106 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488111 | orchestrator | 2025-04-17 01:56:39.488117 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-17 01:56:39.488125 | orchestrator | Thursday 17 April 2025 01:49:46 +0000 (0:00:00.279) 0:05:13.507 ******** 2025-04-17 01:56:39.488130 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488135 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488141 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488146 | orchestrator | 2025-04-17 01:56:39.488151 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-17 01:56:39.488156 | orchestrator | Thursday 17 April 2025 01:49:47 +0000 (0:00:00.276) 0:05:13.783 ******** 2025-04-17 01:56:39.488162 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488167 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488172 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488177 | orchestrator | 2025-04-17 01:56:39.488195 | 
orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-17 01:56:39.488205 | orchestrator | Thursday 17 April 2025 01:49:47 +0000 (0:00:00.317) 0:05:14.101 ******** 2025-04-17 01:56:39.488210 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.488215 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.488221 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.488226 | orchestrator | 2025-04-17 01:56:39.488231 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-17 01:56:39.488237 | orchestrator | Thursday 17 April 2025 01:49:47 +0000 (0:00:00.470) 0:05:14.571 ******** 2025-04-17 01:56:39.488242 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.488247 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.488252 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.488258 | orchestrator | 2025-04-17 01:56:39.488263 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-17 01:56:39.488268 | orchestrator | Thursday 17 April 2025 01:49:48 +0000 (0:00:00.292) 0:05:14.863 ******** 2025-04-17 01:56:39.488273 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488282 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488288 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488293 | orchestrator | 2025-04-17 01:56:39.488298 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-17 01:56:39.488303 | orchestrator | Thursday 17 April 2025 01:49:48 +0000 (0:00:00.288) 0:05:15.151 ******** 2025-04-17 01:56:39.488309 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488314 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488319 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488324 | orchestrator | 2025-04-17 01:56:39.488330 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-17 01:56:39.488335 | orchestrator | Thursday 17 April 2025 01:49:48 +0000 (0:00:00.282) 0:05:15.434 ******** 2025-04-17 01:56:39.488340 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488345 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488350 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488356 | orchestrator | 2025-04-17 01:56:39.488361 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-17 01:56:39.488366 | orchestrator | Thursday 17 April 2025 01:49:49 +0000 (0:00:00.456) 0:05:15.890 ******** 2025-04-17 01:56:39.488371 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488377 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488382 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488387 | orchestrator | 2025-04-17 01:56:39.488392 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-17 01:56:39.488398 | orchestrator | Thursday 17 April 2025 01:49:49 +0000 (0:00:00.288) 0:05:16.178 ******** 2025-04-17 01:56:39.488403 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488408 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488413 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488418 | orchestrator | 2025-04-17 01:56:39.488424 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 
2025-04-17 01:56:39.488429 | orchestrator | Thursday 17 April 2025 01:49:49 +0000 (0:00:00.328) 0:05:16.507 ******** 2025-04-17 01:56:39.488437 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488443 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488448 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488453 | orchestrator | 2025-04-17 01:56:39.488458 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-17 01:56:39.488464 | orchestrator | Thursday 17 April 2025 01:49:50 +0000 (0:00:00.265) 0:05:16.772 ******** 2025-04-17 01:56:39.488469 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488474 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488479 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488484 | orchestrator | 2025-04-17 01:56:39.488489 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-17 01:56:39.488495 | orchestrator | Thursday 17 April 2025 01:49:50 +0000 (0:00:00.508) 0:05:17.280 ******** 2025-04-17 01:56:39.488500 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488505 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488511 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488516 | orchestrator | 2025-04-17 01:56:39.488521 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-17 01:56:39.488526 | orchestrator | Thursday 17 April 2025 01:49:50 +0000 (0:00:00.292) 0:05:17.572 ******** 2025-04-17 01:56:39.488532 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488565 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488572 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488577 | orchestrator | 2025-04-17 01:56:39.488583 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-17 01:56:39.488588 | orchestrator | Thursday 17 April 2025 01:49:51 +0000 (0:00:00.295) 0:05:17.868 ******** 2025-04-17 01:56:39.488593 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488599 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488604 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488609 | orchestrator | 2025-04-17 01:56:39.488614 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-17 01:56:39.488619 | orchestrator | Thursday 17 April 2025 01:49:51 +0000 (0:00:00.582) 0:05:18.451 ******** 2025-04-17 01:56:39.488625 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488630 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488635 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488641 | orchestrator | 2025-04-17 01:56:39.488646 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-17 01:56:39.488651 | orchestrator | Thursday 17 April 2025 01:49:52 +0000 (0:00:00.340) 0:05:18.791 ******** 2025-04-17 01:56:39.488656 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488662 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488667 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488672 | orchestrator | 2025-04-17 01:56:39.488677 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target, override from ceph_conf_overrides] *** 2025-04-17 01:56:39.488683 | orchestrator | Thursday 17 April 2025 01:49:52 +0000 (0:00:00.344) 0:05:19.135 ******** 2025-04-17 01:56:39.488703 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-17 01:56:39.488710 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-17 01:56:39.488715 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488721 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-17 01:56:39.488726 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-17 01:56:39.488731 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488737 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-17 01:56:39.488742 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-17 01:56:39.488747 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488752 | orchestrator | 2025-04-17 01:56:39.488758 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-17 01:56:39.488767 | orchestrator | Thursday 17 April 2025 01:49:52 +0000 (0:00:00.371) 0:05:19.507 ******** 2025-04-17 01:56:39.488773 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-17 01:56:39.488778 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-17 01:56:39.488783 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488788 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-17 01:56:39.488793 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-17 01:56:39.488799 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488804 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-17 01:56:39.488809 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-17 01:56:39.488814 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488820 | orchestrator | 2025-04-17 01:56:39.488825 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-17 01:56:39.488830 | orchestrator | Thursday 17 April 2025 01:49:53 +0000 (0:00:00.601) 0:05:20.109 ******** 2025-04-17 01:56:39.488835 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488840 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488846 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488851 | orchestrator | 2025-04-17 01:56:39.488856 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-17 01:56:39.488861 | orchestrator | Thursday 17 April 2025 01:49:53 +0000 (0:00:00.340) 0:05:20.450 ******** 2025-04-17 01:56:39.488866 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488872 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488877 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488882 | orchestrator | 2025-04-17 01:56:39.488887 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-17 01:56:39.488893 | orchestrator | Thursday 17 April 2025 01:49:54 +0000 (0:00:00.347) 0:05:20.798 ******** 2025-04-17 01:56:39.488898 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488903 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488908 | orchestrator | skipping: [testbed-node-2] 2025-04-17 
01:56:39.488913 | orchestrator | 2025-04-17 01:56:39.488922 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-17 01:56:39.488927 | orchestrator | Thursday 17 April 2025 01:49:54 +0000 (0:00:00.359) 0:05:21.157 ******** 2025-04-17 01:56:39.488932 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488940 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488945 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488951 | orchestrator | 2025-04-17 01:56:39.488956 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-17 01:56:39.488961 | orchestrator | Thursday 17 April 2025 01:49:54 +0000 (0:00:00.565) 0:05:21.723 ******** 2025-04-17 01:56:39.488966 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.488972 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.488977 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.488982 | orchestrator | 2025-04-17 01:56:39.488988 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-17 01:56:39.488993 | orchestrator | Thursday 17 April 2025 01:49:55 +0000 (0:00:00.338) 0:05:22.062 ******** 2025-04-17 01:56:39.488998 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489003 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.489008 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.489014 | orchestrator | 2025-04-17 01:56:39.489019 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-17 01:56:39.489024 | orchestrator | Thursday 17 April 2025 01:49:55 +0000 (0:00:00.366) 0:05:22.429 ******** 2025-04-17 01:56:39.489030 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-17 01:56:39.489035 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-17 01:56:39.489068 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-17 01:56:39.489074 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489080 | orchestrator | 2025-04-17 01:56:39.489085 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-17 01:56:39.489090 | orchestrator | Thursday 17 April 2025 01:49:56 +0000 (0:00:00.422) 0:05:22.852 ******** 2025-04-17 01:56:39.489096 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-17 01:56:39.489101 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-17 01:56:39.489106 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-17 01:56:39.489111 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489116 | orchestrator | 2025-04-17 01:56:39.489122 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-17 01:56:39.489127 | orchestrator | Thursday 17 April 2025 01:49:56 +0000 (0:00:00.429) 0:05:23.281 ******** 2025-04-17 01:56:39.489132 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-17 01:56:39.489137 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-17 01:56:39.489142 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-17 01:56:39.489147 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489151 | orchestrator | 2025-04-17 01:56:39.489156 | orchestrator | TASK [ceph-facts : reset 
rgw_instances (workaround)] *************************** 2025-04-17 01:56:39.489179 | orchestrator | Thursday 17 April 2025 01:49:57 +0000 (0:00:00.677) 0:05:23.958 ******** 2025-04-17 01:56:39.489185 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489189 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.489194 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.489199 | orchestrator | 2025-04-17 01:56:39.489204 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-17 01:56:39.489209 | orchestrator | Thursday 17 April 2025 01:49:57 +0000 (0:00:00.533) 0:05:24.492 ******** 2025-04-17 01:56:39.489213 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-17 01:56:39.489218 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489223 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-17 01:56:39.489228 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.489233 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-17 01:56:39.489237 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.489242 | orchestrator | 2025-04-17 01:56:39.489247 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-17 01:56:39.489252 | orchestrator | Thursday 17 April 2025 01:49:58 +0000 (0:00:00.478) 0:05:24.970 ******** 2025-04-17 01:56:39.489256 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489261 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.489266 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.489271 | orchestrator | 2025-04-17 01:56:39.489276 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-17 01:56:39.489281 | orchestrator | Thursday 17 April 2025 01:49:58 +0000 (0:00:00.339) 0:05:25.310 ******** 2025-04-17 01:56:39.489285 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489290 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.489295 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.489300 | orchestrator | 2025-04-17 01:56:39.489305 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-17 01:56:39.489309 | orchestrator | Thursday 17 April 2025 01:49:58 +0000 (0:00:00.375) 0:05:25.685 ******** 2025-04-17 01:56:39.489314 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-17 01:56:39.489319 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-17 01:56:39.489324 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489329 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.489333 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-17 01:56:39.489338 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.489347 | orchestrator | 2025-04-17 01:56:39.489352 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-17 01:56:39.489357 | orchestrator | Thursday 17 April 2025 01:49:59 +0000 (0:00:00.929) 0:05:26.615 ******** 2025-04-17 01:56:39.489361 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489366 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.489371 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.489376 | orchestrator | 2025-04-17 01:56:39.489381 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 
2025-04-17 01:56:39.489386 | orchestrator | Thursday 17 April 2025 01:50:00 +0000 (0:00:00.362) 0:05:26.978 ******** 2025-04-17 01:56:39.489390 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-17 01:56:39.489395 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-17 01:56:39.489400 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-17 01:56:39.489405 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-17 01:56:39.489410 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489417 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-17 01:56:39.489423 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-17 01:56:39.489427 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.489432 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-17 01:56:39.489437 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-17 01:56:39.489442 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-17 01:56:39.489447 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.489452 | orchestrator | 2025-04-17 01:56:39.489457 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-17 01:56:39.489462 | orchestrator | Thursday 17 April 2025 01:50:00 +0000 (0:00:00.786) 0:05:27.764 ******** 2025-04-17 01:56:39.489466 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489471 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.489476 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.489481 | orchestrator | 2025-04-17 01:56:39.489486 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-17 01:56:39.489490 | orchestrator | Thursday 17 April 2025 01:50:01 +0000 (0:00:00.929) 0:05:28.694 ******** 2025-04-17 01:56:39.489495 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489500 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.489505 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.489510 | orchestrator | 2025-04-17 01:56:39.489514 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-17 01:56:39.489519 | orchestrator | Thursday 17 April 2025 01:50:02 +0000 (0:00:00.601) 0:05:29.295 ******** 2025-04-17 01:56:39.489524 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489529 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.489534 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.489552 | orchestrator | 2025-04-17 01:56:39.489559 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-17 01:56:39.489564 | orchestrator | Thursday 17 April 2025 01:50:03 +0000 (0:00:00.885) 0:05:30.181 ******** 2025-04-17 01:56:39.489569 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489574 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.489579 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.489583 | orchestrator | 2025-04-17 01:56:39.489588 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-04-17 01:56:39.489593 | orchestrator | Thursday 17 April 2025 01:50:04 +0000 (0:00:00.666) 0:05:30.848 ******** 2025-04-17 01:56:39.489598 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2025-04-17 01:56:39.489615 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-17 01:56:39.489621 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-17 01:56:39.489629 | orchestrator | 2025-04-17 01:56:39.489634 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-04-17 01:56:39.489639 | orchestrator | Thursday 17 April 2025 01:50:05 +0000 (0:00:01.064) 0:05:31.912 ******** 2025-04-17 01:56:39.489644 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:56:39.489648 | orchestrator | 2025-04-17 01:56:39.489653 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-04-17 01:56:39.489658 | orchestrator | Thursday 17 April 2025 01:50:05 +0000 (0:00:00.609) 0:05:32.521 ******** 2025-04-17 01:56:39.489663 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.489668 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.489672 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.489677 | orchestrator | 2025-04-17 01:56:39.489682 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-04-17 01:56:39.489687 | orchestrator | Thursday 17 April 2025 01:50:06 +0000 (0:00:00.717) 0:05:33.239 ******** 2025-04-17 01:56:39.489691 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489696 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.489701 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.489706 | orchestrator | 2025-04-17 01:56:39.489711 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-04-17 01:56:39.489715 | orchestrator | Thursday 17 April 2025 01:50:07 +0000 (0:00:00.640) 0:05:33.879 ******** 2025-04-17 01:56:39.489720 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-17 01:56:39.489725 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-17 01:56:39.489730 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-17 01:56:39.489735 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-04-17 01:56:39.489739 | orchestrator | 2025-04-17 01:56:39.489744 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-04-17 01:56:39.489749 | orchestrator | Thursday 17 April 2025 01:50:15 +0000 (0:00:08.107) 0:05:41.987 ******** 2025-04-17 01:56:39.489754 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.489759 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.489766 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.489771 | orchestrator | 2025-04-17 01:56:39.489776 | orchestrator | TASK [ceph-mgr : get keys from monitors] *************************************** 2025-04-17 01:56:39.489781 | orchestrator | Thursday 17 April 2025 01:50:15 +0000 (0:00:00.499) 0:05:42.487 ******** 2025-04-17 01:56:39.489786 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-04-17 01:56:39.489791 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-17 01:56:39.489796 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-17 01:56:39.489801 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-04-17 01:56:39.489806 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-04-17 01:56:39.489811 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-17 01:56:39.489815 | orchestrator | 2025-04-17 01:56:39.489820 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-04-17 01:56:39.489825 | orchestrator | Thursday 17 April 2025 01:50:17 +0000 (0:00:01.670) 0:05:44.157 ******** 2025-04-17 01:56:39.489830 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-04-17 01:56:39.489835 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-17 01:56:39.489839 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-17 01:56:39.489844 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-17 01:56:39.489849 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-04-17 01:56:39.489854 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-04-17 01:56:39.489858 | orchestrator | 2025-04-17 01:56:39.489863 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-04-17 01:56:39.489868 | orchestrator | Thursday 17 April 2025 01:50:18 +0000 (0:00:01.206) 0:05:45.364 ******** 2025-04-17 01:56:39.489876 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.489881 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.489885 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.489890 | orchestrator | 2025-04-17 01:56:39.489895 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-04-17 01:56:39.489900 | orchestrator | Thursday 17 April 2025 01:50:19 +0000 (0:00:00.938) 0:05:46.302 ******** 2025-04-17 01:56:39.489905 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489909 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.489914 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.489919 | orchestrator | 2025-04-17 01:56:39.489924 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-04-17 01:56:39.489929 | orchestrator | Thursday 17 April 2025 01:50:19 +0000 (0:00:00.317) 0:05:46.620 ******** 2025-04-17 01:56:39.489933 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489938 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.489943 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.489948 | orchestrator | 2025-04-17 01:56:39.489952 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-04-17 01:56:39.489957 | orchestrator | Thursday 17 April 2025 01:50:20 +0000 (0:00:00.331) 0:05:46.951 ******** 2025-04-17 01:56:39.489962 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:56:39.489967 | orchestrator | 2025-04-17 01:56:39.489972 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] ************* 2025-04-17 01:56:39.489976 | orchestrator | Thursday 17 April 2025 01:50:20 +0000 (0:00:00.726) 0:05:47.678 ******** 2025-04-17 01:56:39.489981 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.489986 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.489991 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.489995 | orchestrator | 2025-04-17 01:56:39.490003 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-04-17 01:56:39.490042 | orchestrator | 
Thursday 17 April 2025 01:50:21 +0000 (0:00:00.344) 0:05:48.022 ******** 2025-04-17 01:56:39.490049 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.490054 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.490059 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:39.490064 | orchestrator | 2025-04-17 01:56:39.490069 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-04-17 01:56:39.490073 | orchestrator | Thursday 17 April 2025 01:50:21 +0000 (0:00:00.344) 0:05:48.367 ******** 2025-04-17 01:56:39.490078 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:56:39.490083 | orchestrator | 2025-04-17 01:56:39.490088 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-04-17 01:56:39.490093 | orchestrator | Thursday 17 April 2025 01:50:22 +0000 (0:00:00.818) 0:05:49.185 ******** 2025-04-17 01:56:39.490097 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.490102 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.490107 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.490112 | orchestrator | 2025-04-17 01:56:39.490116 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-04-17 01:56:39.490121 | orchestrator | Thursday 17 April 2025 01:50:23 +0000 (0:00:01.145) 0:05:50.330 ******** 2025-04-17 01:56:39.490126 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.490131 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.490135 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.490140 | orchestrator | 2025-04-17 01:56:39.490145 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-04-17 01:56:39.490150 | orchestrator | Thursday 17 April 2025 01:50:24 +0000 (0:00:01.102) 0:05:51.433 ******** 2025-04-17 01:56:39.490155 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.490159 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.490168 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.490173 | orchestrator | 2025-04-17 01:56:39.490177 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-04-17 01:56:39.490182 | orchestrator | Thursday 17 April 2025 01:50:26 +0000 (0:00:01.678) 0:05:53.112 ******** 2025-04-17 01:56:39.490187 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.490192 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.490197 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.490201 | orchestrator | 2025-04-17 01:56:39.490206 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-04-17 01:56:39.490211 | orchestrator | Thursday 17 April 2025 01:50:28 +0000 (0:00:02.338) 0:05:55.450 ******** 2025-04-17 01:56:39.490216 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.490220 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:39.490225 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-04-17 01:56:39.490230 | orchestrator | 2025-04-17 01:56:39.490235 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-04-17 01:56:39.490240 | orchestrator | Thursday 17 April 2025 01:50:29 +0000 (0:00:00.597) 0:05:56.047 ******** 2025-04-17 
01:56:39.490244 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-04-17 01:56:39.490249 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-04-17 01:56:39.490254 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-04-17 01:56:39.490259 | orchestrator | 2025-04-17 01:56:39.490264 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-04-17 01:56:39.490269 | orchestrator | Thursday 17 April 2025 01:50:42 +0000 (0:00:13.463) 0:06:09.511 ******** 2025-04-17 01:56:39.490274 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-04-17 01:56:39.490278 | orchestrator | 2025-04-17 01:56:39.490283 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-04-17 01:56:39.490288 | orchestrator | Thursday 17 April 2025 01:50:44 +0000 (0:00:01.695) 0:06:11.206 ******** 2025-04-17 01:56:39.490293 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.490297 | orchestrator | 2025-04-17 01:56:39.490302 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-04-17 01:56:39.490307 | orchestrator | Thursday 17 April 2025 01:50:44 +0000 (0:00:00.462) 0:06:11.668 ******** 2025-04-17 01:56:39.490312 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.490316 | orchestrator | 2025-04-17 01:56:39.490321 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-04-17 01:56:39.490326 | orchestrator | Thursday 17 April 2025 01:50:45 +0000 (0:00:00.306) 0:06:11.975 ******** 2025-04-17 01:56:39.490331 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-04-17 01:56:39.490335 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-04-17 01:56:39.490340 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-04-17 01:56:39.490345 | orchestrator | 2025-04-17 01:56:39.490350 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-04-17 01:56:39.490355 | orchestrator | Thursday 17 April 2025 01:50:51 +0000 (0:00:06.492) 0:06:18.468 ******** 2025-04-17 01:56:39.490359 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-04-17 01:56:39.490364 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-04-17 01:56:39.490373 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-04-17 01:56:39.490378 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-04-17 01:56:39.490383 | orchestrator | 2025-04-17 01:56:39.490388 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-17 01:56:39.490393 | orchestrator | Thursday 17 April 2025 01:50:56 +0000 (0:00:04.840) 0:06:23.308 ******** 2025-04-17 01:56:39.490400 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.490405 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.490410 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.490415 | orchestrator | 2025-04-17 01:56:39.490433 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-04-17 01:56:39.490439 | orchestrator | Thursday 17 
April 2025 01:50:57 +0000 (0:00:00.681) 0:06:23.990 ******** 2025-04-17 01:56:39.490444 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:56:39.490448 | orchestrator | 2025-04-17 01:56:39.490453 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-04-17 01:56:39.490458 | orchestrator | Thursday 17 April 2025 01:50:57 +0000 (0:00:00.772) 0:06:24.763 ******** 2025-04-17 01:56:39.490463 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.490468 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.490473 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.490477 | orchestrator | 2025-04-17 01:56:39.490482 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-04-17 01:56:39.490487 | orchestrator | Thursday 17 April 2025 01:50:58 +0000 (0:00:00.322) 0:06:25.085 ******** 2025-04-17 01:56:39.490492 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.490497 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.490501 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.490506 | orchestrator | 2025-04-17 01:56:39.490511 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-04-17 01:56:39.490516 | orchestrator | Thursday 17 April 2025 01:50:59 +0000 (0:00:01.407) 0:06:26.493 ******** 2025-04-17 01:56:39.490521 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-17 01:56:39.490526 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-17 01:56:39.490530 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-17 01:56:39.490535 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:39.490553 | orchestrator | 2025-04-17 01:56:39.490558 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-04-17 01:56:39.490563 | orchestrator | Thursday 17 April 2025 01:51:00 +0000 (0:00:00.640) 0:06:27.133 ******** 2025-04-17 01:56:39.490568 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.490572 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.490577 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.490582 | orchestrator | 2025-04-17 01:56:39.490587 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-17 01:56:39.490592 | orchestrator | Thursday 17 April 2025 01:51:00 +0000 (0:00:00.318) 0:06:27.452 ******** 2025-04-17 01:56:39.490596 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.490601 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.490606 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.490611 | orchestrator | 2025-04-17 01:56:39.490615 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-04-17 01:56:39.490620 | orchestrator | 2025-04-17 01:56:39.490625 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-17 01:56:39.490630 | orchestrator | Thursday 17 April 2025 01:51:02 +0000 (0:00:01.918) 0:06:29.371 ******** 2025-04-17 01:56:39.490635 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.490639 | orchestrator | 2025-04-17 01:56:39.490644 | orchestrator | TASK [ceph-handler : 
check for a mon container] ******************************** 2025-04-17 01:56:39.490649 | orchestrator | Thursday 17 April 2025 01:51:03 +0000 (0:00:00.587) 0:06:29.959 ******** 2025-04-17 01:56:39.490654 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.490661 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.490666 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.490671 | orchestrator | 2025-04-17 01:56:39.490676 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-17 01:56:39.490685 | orchestrator | Thursday 17 April 2025 01:51:03 +0000 (0:00:00.263) 0:06:30.222 ******** 2025-04-17 01:56:39.490690 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.490695 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.490700 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.490705 | orchestrator | 2025-04-17 01:56:39.490709 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-17 01:56:39.490714 | orchestrator | Thursday 17 April 2025 01:51:04 +0000 (0:00:00.655) 0:06:30.878 ******** 2025-04-17 01:56:39.490719 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.490724 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.490729 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.490733 | orchestrator | 2025-04-17 01:56:39.490738 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-17 01:56:39.490743 | orchestrator | Thursday 17 April 2025 01:51:04 +0000 (0:00:00.856) 0:06:31.735 ******** 2025-04-17 01:56:39.490748 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.490753 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.490757 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.490762 | orchestrator | 2025-04-17 01:56:39.490767 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-17 01:56:39.490772 | orchestrator | Thursday 17 April 2025 01:51:05 +0000 (0:00:00.679) 0:06:32.414 ******** 2025-04-17 01:56:39.490777 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.490781 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.490786 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.490791 | orchestrator | 2025-04-17 01:56:39.490796 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-17 01:56:39.490800 | orchestrator | Thursday 17 April 2025 01:51:05 +0000 (0:00:00.313) 0:06:32.728 ******** 2025-04-17 01:56:39.490805 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.490810 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.490815 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.490820 | orchestrator | 2025-04-17 01:56:39.490824 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-17 01:56:39.490829 | orchestrator | Thursday 17 April 2025 01:51:06 +0000 (0:00:00.297) 0:06:33.025 ******** 2025-04-17 01:56:39.490834 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.490839 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.490844 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.490848 | orchestrator | 2025-04-17 01:56:39.490857 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-17 01:56:39.490874 | orchestrator | Thursday 17 
April 2025 01:51:06 +0000 (0:00:00.613) 0:06:33.639 ******** 2025-04-17 01:56:39.490880 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.490885 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.490889 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.490894 | orchestrator | 2025-04-17 01:56:39.490899 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-17 01:56:39.490904 | orchestrator | Thursday 17 April 2025 01:51:07 +0000 (0:00:00.313) 0:06:33.952 ******** 2025-04-17 01:56:39.490909 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.490913 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.490918 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.490923 | orchestrator | 2025-04-17 01:56:39.490928 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-17 01:56:39.490933 | orchestrator | Thursday 17 April 2025 01:51:07 +0000 (0:00:00.306) 0:06:34.259 ******** 2025-04-17 01:56:39.490937 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.490942 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.490947 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.490951 | orchestrator | 2025-04-17 01:56:39.490956 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-17 01:56:39.490961 | orchestrator | Thursday 17 April 2025 01:51:07 +0000 (0:00:00.317) 0:06:34.576 ******** 2025-04-17 01:56:39.490984 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.490989 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.490994 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.490999 | orchestrator | 2025-04-17 01:56:39.491004 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-17 01:56:39.491009 | orchestrator | Thursday 17 April 2025 01:51:08 +0000 (0:00:01.058) 0:06:35.635 ******** 2025-04-17 01:56:39.491013 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491018 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491023 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491028 | orchestrator | 2025-04-17 01:56:39.491033 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-17 01:56:39.491037 | orchestrator | Thursday 17 April 2025 01:51:09 +0000 (0:00:00.309) 0:06:35.944 ******** 2025-04-17 01:56:39.491042 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491047 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491052 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491057 | orchestrator | 2025-04-17 01:56:39.491062 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-17 01:56:39.491066 | orchestrator | Thursday 17 April 2025 01:51:09 +0000 (0:00:00.310) 0:06:36.254 ******** 2025-04-17 01:56:39.491071 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.491076 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.491081 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.491086 | orchestrator | 2025-04-17 01:56:39.491091 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-17 01:56:39.491095 | orchestrator | Thursday 17 April 2025 01:51:10 +0000 (0:00:00.558) 0:06:36.813 ******** 2025-04-17 01:56:39.491100 | 
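
These "check for a ... container" tasks only gather state on the OSD nodes; their results feed the handler_*_status facts below, which in turn decide whether restart handlers fire. A sketch of one probe, assuming the role's container_binary variable (podman or docker):

  - name: check for an osd container
    ansible.builtin.command: "{{ container_binary }} ps -q --filter name=ceph-osd"  # filter pattern assumed
    register: ceph_osd_container_stat
    changed_when: false
    failed_when: false
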
orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.491105 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.491110 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.491114 | orchestrator | 2025-04-17 01:56:39.491119 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-17 01:56:39.491124 | orchestrator | Thursday 17 April 2025 01:51:10 +0000 (0:00:00.309) 0:06:37.122 ******** 2025-04-17 01:56:39.491129 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.491134 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.491138 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.491143 | orchestrator | 2025-04-17 01:56:39.491148 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-17 01:56:39.491153 | orchestrator | Thursday 17 April 2025 01:51:10 +0000 (0:00:00.439) 0:06:37.562 ******** 2025-04-17 01:56:39.491158 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491163 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491167 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491172 | orchestrator | 2025-04-17 01:56:39.491177 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-17 01:56:39.491182 | orchestrator | Thursday 17 April 2025 01:51:11 +0000 (0:00:00.332) 0:06:37.894 ******** 2025-04-17 01:56:39.491187 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491195 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491199 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491204 | orchestrator | 2025-04-17 01:56:39.491209 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-17 01:56:39.491214 | orchestrator | Thursday 17 April 2025 01:51:11 +0000 (0:00:00.606) 0:06:38.501 ******** 2025-04-17 01:56:39.491219 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491223 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491228 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491233 | orchestrator | 2025-04-17 01:56:39.491238 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-17 01:56:39.491243 | orchestrator | Thursday 17 April 2025 01:51:12 +0000 (0:00:00.318) 0:06:38.819 ******** 2025-04-17 01:56:39.491248 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.491256 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.491261 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.491266 | orchestrator | 2025-04-17 01:56:39.491270 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-17 01:56:39.491275 | orchestrator | Thursday 17 April 2025 01:51:12 +0000 (0:00:00.355) 0:06:39.175 ******** 2025-04-17 01:56:39.491280 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491285 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491290 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491294 | orchestrator | 2025-04-17 01:56:39.491299 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-17 01:56:39.491304 | orchestrator | Thursday 17 April 2025 01:51:12 +0000 (0:00:00.328) 0:06:39.503 ******** 2025-04-17 01:56:39.491309 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491314 | orchestrator | skipping: [testbed-node-4] 
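
Each status fact reduces the corresponding probe result to a boolean; on these OSD nodes only the osd, mds, rgw and crash facts are set, the rest are skipped. A sketch, reusing the hypothetical register name from the probe above:

  - name: set_fact handler_osd_status
    ansible.builtin.set_fact:
      handler_osd_status: "{{ ceph_osd_container_stat.get('stdout_lines', []) | length > 0 }}"
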
2025-04-17 01:56:39.491318 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491323 | orchestrator | 2025-04-17 01:56:39.491328 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-17 01:56:39.491345 | orchestrator | Thursday 17 April 2025 01:51:13 +0000 (0:00:00.585) 0:06:40.088 ******** 2025-04-17 01:56:39.491350 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491355 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491360 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491365 | orchestrator | 2025-04-17 01:56:39.491372 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-17 01:56:39.491377 | orchestrator | Thursday 17 April 2025 01:51:13 +0000 (0:00:00.342) 0:06:40.431 ******** 2025-04-17 01:56:39.491382 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491387 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491392 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491397 | orchestrator | 2025-04-17 01:56:39.491402 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-17 01:56:39.491407 | orchestrator | Thursday 17 April 2025 01:51:13 +0000 (0:00:00.335) 0:06:40.766 ******** 2025-04-17 01:56:39.491411 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491416 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491421 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491426 | orchestrator | 2025-04-17 01:56:39.491431 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-17 01:56:39.491436 | orchestrator | Thursday 17 April 2025 01:51:14 +0000 (0:00:00.387) 0:06:41.154 ******** 2025-04-17 01:56:39.491440 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491445 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491450 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491455 | orchestrator | 2025-04-17 01:56:39.491460 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-17 01:56:39.491465 | orchestrator | Thursday 17 April 2025 01:51:14 +0000 (0:00:00.577) 0:06:41.732 ******** 2025-04-17 01:56:39.491469 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491474 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491479 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491484 | orchestrator | 2025-04-17 01:56:39.491489 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-17 01:56:39.491494 | orchestrator | Thursday 17 April 2025 01:51:15 +0000 (0:00:00.332) 0:06:42.064 ******** 2025-04-17 01:56:39.491498 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491503 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491508 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491513 | orchestrator | 2025-04-17 01:56:39.491518 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-17 01:56:39.491523 | orchestrator | Thursday 17 April 2025 01:51:15 +0000 (0:00:00.349) 0:06:42.414 ******** 2025-04-17 01:56:39.491527 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491536 | orchestrator | skipping: [testbed-node-4] 2025-04-17 
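
The whole num_osds bookkeeping block is skipped here because this deployment declares explicit LVM volumes (consumed later by the lvm.yml scenario) instead of a raw device list. In the device-list case the role counts prospective OSDs from a dry-run report, roughly as follows, with the exact flags assumed from the task title:

  - name: run 'ceph-volume lvm batch --report' to see how many osds are to be created
    ansible.builtin.command: >
      ceph-volume --cluster {{ cluster }} lvm batch --report --format=json {{ devices | join(' ') }}
    register: lvm_batch_report
    changed_when: false
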
01:56:39.491568 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491573 | orchestrator | 2025-04-17 01:56:39.491578 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-17 01:56:39.491583 | orchestrator | Thursday 17 April 2025 01:51:16 +0000 (0:00:00.379) 0:06:42.793 ******** 2025-04-17 01:56:39.491587 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491592 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491597 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491602 | orchestrator | 2025-04-17 01:56:39.491607 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-17 01:56:39.491612 | orchestrator | Thursday 17 April 2025 01:51:16 +0000 (0:00:00.324) 0:06:43.117 ******** 2025-04-17 01:56:39.491616 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491621 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491626 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491631 | orchestrator | 2025-04-17 01:56:39.491636 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-17 01:56:39.491641 | orchestrator | Thursday 17 April 2025 01:51:16 +0000 (0:00:00.607) 0:06:43.725 ******** 2025-04-17 01:56:39.491645 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491650 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491655 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491660 | orchestrator | 2025-04-17 01:56:39.491664 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-17 01:56:39.491669 | orchestrator | Thursday 17 April 2025 01:51:17 +0000 (0:00:00.313) 0:06:44.039 ******** 2025-04-17 01:56:39.491674 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-17 01:56:39.491679 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-17 01:56:39.491684 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-17 01:56:39.491689 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-17 01:56:39.491693 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491698 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491703 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-17 01:56:39.491708 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-17 01:56:39.491713 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491717 | orchestrator | 2025-04-17 01:56:39.491722 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-17 01:56:39.491727 | orchestrator | Thursday 17 April 2025 01:51:17 +0000 (0:00:00.385) 0:06:44.424 ******** 2025-04-17 01:56:39.491732 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-17 01:56:39.491739 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-17 01:56:39.491744 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-17 01:56:39.491749 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-17 01:56:39.491754 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491759 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491764 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-17 
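
When the osd_memory_target tasks do run, the override is derived from host RAM spread across the counted OSDs with a safety factor, and only applied when it exceeds the built-in default. A sketch under those assumptions (the 0.7 factor is assumed, not read from this job):

  - name: set_fact _osd_memory_target
    ansible.builtin.set_fact:
      # memtotal_mb * 1048576 converts MiB to bytes before dividing per OSD
      _osd_memory_target: "{{ ((ansible_facts['memtotal_mb'] * 1048576 * 0.7) / num_osds) | int }}"
    when: num_osds | int > 0
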
01:56:39.491768 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-17 01:56:39.491773 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491778 | orchestrator | 2025-04-17 01:56:39.491783 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-17 01:56:39.491800 | orchestrator | Thursday 17 April 2025 01:51:18 +0000 (0:00:00.664) 0:06:45.088 ******** 2025-04-17 01:56:39.491806 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491813 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491818 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491823 | orchestrator | 2025-04-17 01:56:39.491828 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-17 01:56:39.491836 | orchestrator | Thursday 17 April 2025 01:51:18 +0000 (0:00:00.380) 0:06:45.469 ******** 2025-04-17 01:56:39.491841 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491846 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491851 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491855 | orchestrator | 2025-04-17 01:56:39.491860 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-17 01:56:39.491865 | orchestrator | Thursday 17 April 2025 01:51:19 +0000 (0:00:00.370) 0:06:45.839 ******** 2025-04-17 01:56:39.491870 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491875 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491880 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491885 | orchestrator | 2025-04-17 01:56:39.491889 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-17 01:56:39.491894 | orchestrator | Thursday 17 April 2025 01:51:19 +0000 (0:00:00.322) 0:06:46.162 ******** 2025-04-17 01:56:39.491899 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491904 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491909 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491914 | orchestrator | 2025-04-17 01:56:39.491918 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-17 01:56:39.491923 | orchestrator | Thursday 17 April 2025 01:51:19 +0000 (0:00:00.570) 0:06:46.733 ******** 2025-04-17 01:56:39.491928 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491933 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491938 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491942 | orchestrator | 2025-04-17 01:56:39.491947 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-17 01:56:39.491952 | orchestrator | Thursday 17 April 2025 01:51:20 +0000 (0:00:00.349) 0:06:47.082 ******** 2025-04-17 01:56:39.491957 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.491962 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.491967 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.491972 | orchestrator | 2025-04-17 01:56:39.491976 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-17 01:56:39.491981 | orchestrator | Thursday 17 April 2025 01:51:20 +0000 (0:00:00.348) 0:06:47.431 ******** 2025-04-17 01:56:39.491986 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2025-04-17 01:56:39.491991 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-17 01:56:39.491996 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-17 01:56:39.492000 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492005 | orchestrator | 2025-04-17 01:56:39.492010 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-17 01:56:39.492018 | orchestrator | Thursday 17 April 2025 01:51:21 +0000 (0:00:00.456) 0:06:47.888 ******** 2025-04-17 01:56:39.492023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-17 01:56:39.492027 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-17 01:56:39.492032 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-17 01:56:39.492037 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492042 | orchestrator | 2025-04-17 01:56:39.492047 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-17 01:56:39.492052 | orchestrator | Thursday 17 April 2025 01:51:21 +0000 (0:00:00.441) 0:06:48.329 ******** 2025-04-17 01:56:39.492057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-17 01:56:39.492061 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-17 01:56:39.492066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-17 01:56:39.492071 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492076 | orchestrator | 2025-04-17 01:56:39.492081 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-17 01:56:39.492090 | orchestrator | Thursday 17 April 2025 01:51:21 +0000 (0:00:00.401) 0:06:48.731 ******** 2025-04-17 01:56:39.492095 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492100 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.492105 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.492109 | orchestrator | 2025-04-17 01:56:39.492114 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-17 01:56:39.492119 | orchestrator | Thursday 17 April 2025 01:51:22 +0000 (0:00:00.565) 0:06:49.296 ******** 2025-04-17 01:56:39.492124 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-17 01:56:39.492129 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492134 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-17 01:56:39.492139 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.492143 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-17 01:56:39.492148 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.492153 | orchestrator | 2025-04-17 01:56:39.492158 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-17 01:56:39.492163 | orchestrator | Thursday 17 April 2025 01:51:22 +0000 (0:00:00.460) 0:06:49.757 ******** 2025-04-17 01:56:39.492167 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492172 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.492177 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.492182 | orchestrator | 2025-04-17 01:56:39.492187 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-17 01:56:39.492191 | 
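
The ceph-facts block resolves _radosgw_address in priority order (address block, explicit address, then interface lookup) and expands it into per-host rgw_instances; the skipped items just below show the shape this produces here: one instance named rgw0 per node, bound to its 192.168.16.x address on port 8081. A sketch of the expansion, assuming one instance per host without multisite:

  - name: set_fact rgw_instances without rgw multisite
    ansible.builtin.set_fact:
      rgw_instances: "{{ rgw_instances | default([]) + [{'instance_name': 'rgw' ~ item, 'radosgw_address': _radosgw_address, 'radosgw_frontend_port': radosgw_frontend_port | int + item | int}] }}"
    with_sequence: start=0 end={{ radosgw_num_instances | int - 1 }}
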
orchestrator | Thursday 17 April 2025 01:51:23 +0000 (0:00:00.339) 0:06:50.096 ******** 2025-04-17 01:56:39.492196 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492201 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.492206 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.492211 | orchestrator | 2025-04-17 01:56:39.492226 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-17 01:56:39.492232 | orchestrator | Thursday 17 April 2025 01:51:23 +0000 (0:00:00.333) 0:06:50.430 ******** 2025-04-17 01:56:39.492236 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-17 01:56:39.492241 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-17 01:56:39.492246 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492251 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.492256 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-17 01:56:39.492261 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.492266 | orchestrator | 2025-04-17 01:56:39.492270 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-17 01:56:39.492275 | orchestrator | Thursday 17 April 2025 01:51:24 +0000 (0:00:00.853) 0:06:51.284 ******** 2025-04-17 01:56:39.492280 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-17 01:56:39.492285 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492290 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-17 01:56:39.492295 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.492299 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-17 01:56:39.492304 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.492309 | orchestrator | 2025-04-17 01:56:39.492314 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-17 01:56:39.492319 | orchestrator | Thursday 17 April 2025 01:51:24 +0000 (0:00:00.365) 0:06:51.650 ******** 2025-04-17 01:56:39.492323 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-17 01:56:39.492328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-17 01:56:39.492333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-17 01:56:39.492343 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-17 01:56:39.492348 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-17 01:56:39.492353 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-17 01:56:39.492358 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492363 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.492367 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-17 01:56:39.492372 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-17 01:56:39.492377 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-17 01:56:39.492382 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.492386 | orchestrator | 2025-04-17 01:56:39.492391 | orchestrator | TASK [ceph-config 
: generate ceph.conf configuration file] ********************* 2025-04-17 01:56:39.492396 | orchestrator | Thursday 17 April 2025 01:51:25 +0000 (0:00:00.623) 0:06:52.273 ******** 2025-04-17 01:56:39.492401 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492406 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.492410 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.492415 | orchestrator | 2025-04-17 01:56:39.492420 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-17 01:56:39.492425 | orchestrator | Thursday 17 April 2025 01:51:26 +0000 (0:00:00.759) 0:06:53.033 ******** 2025-04-17 01:56:39.492429 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-17 01:56:39.492434 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492439 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-17 01:56:39.492444 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.492449 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-17 01:56:39.492453 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.492458 | orchestrator | 2025-04-17 01:56:39.492463 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-17 01:56:39.492468 | orchestrator | Thursday 17 April 2025 01:51:26 +0000 (0:00:00.547) 0:06:53.580 ******** 2025-04-17 01:56:39.492473 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492477 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.492482 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.492487 | orchestrator | 2025-04-17 01:56:39.492492 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-17 01:56:39.492497 | orchestrator | Thursday 17 April 2025 01:51:27 +0000 (0:00:00.767) 0:06:54.347 ******** 2025-04-17 01:56:39.492501 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492506 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.492514 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.492519 | orchestrator | 2025-04-17 01:56:39.492524 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-04-17 01:56:39.492529 | orchestrator | Thursday 17 April 2025 01:51:28 +0000 (0:00:00.506) 0:06:54.853 ******** 2025-04-17 01:56:39.492534 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.492551 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.492556 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.492561 | orchestrator | 2025-04-17 01:56:39.492566 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-04-17 01:56:39.492571 | orchestrator | Thursday 17 April 2025 01:51:28 +0000 (0:00:00.611) 0:06:55.465 ******** 2025-04-17 01:56:39.492575 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-17 01:56:39.492580 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-17 01:56:39.492588 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-17 01:56:39.492593 | orchestrator | 2025-04-17 01:56:39.492597 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-04-17 01:56:39.492602 | orchestrator | Thursday 17 April 2025 01:51:29 +0000 (0:00:00.644) 
0:06:56.110 ******** 2025-04-17 01:56:39.492622 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.492628 | orchestrator | 2025-04-17 01:56:39.492633 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-04-17 01:56:39.492637 | orchestrator | Thursday 17 April 2025 01:51:29 +0000 (0:00:00.506) 0:06:56.616 ******** 2025-04-17 01:56:39.492642 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492647 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.492652 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.492657 | orchestrator | 2025-04-17 01:56:39.492661 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-04-17 01:56:39.492666 | orchestrator | Thursday 17 April 2025 01:51:30 +0000 (0:00:00.302) 0:06:56.919 ******** 2025-04-17 01:56:39.492671 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492676 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.492680 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.492685 | orchestrator | 2025-04-17 01:56:39.492690 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-04-17 01:56:39.492695 | orchestrator | Thursday 17 April 2025 01:51:30 +0000 (0:00:00.608) 0:06:57.527 ******** 2025-04-17 01:56:39.492699 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492704 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.492709 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.492714 | orchestrator | 2025-04-17 01:56:39.492718 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-04-17 01:56:39.492723 | orchestrator | Thursday 17 April 2025 01:51:31 +0000 (0:00:00.483) 0:06:58.011 ******** 2025-04-17 01:56:39.492728 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492733 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.492737 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.492742 | orchestrator | 2025-04-17 01:56:39.492747 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-04-17 01:56:39.492752 | orchestrator | Thursday 17 April 2025 01:51:31 +0000 (0:00:00.383) 0:06:58.395 ******** 2025-04-17 01:56:39.492757 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.492761 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.492766 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.492771 | orchestrator | 2025-04-17 01:56:39.492776 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-04-17 01:56:39.492780 | orchestrator | Thursday 17 April 2025 01:51:32 +0000 (0:00:00.635) 0:06:59.030 ******** 2025-04-17 01:56:39.492785 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.492790 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.492795 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.492799 | orchestrator | 2025-04-17 01:56:39.492804 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-04-17 01:56:39.492809 | orchestrator | Thursday 17 April 2025 01:51:32 +0000 (0:00:00.480) 0:06:59.511 ******** 2025-04-17 01:56:39.492814 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': 
True}) 2025-04-17 01:56:39.492818 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-04-17 01:56:39.492823 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-04-17 01:56:39.492828 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-04-17 01:56:39.492833 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-04-17 01:56:39.492837 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-04-17 01:56:39.492842 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-04-17 01:56:39.492847 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-04-17 01:56:39.492855 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-04-17 01:56:39.492859 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-04-17 01:56:39.492864 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-04-17 01:56:39.492869 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-04-17 01:56:39.492876 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-04-17 01:56:39.492881 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-04-17 01:56:39.492886 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-04-17 01:56:39.492891 | orchestrator | 2025-04-17 01:56:39.492895 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-04-17 01:56:39.492900 | orchestrator | Thursday 17 April 2025 01:51:34 +0000 (0:00:02.107) 0:07:01.618 ******** 2025-04-17 01:56:39.492905 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.492910 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.492914 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.492919 | orchestrator | 2025-04-17 01:56:39.492924 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-04-17 01:56:39.492929 | orchestrator | Thursday 17 April 2025 01:51:35 +0000 (0:00:00.295) 0:07:01.913 ******** 2025-04-17 01:56:39.492933 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.492938 | orchestrator | 2025-04-17 01:56:39.492943 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-04-17 01:56:39.492948 | orchestrator | Thursday 17 April 2025 01:51:35 +0000 (0:00:00.771) 0:07:02.684 ******** 2025-04-17 01:56:39.492952 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-04-17 01:56:39.492971 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-04-17 01:56:39.492976 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-04-17 01:56:39.492981 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-04-17 01:56:39.492986 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-04-17 01:56:39.492991 | orchestrator | ok: 
[testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-04-17 01:56:39.492996 | orchestrator | 2025-04-17 01:56:39.493001 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-04-17 01:56:39.493005 | orchestrator | Thursday 17 April 2025 01:51:36 +0000 (0:00:00.934) 0:07:03.619 ******** 2025-04-17 01:56:39.493010 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-17 01:56:39.493015 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-17 01:56:39.493020 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-17 01:56:39.493025 | orchestrator | 2025-04-17 01:56:39.493029 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-04-17 01:56:39.493034 | orchestrator | Thursday 17 April 2025 01:51:38 +0000 (0:00:01.691) 0:07:05.310 ******** 2025-04-17 01:56:39.493039 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-17 01:56:39.493044 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-17 01:56:39.493048 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:56:39.493056 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-17 01:56:39.493061 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-17 01:56:39.493065 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:56:39.493070 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-17 01:56:39.493075 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-17 01:56:39.493080 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:56:39.493085 | orchestrator | 2025-04-17 01:56:39.493089 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-04-17 01:56:39.493098 | orchestrator | Thursday 17 April 2025 01:51:39 +0000 (0:00:01.349) 0:07:06.660 ******** 2025-04-17 01:56:39.493103 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-04-17 01:56:39.493108 | orchestrator | 2025-04-17 01:56:39.493113 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-04-17 01:56:39.493118 | orchestrator | Thursday 17 April 2025 01:51:42 +0000 (0:00:02.397) 0:07:09.058 ******** 2025-04-17 01:56:39.493123 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.493127 | orchestrator | 2025-04-17 01:56:39.493132 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-04-17 01:56:39.493139 | orchestrator | Thursday 17 April 2025 01:51:42 +0000 (0:00:00.626) 0:07:09.684 ******** 2025-04-17 01:56:39.493144 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.493149 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.493154 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.493159 | orchestrator | 2025-04-17 01:56:39.493164 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-04-17 01:56:39.493169 | orchestrator | Thursday 17 April 2025 01:51:43 +0000 (0:00:00.530) 0:07:10.215 ******** 2025-04-17 01:56:39.493173 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.493178 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.493183 | orchestrator | skipping: [testbed-node-5] 2025-04-17 
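
Setting the noup flag (run once, delegated to the first monitor) keeps the OSDs created next from being marked up before all of them are prepared and started, so the cluster does not begin peering and backfill against half-initialized OSDs; the flag is cleared by the "unset noup" task further down. A sketch, with ceph_cmd standing in for the container-wrapped ceph CLI (an assumed variable name):

  - name: set noup flag
    ansible.builtin.command: "{{ ceph_cmd }} --cluster {{ cluster }} osd set noup"
    delegate_to: "{{ groups[mon_group_name][0] }}"
    run_once: true
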
01:56:39.493188 | orchestrator | 2025-04-17 01:56:39.493193 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-04-17 01:56:39.493198 | orchestrator | Thursday 17 April 2025 01:51:43 +0000 (0:00:00.309) 0:07:10.525 ******** 2025-04-17 01:56:39.493202 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.493207 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.493212 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.493217 | orchestrator | 2025-04-17 01:56:39.493221 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-04-17 01:56:39.493226 | orchestrator | Thursday 17 April 2025 01:51:44 +0000 (0:00:00.303) 0:07:10.828 ******** 2025-04-17 01:56:39.493231 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.493236 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.493241 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.493246 | orchestrator | 2025-04-17 01:56:39.493250 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-04-17 01:56:39.493255 | orchestrator | Thursday 17 April 2025 01:51:44 +0000 (0:00:00.307) 0:07:11.136 ******** 2025-04-17 01:56:39.493260 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.493265 | orchestrator | 2025-04-17 01:56:39.493270 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-04-17 01:56:39.493274 | orchestrator | Thursday 17 April 2025 01:51:45 +0000 (0:00:00.801) 0:07:11.937 ******** 2025-04-17 01:56:39.493279 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c', 'data_vg': 'ceph-a9d35e4b-2444-59e0-b6b9-5664c21b8a9c'}) 2025-04-17 01:56:39.493285 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7ebc25b0-9278-5fc8-8be4-afb201f0a343', 'data_vg': 'ceph-7ebc25b0-9278-5fc8-8be4-afb201f0a343'}) 2025-04-17 01:56:39.493290 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-567181ad-d304-5248-b248-9710ecf6a56a', 'data_vg': 'ceph-567181ad-d304-5248-b248-9710ecf6a56a'}) 2025-04-17 01:56:39.493295 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-af980f31-aa48-52cf-851d-a23b8b791ab9', 'data_vg': 'ceph-af980f31-aa48-52cf-851d-a23b8b791ab9'}) 2025-04-17 01:56:39.493311 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b69f2859-f86c-57c9-a956-28222694e166', 'data_vg': 'ceph-b69f2859-f86c-57c9-a956-28222694e166'}) 2025-04-17 01:56:39.493320 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e', 'data_vg': 'ceph-6e7c2b16-a1dd-5b5d-909e-4c9aed3e0c7e'}) 2025-04-17 01:56:39.493325 | orchestrator | 2025-04-17 01:56:39.493329 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-04-17 01:56:39.493334 | orchestrator | Thursday 17 April 2025 01:52:25 +0000 (0:00:39.869) 0:07:51.807 ******** 2025-04-17 01:56:39.493339 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.493344 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.493349 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.493354 | orchestrator | 2025-04-17 01:56:39.493358 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] 
********************************* 2025-04-17 01:56:39.493363 | orchestrator | Thursday 17 April 2025 01:52:25 +0000 (0:00:00.501) 0:07:52.309 ******** 2025-04-17 01:56:39.493368 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.493373 | orchestrator | 2025-04-17 01:56:39.493378 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-04-17 01:56:39.493382 | orchestrator | Thursday 17 April 2025 01:52:26 +0000 (0:00:00.568) 0:07:52.877 ******** 2025-04-17 01:56:39.493387 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.493392 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.493397 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.493402 | orchestrator | 2025-04-17 01:56:39.493406 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-04-17 01:56:39.493411 | orchestrator | Thursday 17 April 2025 01:52:26 +0000 (0:00:00.646) 0:07:53.524 ******** 2025-04-17 01:56:39.493416 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:56:39.493421 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:56:39.493426 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:56:39.493430 | orchestrator | 2025-04-17 01:56:39.493435 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-04-17 01:56:39.493440 | orchestrator | Thursday 17 April 2025 01:52:28 +0000 (0:00:01.917) 0:07:55.441 ******** 2025-04-17 01:56:39.493445 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.493450 | orchestrator | 2025-04-17 01:56:39.493454 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-04-17 01:56:39.493459 | orchestrator | Thursday 17 April 2025 01:52:29 +0000 (0:00:00.558) 0:07:55.999 ******** 2025-04-17 01:56:39.493464 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:56:39.493471 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:56:39.493476 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:56:39.493481 | orchestrator | 2025-04-17 01:56:39.493486 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-04-17 01:56:39.493490 | orchestrator | Thursday 17 April 2025 01:52:30 +0000 (0:00:01.378) 0:07:57.378 ******** 2025-04-17 01:56:39.493495 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:56:39.493500 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:56:39.493505 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:56:39.493510 | orchestrator | 2025-04-17 01:56:39.493514 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-04-17 01:56:39.493521 | orchestrator | Thursday 17 April 2025 01:52:31 +0000 (0:00:01.132) 0:07:58.511 ******** 2025-04-17 01:56:39.493526 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:56:39.493531 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:56:39.493536 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:56:39.493551 | orchestrator | 2025-04-17 01:56:39.493556 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-04-17 01:56:39.493564 | orchestrator | Thursday 17 April 2025 01:52:33 +0000 (0:00:01.599) 0:08:00.110 ******** 2025-04-17 01:56:39.493569 | orchestrator | skipping: 
[testbed-node-3] 2025-04-17 01:56:39.493573 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.493578 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.493586 | orchestrator | 2025-04-17 01:56:39.493591 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-04-17 01:56:39.493596 | orchestrator | Thursday 17 April 2025 01:52:33 +0000 (0:00:00.296) 0:08:00.407 ******** 2025-04-17 01:56:39.493601 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.493606 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.493610 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.493615 | orchestrator | 2025-04-17 01:56:39.493620 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-04-17 01:56:39.493625 | orchestrator | Thursday 17 April 2025 01:52:34 +0000 (0:00:00.598) 0:08:01.005 ******** 2025-04-17 01:56:39.493630 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-17 01:56:39.493635 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-04-17 01:56:39.493639 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-04-17 01:56:39.493644 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-04-17 01:56:39.493649 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-04-17 01:56:39.493654 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-04-17 01:56:39.493658 | orchestrator | 2025-04-17 01:56:39.493663 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-04-17 01:56:39.493668 | orchestrator | Thursday 17 April 2025 01:52:35 +0000 (0:00:01.026) 0:08:02.031 ******** 2025-04-17 01:56:39.493673 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-04-17 01:56:39.493678 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-04-17 01:56:39.493683 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-04-17 01:56:39.493687 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-04-17 01:56:39.493692 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-04-17 01:56:39.493697 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-04-17 01:56:39.493702 | orchestrator | 2025-04-17 01:56:39.493707 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-04-17 01:56:39.493723 | orchestrator | Thursday 17 April 2025 01:52:38 +0000 (0:00:03.339) 0:08:05.371 ******** 2025-04-17 01:56:39.493728 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.493733 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.493738 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-04-17 01:56:39.493743 | orchestrator | 2025-04-17 01:56:39.493748 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-04-17 01:56:39.493752 | orchestrator | Thursday 17 April 2025 01:52:41 +0000 (0:00:02.995) 0:08:08.366 ******** 2025-04-17 01:56:39.493757 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.493762 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.493767 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 
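
The lvm.yml scenario above created six bluestore OSDs, two per node, from pre-provisioned VG/LV pairs; given the container_env_args selected earlier (osd_bluestore=1, osd_dmcrypt=1), the per-item call is presumably along these lines:

  - name: use ceph-volume to create bluestore osds
    ansible.builtin.command: >
      ceph-volume --cluster {{ cluster }} lvm create --bluestore --dmcrypt
      --data {{ item.data_vg }}/{{ item.data }}
    with_items: "{{ lvm_volumes }}"

At just under 40 seconds this is the longest task of the play so far; "systemd start osd" then brings the units up, and "wait for all osd to be up" polls the monitor until the OSD count matches (one retry was needed in this run).
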
2025-04-17 01:56:39.493772 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-04-17 01:56:39.493777 | orchestrator | 2025-04-17 01:56:39.493781 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-04-17 01:56:39.493786 | orchestrator | Thursday 17 April 2025 01:52:53 +0000 (0:00:12.372) 0:08:20.739 ******** 2025-04-17 01:56:39.493791 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.493796 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.493801 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.493806 | orchestrator | 2025-04-17 01:56:39.493810 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-04-17 01:56:39.493815 | orchestrator | Thursday 17 April 2025 01:52:54 +0000 (0:00:00.574) 0:08:21.314 ******** 2025-04-17 01:56:39.493820 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.493825 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.493830 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.493834 | orchestrator | 2025-04-17 01:56:39.493839 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-17 01:56:39.493844 | orchestrator | Thursday 17 April 2025 01:52:55 +0000 (0:00:01.142) 0:08:22.456 ******** 2025-04-17 01:56:39.493854 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:56:39.493859 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:56:39.493864 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:56:39.493868 | orchestrator | 2025-04-17 01:56:39.493873 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-04-17 01:56:39.493878 | orchestrator | Thursday 17 April 2025 01:52:56 +0000 (0:00:00.990) 0:08:23.447 ******** 2025-04-17 01:56:39.493883 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.493888 | orchestrator | 2025-04-17 01:56:39.493893 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-04-17 01:56:39.493897 | orchestrator | Thursday 17 April 2025 01:52:57 +0000 (0:00:00.579) 0:08:24.026 ******** 2025-04-17 01:56:39.493902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-17 01:56:39.493907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-17 01:56:39.493912 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-17 01:56:39.493917 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.493921 | orchestrator | 2025-04-17 01:56:39.493926 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-04-17 01:56:39.493931 | orchestrator | Thursday 17 April 2025 01:52:57 +0000 (0:00:00.390) 0:08:24.417 ******** 2025-04-17 01:56:39.493936 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.493940 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.493945 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.493950 | orchestrator | 2025-04-17 01:56:39.493955 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-04-17 01:56:39.493960 | orchestrator | Thursday 17 April 2025 01:52:57 +0000 (0:00:00.323) 0:08:24.740 ******** 2025-04-17 01:56:39.493965 | orchestrator | skipping: [testbed-node-3] 2025-04-17 
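The "unset noup flag" and "wait for all osd to be up" tasks above reflect the usual OSD bring-up pattern: the OSDs are activated while the cluster's noup flag is set, the flag is then cleared once (delegated to a monitor, here testbed-node-0), and the play polls until every registered OSD reports up, which is why the log shows a FAILED - RETRYING line with 60 retries left before the final ok. A minimal Ansible sketch of that polling step, assuming the ceph CLI is reachable on the delegate and that the JSON field names match the installed Ceph release (the module arguments here are illustrative, not ceph-ansible's verbatim source):

    - name: unset noup flag
      ansible.builtin.command: ceph --cluster ceph osd unset noup
      delegate_to: "{{ groups['mons'][0] }}"   # e.g. testbed-node-0, as in the log
      run_once: true

    - name: wait for all osd to be up
      ansible.builtin.command: ceph --cluster ceph osd stat -f json
      register: osd_stat
      delegate_to: "{{ groups['mons'][0] }}"
      run_once: true
      retries: 60    # matches the "60 retries left" countdown above
      delay: 10
      until:
        - (osd_stat.stdout | from_json).num_osds | int > 0
        - (osd_stat.stdout | from_json).num_osds == (osd_stat.stdout | from_json).num_up_osds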
2025-04-17 01:56:39.493969 | orchestrator |
2025-04-17 01:56:39.493974 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] ***********************
2025-04-17 01:56:39.493979 | orchestrator | Thursday 17 April 2025 01:52:58 +0000 (0:00:00.218) 0:08:24.958 ********
2025-04-17 01:56:39.493984 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.493989 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.493993 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.493998 | orchestrator |
2025-04-17 01:56:39.494003 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] *********************************
2025-04-17 01:56:39.494008 | orchestrator | Thursday 17 April 2025 01:52:58 +0000 (0:00:00.567) 0:08:25.525 ********
2025-04-17 01:56:39.494024 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.494031 | orchestrator |
2025-04-17 01:56:39.494035 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ********************
2025-04-17 01:56:39.494043 | orchestrator | Thursday 17 April 2025 01:52:58 +0000 (0:00:00.228) 0:08:25.754 ********
2025-04-17 01:56:39.494048 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.494053 | orchestrator |
2025-04-17 01:56:39.494058 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] **************
2025-04-17 01:56:39.494062 | orchestrator | Thursday 17 April 2025 01:52:59 +0000 (0:00:00.228) 0:08:25.982 ********
2025-04-17 01:56:39.494067 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.494072 | orchestrator |
2025-04-17 01:56:39.494077 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ******************************
2025-04-17 01:56:39.494081 | orchestrator | Thursday 17 April 2025 01:52:59 +0000 (0:00:00.126) 0:08:26.109 ********
2025-04-17 01:56:39.494086 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.494091 | orchestrator |
2025-04-17 01:56:39.494096 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] *****************
2025-04-17 01:56:39.494101 | orchestrator | Thursday 17 April 2025 01:52:59 +0000 (0:00:00.218) 0:08:26.327 ********
2025-04-17 01:56:39.494105 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.494110 | orchestrator |
2025-04-17 01:56:39.494118 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] *******************
2025-04-17 01:56:39.494123 | orchestrator | Thursday 17 April 2025 01:52:59 +0000 (0:00:00.215) 0:08:26.543 ********
2025-04-17 01:56:39.494128 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-17 01:56:39.494144 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-17 01:56:39.494149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-17 01:56:39.494154 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.494159 | orchestrator |
2025-04-17 01:56:39.494164 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] *********
2025-04-17 01:56:39.494169 | orchestrator | Thursday 17 April 2025 01:53:00 +0000 (0:00:00.398) 0:08:26.941 ********
2025-04-17 01:56:39.494174 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.494179 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.494183 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.494188 | orchestrator |
2025-04-17 01:56:39.494193 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] ***************
2025-04-17 01:56:39.494198 | orchestrator | Thursday 17 April 2025 01:53:00 +0000 (0:00:00.569) 0:08:27.511 ********
2025-04-17 01:56:39.494203 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.494210 | orchestrator |
2025-04-17 01:56:39.494215 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] ****************************
2025-04-17 01:56:39.494220 | orchestrator | Thursday 17 April 2025 01:53:00 +0000 (0:00:00.227) 0:08:27.738 ********
2025-04-17 01:56:39.494224 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.494229 | orchestrator |
2025-04-17 01:56:39.494234 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-04-17 01:56:39.494239 | orchestrator | Thursday 17 April 2025 01:53:01 +0000 (0:00:00.253) 0:08:27.992 ********
2025-04-17 01:56:39.494244 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.494249 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.494254 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.494258 | orchestrator |
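The handler chain that just ran (get pool list, disable balancer, disable pg autoscale on pools, restart ceph osds daemon(s), then the re-enable steps) is skipped here because no OSD restart was triggered, but its intent is worth noting: while OSD daemons are bounced, the balancer and PG autoscaler are paused so they do not start moving placement groups mid-restart. A hedged sketch of that wrapping, assuming a pools_pgautoscaler_mode fact built from the pool list (task names follow the handler titles above; the loop variable and conditionals are illustrative):

    - name: disable balancer
      ansible.builtin.command: ceph balancer off
      run_once: true

    - name: disable pg autoscale on pools
      ansible.builtin.command: "ceph osd pool set {{ item.name }} pg_autoscale_mode off"
      loop: "{{ pools_pgautoscaler_mode }}"
      when: item.mode == 'on'
      run_once: true

    # ... restart the OSD daemons serially, waiting for PGs to go clean ...

    - name: re-enable pg autoscale on pools
      ansible.builtin.command: "ceph osd pool set {{ item.name }} pg_autoscale_mode on"
      loop: "{{ pools_pgautoscaler_mode }}"
      when: item.mode == 'on'
      run_once: true

    - name: re-enable balancer
      ansible.builtin.command: ceph balancer on
      run_once: true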
2025-04-17 01:56:39.494263 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-04-17 01:56:39.494268 | orchestrator |
2025-04-17 01:56:39.494273 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-04-17 01:56:39.494278 | orchestrator | Thursday 17 April 2025 01:53:04 +0000 (0:00:02.839) 0:08:30.832 ********
2025-04-17 01:56:39.494282 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-17 01:56:39.494288 | orchestrator |
2025-04-17 01:56:39.494293 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-04-17 01:56:39.494297 | orchestrator | Thursday 17 April 2025 01:53:05 +0000 (0:00:01.285) 0:08:32.118 ********
2025-04-17 01:56:39.494302 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.494307 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:56:39.494312 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:56:39.494317 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.494321 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.494326 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:56:39.494331 | orchestrator |
2025-04-17 01:56:39.494336 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-04-17 01:56:39.494341 | orchestrator | Thursday 17 April 2025 01:53:06 +0000 (0:00:00.780) 0:08:32.898 ********
2025-04-17 01:56:39.494346 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.494351 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.494355 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.494360 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.494365 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.494370 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.494375 | orchestrator |
2025-04-17 01:56:39.494379 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-04-17 01:56:39.494387 | orchestrator | Thursday 17 April 2025 01:53:07 +0000 (0:00:01.331) 0:08:34.230 ********
2025-04-17 01:56:39.494399 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.494404 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.494409 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.494414 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.494419 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.494423 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.494428 | orchestrator |
2025-04-17 01:56:39.494433 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-04-17 01:56:39.494438 | orchestrator | Thursday 17 April 2025 01:53:08 +0000 (0:00:01.209) 0:08:35.440 ********
2025-04-17 01:56:39.494443 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.494447 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.494452 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.494457 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.494462 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.494467 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.494471 | orchestrator |
2025-04-17 01:56:39.494476 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-04-17 01:56:39.494481 | orchestrator | Thursday 17 April 2025 01:53:09 +0000 (0:00:00.877) 0:08:36.317 ********
2025-04-17 01:56:39.494486 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.494491 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.494496 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:56:39.494500 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.494505 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:56:39.494510 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:56:39.494515 | orchestrator |
2025-04-17 01:56:39.494519 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-04-17 01:56:39.494527 | orchestrator | Thursday 17 April 2025 01:53:10 +0000 (0:00:00.822) 0:08:37.139 ********
2025-04-17 01:56:39.494532 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.494567 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.494572 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.494577 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.494582 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.494587 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.494592 | orchestrator |
2025-04-17 01:56:39.494597 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-04-17 01:56:39.494602 | orchestrator | Thursday 17 April 2025 01:53:10 +0000 (0:00:00.562) 0:08:37.702 ********
2025-04-17 01:56:39.494607 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.494612 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.494617 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.494621 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.494641 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.494647 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.494652 | orchestrator |
2025-04-17 01:56:39.494657 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-04-17 01:56:39.494661 | orchestrator | Thursday 17 April 2025 01:53:11 +0000 (0:00:00.678) 0:08:38.381 ********
2025-04-17 01:56:39.494666 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.494671 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.494676 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.494681 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.494686 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.494694 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.494699 | orchestrator |
2025-04-17 01:56:39.494703 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-04-17 01:56:39.494708 | orchestrator | Thursday 17 April 2025 01:53:12 +0000 (0:00:00.519) 0:08:38.900 ********
2025-04-17 01:56:39.494713 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.494718 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.494726 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.494731 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.494736 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.494741 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.494746 | orchestrator |
2025-04-17 01:56:39.494751 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-04-17 01:56:39.494756 | orchestrator | Thursday 17 April 2025 01:53:12 +0000 (0:00:00.675) 0:08:39.576 ********
2025-04-17 01:56:39.494760 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.494765 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.494770 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.494775 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.494780 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.494785 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.494790 | orchestrator |
2025-04-17 01:56:39.494795 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-04-17 01:56:39.494799 | orchestrator | Thursday 17 April 2025 01:53:13 +0000 (0:00:00.516) 0:08:40.093 ********
2025-04-17 01:56:39.494804 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:56:39.494809 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:56:39.494814 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:56:39.494818 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.494825 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.494830 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.494835 | orchestrator |
2025-04-17 01:56:39.494840 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-04-17 01:56:39.494845 | orchestrator | Thursday 17 April 2025 01:53:14 +0000 (0:00:01.111) 0:08:41.205 ********
2025-04-17 01:56:39.494849 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.494854 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.494859 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.494864 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.494869 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.494874 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.494879 | orchestrator |
2025-04-17 01:56:39.494883 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-04-17 01:56:39.494888 | orchestrator | Thursday 17 April 2025 01:53:15 +0000 (0:00:00.566) 0:08:41.772 ********
2025-04-17 01:56:39.494893 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:56:39.494898 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:56:39.494902 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:56:39.494907 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.494912 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.494917 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.494922 | orchestrator |
2025-04-17 01:56:39.494926 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-04-17 01:56:39.494931 | orchestrator | Thursday 17 April 2025 01:53:15 +0000 (0:00:00.807) 0:08:42.580 ********
2025-04-17 01:56:39.494936 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.494941 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.494946 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.494950 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.494955 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.494960 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.494965 | orchestrator |
2025-04-17 01:56:39.494970 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-04-17 01:56:39.494975 | orchestrator | Thursday 17 April 2025 01:53:16 +0000 (0:00:00.657) 0:08:43.237 ********
2025-04-17 01:56:39.494979 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.494984 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.494989 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.494994 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.494999 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.495003 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.495024 | orchestrator |
2025-04-17 01:56:39.495029 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-04-17 01:56:39.495034 | orchestrator | Thursday 17 April 2025 01:53:17 +0000 (0:00:00.869) 0:08:44.107 ********
2025-04-17 01:56:39.495039 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.495044 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.495049 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.495054 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.495058 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.495063 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.495068 | orchestrator |
2025-04-17 01:56:39.495073 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-04-17 01:56:39.495078 | orchestrator | Thursday 17 April 2025 01:53:17 +0000 (0:00:00.625) 0:08:44.732 ********
2025-04-17 01:56:39.495082 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.495087 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.495092 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.495100 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.495106 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.495111 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.495115 | orchestrator |
2025-04-17 01:56:39.495120 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-04-17 01:56:39.495125 | orchestrator | Thursday 17 April 2025 01:53:18 +0000 (0:00:00.822) 0:08:45.555 ********
2025-04-17 01:56:39.495130 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.495148 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.495154 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.495159 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.495164 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.495169 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.495177 | orchestrator |
2025-04-17 01:56:39.495182 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-04-17 01:56:39.495186 | orchestrator | Thursday 17 April 2025 01:53:19 +0000 (0:00:00.617) 0:08:46.173 ********
2025-04-17 01:56:39.495192 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:56:39.495197 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:56:39.495201 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:56:39.495206 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.495211 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.495216 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.495220 | orchestrator |
2025-04-17 01:56:39.495225 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-04-17 01:56:39.495230 | orchestrator | Thursday 17 April 2025 01:53:20 +0000 (0:00:00.813) 0:08:46.987 ********
2025-04-17 01:56:39.495235 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:56:39.495239 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:56:39.495244 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:56:39.495249 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.495254 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.495258 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.495263 | orchestrator |
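Each "check for a X container" task above probes whether the corresponding daemon container exists on the node, and the set_fact handler_*_status tasks then condense those probes into per-node booleans that the restart handlers key off. A minimal sketch of one such pair, assuming a container_binary variable (docker or podman) and the usual daemon-hostname container naming; the register and filter names are illustrative, not ceph-ansible's verbatim source:

    - name: check for a ceph-crash container
      ansible.builtin.command: "{{ container_binary }} ps -q --filter name=ceph-crash-{{ ansible_facts['hostname'] }}"
      register: ceph_crash_container_stat
      changed_when: false
      failed_when: false

    - name: set_fact handler_crash_status
      ansible.builtin.set_fact:
        handler_crash_status: "{{ ceph_crash_container_stat.stdout_lines | default([]) | length > 0 }}"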
2025-04-17 01:56:39.495268 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-04-17 01:56:39.495272 | orchestrator | Thursday 17 April 2025 01:53:20 +0000 (0:00:00.702) 0:08:47.689 ********
2025-04-17 01:56:39.495277 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.495282 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.495287 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.495292 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.495297 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.495301 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.495306 | orchestrator |
2025-04-17 01:56:39.495311 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-04-17 01:56:39.495318 | orchestrator | Thursday 17 April 2025 01:53:21 +0000 (0:00:00.853) 0:08:48.543 ********
2025-04-17 01:56:39.495327 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.495332 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.495337 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.495341 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.495346 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.495351 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.495356 | orchestrator |
2025-04-17 01:56:39.495361 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-04-17 01:56:39.495365 | orchestrator | Thursday 17 April 2025 01:53:22 +0000 (0:00:00.643) 0:08:49.186 ********
2025-04-17 01:56:39.495370 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.495375 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.495380 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.495385 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.495389 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.495394 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.495399 | orchestrator |
2025-04-17 01:56:39.495404 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-04-17 01:56:39.495409 | orchestrator | Thursday 17 April 2025 01:53:23 +0000 (0:00:00.891) 0:08:50.078 ********
2025-04-17 01:56:39.495414 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.495418 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.495423 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.495428 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.495433 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.495438 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.495442 | orchestrator |
2025-04-17 01:56:39.495447 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-04-17 01:56:39.495452 | orchestrator | Thursday 17 April 2025 01:53:23 +0000 (0:00:00.633) 0:08:50.711 ********
2025-04-17 01:56:39.495457 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.495462 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.495466 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.495471 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.495476 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.495484 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.495488 | orchestrator |
2025-04-17 01:56:39.495493 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-04-17 01:56:39.495498 | orchestrator | Thursday 17 April 2025 01:53:24 +0000 (0:00:00.895) 0:08:51.606 ********
2025-04-17 01:56:39.495503 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.495507 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.495512 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.495517 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.495522 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.495527 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.495531 | orchestrator |
2025-04-17 01:56:39.495536 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-04-17 01:56:39.495552 | orchestrator | Thursday 17 April 2025 01:53:25 +0000 (0:00:00.664) 0:08:52.270 ********
2025-04-17 01:56:39.495557 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.495562 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.495567 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.495572 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.495576 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.495581 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.495586 | orchestrator |
2025-04-17 01:56:39.495591 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-04-17 01:56:39.495596 | orchestrator | Thursday 17 April 2025 01:53:26 +0000 (0:00:00.868) 0:08:53.138 ********
2025-04-17 01:56:39.495601 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.495606 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.495611 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.495619 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.495624 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.495629 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.495634 | orchestrator |
2025-04-17 01:56:39.495651 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-04-17 01:56:39.495658 | orchestrator | Thursday 17 April 2025 01:53:27 +0000 (0:00:00.646) 0:08:53.785 ********
2025-04-17 01:56:39.495662 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.495668 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.495672 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.495677 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.495682 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.495687 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.495692 | orchestrator |
2025-04-17 01:56:39.495697 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-04-17 01:56:39.495702 | orchestrator | Thursday 17 April 2025 01:53:27 +0000 (0:00:00.910) 0:08:54.696 ********
2025-04-17 01:56:39.495706 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.495711 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.495716 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.495721 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.495726 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.495731 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.495736 | orchestrator |
2025-04-17 01:56:39.495740 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-04-17 01:56:39.495745 | orchestrator | Thursday 17 April 2025 01:53:28 +0000 (0:00:00.633) 0:08:55.330 ********
2025-04-17 01:56:39.495750 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.495755 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.495760 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.495765 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.495770 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.495774 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.495779 | orchestrator |
2025-04-17 01:56:39.495784 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-04-17 01:56:39.495789 | orchestrator | Thursday 17 April 2025 01:53:29 +0000 (0:00:00.870) 0:08:56.200 ********
2025-04-17 01:56:39.495794 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.495799 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.495803 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.495808 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.495813 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.495818 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.495823 | orchestrator |
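On a node where OSDs still have to be created, the skipped tasks above would compute num_osds by combining what a batch run would create with what already exists: 'ceph-volume lvm batch --report' describes the OSDs a device list would yield, and 'ceph-volume lvm list' enumerates the LVM-backed OSDs already present. A sketch of that accounting, assuming JSON output and the new-style report (one entry per planned OSD); the device list is illustrative and the exact report-parsing differs between ceph-volume releases, hence the legacy/new report variants in the task names above:

    - name: run 'ceph-volume lvm batch --report' to see how many osds are to be created
      ansible.builtin.command: ceph-volume lvm batch --report --format json /dev/sdb /dev/sdc
      register: lvm_batch_report
      changed_when: false

    - name: run 'ceph-volume lvm list' to see how many osds have already been created
      ansible.builtin.command: ceph-volume lvm list --format json
      register: lvm_list
      changed_when: false

    - name: set_fact num_osds (planned plus existing)
      ansible.builtin.set_fact:
        num_osds: "{{ (lvm_batch_report.stdout | from_json | length) + (lvm_list.stdout | from_json | length) }}"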
2025-04-17 01:56:39.495828 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-04-17 01:56:39.495833 | orchestrator | Thursday 17 April 2025 01:53:30 +0000 (0:00:00.711) 0:08:56.912 ********
2025-04-17 01:56:39.495838 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-04-17 01:56:39.495843 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-04-17 01:56:39.495847 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.495852 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-04-17 01:56:39.495857 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-04-17 01:56:39.495862 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.495867 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-04-17 01:56:39.495872 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-04-17 01:56:39.495877 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.495882 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-04-17 01:56:39.495886 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-04-17 01:56:39.495891 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.495901 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-04-17 01:56:39.495906 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-04-17 01:56:39.495911 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.495916 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-04-17 01:56:39.495920 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-04-17 01:56:39.495925 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.495930 | orchestrator |
2025-04-17 01:56:39.495935 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-04-17 01:56:39.495940 | orchestrator | Thursday 17 April 2025 01:53:31 +0000 (0:00:01.041) 0:08:57.953 ********
2025-04-17 01:56:39.495945 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
2025-04-17 01:56:39.495952 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
2025-04-17 01:56:39.495957 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.495962 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
2025-04-17 01:56:39.495966 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
2025-04-17 01:56:39.495971 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.495978 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
2025-04-17 01:56:39.495983 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
2025-04-17 01:56:39.495988 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.495993 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-04-17 01:56:39.495998 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-04-17 01:56:39.496002 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.496007 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-04-17 01:56:39.496012 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-04-17 01:56:39.496017 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.496022 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-04-17 01:56:39.496027 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-04-17 01:56:39.496031 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.496036 | orchestrator |
2025-04-17 01:56:39.496041 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-04-17 01:56:39.496046 | orchestrator | Thursday 17 April 2025 01:53:32 +0000 (0:00:00.958) 0:08:58.912 ********
2025-04-17 01:56:39.496050 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.496055 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.496060 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.496065 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.496081 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.496087 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.496092 | orchestrator |
2025-04-17 01:56:39.496097 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-04-17 01:56:39.496102 | orchestrator | Thursday 17 April 2025 01:53:33 +0000 (0:00:00.860) 0:08:59.773 ********
2025-04-17 01:56:39.496106 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.496111 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.496116 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.496121 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.496126 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.496130 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.496135 | orchestrator |
2025-04-17 01:56:39.496140 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-04-17 01:56:39.496145 | orchestrator | Thursday 17 April 2025 01:53:33 +0000 (0:00:00.485) 0:09:00.259 ********
2025-04-17 01:56:39.496150 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.496154 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.496159 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.496164 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.496172 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.496176 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.496181 | orchestrator |
2025-04-17 01:56:39.496186 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-04-17 01:56:39.496191 | orchestrator | Thursday 17 April 2025 01:53:34 +0000 (0:00:00.670) 0:09:00.929 ********
2025-04-17 01:56:39.496196 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.496200 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.496205 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.496210 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.496215 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.496219 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.496224 | orchestrator |
2025-04-17 01:56:39.496229 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-04-17 01:56:39.496234 | orchestrator | Thursday 17 April 2025 01:53:34 +0000 (0:00:00.744) 0:09:01.673 ********
2025-04-17 01:56:39.496238 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.496243 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.496248 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.496253 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.496257 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.496262 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.496267 | orchestrator |
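The _osd_memory_target tasks above implement a precedence rule: an osd_memory_target supplied via ceph_conf_overrides wins outright (hence the "drop osd_memory_target from conf override" guard), and only otherwise is a value derived from host memory divided across the node's OSDs. A sketch of the fallback computation, assuming num_osds from the previous step and an illustrative 0.7 safety factor; the variable names and factor are assumptions, not ceph-ansible's verbatim source:

    - name: set_fact _osd_memory_target
      ansible.builtin.set_fact:
        # spread a conservative share of host RAM (in bytes) across this node's OSDs
        _osd_memory_target: "{{ ((ansible_facts['memtotal_mb'] * 1048576 * 0.7) / (num_osds | int)) | int }}"
      when:
        - num_osds | default(0) | int > 0
        - "'osd_memory_target' not in ceph_conf_overrides.get('osd', {})"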
2025-04-17 01:56:39.496272 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-04-17 01:56:39.496277 | orchestrator | Thursday 17 April 2025 01:53:35 +0000 (0:00:00.555) 0:09:02.374 ********
2025-04-17 01:56:39.496281 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.496286 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.496291 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.496296 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.496300 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.496305 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.496310 | orchestrator |
2025-04-17 01:56:39.496317 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-04-17 01:56:39.496322 | orchestrator | Thursday 17 April 2025 01:53:36 +0000 (0:00:00.555) 0:09:02.930 ********
2025-04-17 01:56:39.496327 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-17 01:56:39.496332 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-17 01:56:39.496336 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-17 01:56:39.496341 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.496346 | orchestrator |
2025-04-17 01:56:39.496351 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-04-17 01:56:39.496355 | orchestrator | Thursday 17 April 2025 01:53:36 +0000 (0:00:00.314) 0:09:03.244 ********
2025-04-17 01:56:39.496360 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-17 01:56:39.496365 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-17 01:56:39.496370 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-17 01:56:39.496375 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.496379 | orchestrator |
2025-04-17 01:56:39.496384 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-04-17 01:56:39.496389 | orchestrator | Thursday 17 April 2025 01:53:37 +0000 (0:00:00.546) 0:09:03.791 ********
2025-04-17 01:56:39.496394 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-17 01:56:39.496399 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-17 01:56:39.496403 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-17 01:56:39.496408 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.496413 | orchestrator |
2025-04-17 01:56:39.496418 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-17 01:56:39.496422 | orchestrator | Thursday 17 April 2025 01:53:37 +0000 (0:00:00.746) 0:09:04.537 ********
2025-04-17 01:56:39.496430 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.496435 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.496440 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.496444 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.496449 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.496454 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.496459 | orchestrator |
2025-04-17 01:56:39.496463 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-04-17 01:56:39.496468 | orchestrator | Thursday 17 April 2025 01:53:38 +0000 (0:00:00.657) 0:09:05.194 ********
2025-04-17 01:56:39.496473 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-04-17 01:56:39.496478 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.496483 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-04-17 01:56:39.496487 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.496495 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-04-17 01:56:39.496499 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.496517 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-04-17 01:56:39.496523 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.496528 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-04-17 01:56:39.496532 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.496563 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-04-17 01:56:39.496569 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.496573 | orchestrator |
2025-04-17 01:56:39.496578 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-04-17 01:56:39.496583 | orchestrator | Thursday 17 April 2025 01:53:39 +0000 (0:00:01.259) 0:09:06.454 ********
2025-04-17 01:56:39.496588 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.496593 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.496597 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.496602 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.496607 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.496611 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.496616 | orchestrator |
2025-04-17 01:56:39.496621 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-17 01:56:39.496626 | orchestrator | Thursday 17 April 2025 01:53:40 +0000 (0:00:00.633) 0:09:07.087 ********
2025-04-17 01:56:39.496630 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.496635 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.496640 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.496645 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.496649 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.496654 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.496659 | orchestrator |
2025-04-17 01:56:39.496663 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-04-17 01:56:39.496668 | orchestrator | Thursday 17 April 2025 01:53:41 +0000 (0:00:00.967) 0:09:08.055 ********
2025-04-17 01:56:39.496673 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-04-17 01:56:39.496678 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.496682 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-04-17 01:56:39.496687 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.496692 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-04-17 01:56:39.496697 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.496701 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-04-17 01:56:39.496706 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.496711 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-04-17 01:56:39.496716 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.496721 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-04-17 01:56:39.496726 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.496730 | orchestrator |
2025-04-17 01:56:39.496735 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-04-17 01:56:39.496743 | orchestrator | Thursday 17 April 2025 01:53:42 +0000 (0:00:00.904) 0:09:08.959 ********
2025-04-17 01:56:39.496748 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.496753 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.496758 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.496762 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-04-17 01:56:39.496767 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.496772 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-04-17 01:56:39.496777 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.496782 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-04-17 01:56:39.496786 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.496791 | orchestrator |
2025-04-17 01:56:39.496796 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-04-17 01:56:39.496801 | orchestrator | Thursday 17 April 2025 01:53:43 +0000 (0:00:00.959) 0:09:09.919 ********
2025-04-17 01:56:39.496805 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-04-17 01:56:39.496810 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-04-17 01:56:39.496815 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-04-17 01:56:39.496820 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-04-17 01:56:39.496824 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-04-17 01:56:39.496829 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-04-17 01:56:39.496834 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.496839 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-04-17 01:56:39.496843 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-04-17 01:56:39.496848 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-04-17 01:56:39.496853 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.496858 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-17 01:56:39.496862 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-17 01:56:39.496867 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.496872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-17 01:56:39.496877 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-04-17 01:56:39.496882 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.496886 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-04-17 01:56:39.496891 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-04-17 01:56:39.496896 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.496901 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-04-17 01:56:39.496923 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-04-17 01:56:39.496928 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-04-17 01:56:39.496933 | orchestrator | skipping: [testbed-node-5]
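Although skipped on this run, the item previews above show the shape of the per-host rgw_instances data the ceph-facts role would assemble: one entry per RGW instance with a name, a bind address picked by the earlier _radosgw_address logic, and a frontend port. Reconstructed from the values visible in the log (testbed-node-3 shown; nodes 4 and 5 differ only in address):

    rgw_instances:
      - instance_name: rgw0
        radosgw_address: 192.168.16.13
        radosgw_frontend_port: 8081

rgw_instances_host then carries this list for the host itself, and rgw_instances_all aggregates the lists across all RGW hosts.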
2025-04-17 01:56:39.496940 | orchestrator |
2025-04-17 01:56:39.496945 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-04-17 01:56:39.496949 | orchestrator | Thursday 17 April 2025 01:53:44 +0000 (0:00:01.554) 0:09:11.473 ********
2025-04-17 01:56:39.496954 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.496959 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.496964 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.496968 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.496973 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.496978 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.496982 | orchestrator |
2025-04-17 01:56:39.496990 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-04-17 01:56:39.496995 | orchestrator | Thursday 17 April 2025 01:53:46 +0000 (0:00:01.388) 0:09:12.862 ********
2025-04-17 01:56:39.497000 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.497005 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.497009 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.497014 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-04-17 01:56:39.497019 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.497023 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-04-17 01:56:39.497028 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.497033 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-04-17 01:56:39.497038 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.497042 | orchestrator |
2025-04-17 01:56:39.497047 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-04-17 01:56:39.497052 | orchestrator | Thursday 17 April 2025 01:53:47 +0000 (0:00:01.280) 0:09:14.142 ********
2025-04-17 01:56:39.497057 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.497061 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.497066 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.497071 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.497075 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.497080 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.497088 | orchestrator |
2025-04-17 01:56:39.497093 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-04-17 01:56:39.497098 | orchestrator | Thursday 17 April 2025 01:53:48 +0000 (0:00:01.202) 0:09:15.344 ********
2025-04-17 01:56:39.497102 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:56:39.497107 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:56:39.497112 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:56:39.497117 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.497122 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.497126 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.497131 | orchestrator |
2025-04-17 01:56:39.497136 | orchestrator | TASK [ceph-crash : create client.crash keyring] ********************************
2025-04-17 01:56:39.497140 | orchestrator | Thursday 17 April 2025 01:53:49 +0000 (0:00:01.154) 0:09:16.498 ********
2025-04-17 01:56:39.497145 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:56:39.497150 | orchestrator |
2025-04-17 01:56:39.497154 | orchestrator | TASK [ceph-crash : get keys from monitors] *************************************
2025-04-17 01:56:39.497159 | orchestrator | Thursday 17 April 2025 01:53:53 +0000 (0:00:03.334) 0:09:19.833 ********
2025-04-17 01:56:39.497164 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:56:39.497169 | orchestrator |
2025-04-17 01:56:39.497176 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] *********************************
2025-04-17 01:56:39.497181 | orchestrator | Thursday 17 April 2025 01:53:54 +0000 (0:00:01.590) 0:09:21.423 ********
2025-04-17 01:56:39.497186 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:56:39.497190 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:56:39.497195 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.497200 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:56:39.497204 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.497209 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.497214 | orchestrator |
2025-04-17 01:56:39.497218 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] **************************
2025-04-17 01:56:39.497223 | orchestrator | Thursday 17 April 2025 01:53:56 +0000 (0:00:01.516) 0:09:22.939 ********
2025-04-17 01:56:39.497228 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:56:39.497232 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:56:39.497237 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:56:39.497242 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.497246 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.497251 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.497259 | orchestrator |
2025-04-17 01:56:39.497264 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] **********************************
2025-04-17 01:56:39.497269 | orchestrator | Thursday 17 April 2025 01:53:57 +0000 (0:00:01.016) 0:09:23.956 ********
2025-04-17 01:56:39.497274 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-04-17 01:56:39.497279 | orchestrator |
2025-04-17 01:56:39.497284 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ********
2025-04-17 01:56:39.497289 | orchestrator | Thursday 17 April 2025 01:53:58 +0000 (0:00:01.299) 0:09:25.256 ********
2025-04-17 01:56:39.497293 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:56:39.497298 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:56:39.497303 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:56:39.497308 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.497313 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.497317 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.497322 | orchestrator |
2025-04-17 01:56:39.497327 | orchestrator | TASK [ceph-crash : start the ceph-crash service] *******************************
2025-04-17 01:56:39.497331 | orchestrator | Thursday 17 April 2025 01:54:00 +0000 (0:00:01.854) 0:09:27.110 ********
2025-04-17 01:56:39.497336 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:56:39.497341 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:56:39.497345 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.497350 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:56:39.497355 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.497359 | orchestrator | changed: [testbed-node-5]
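The ceph-crash rollout above follows a simple sequence: create a client.crash keyring once on a monitor, distribute it to every node, create the crash/posted spool directory, then template and start a systemd-managed container. The keyring step can be sketched as follows; the 'allow profile crash' capabilities are the standard Ceph profile for the crash agent, while the task wording, delegate, and output path here are illustrative:

    - name: create client.crash keyring
      ansible.builtin.command: >
        ceph auth get-or-create client.crash
        mon 'allow profile crash' mgr 'allow profile crash'
        -o /etc/ceph/ceph.client.crash.keyring
      args:
        creates: /etc/ceph/ceph.client.crash.keyring
      delegate_to: "{{ groups['mons'][0] }}"   # testbed-node-0, matching the changed: result above
      run_once: true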
01:56:39.497364 | orchestrator | 2025-04-17 01:56:39.497369 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-04-17 01:56:39.497377 | orchestrator | Thursday 17 April 2025 01:54:04 +0000 (0:00:03.837) 0:09:30.948 ******** 2025-04-17 01:56:39.497382 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.497387 | orchestrator | 2025-04-17 01:56:39.497391 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-04-17 01:56:39.497396 | orchestrator | Thursday 17 April 2025 01:54:05 +0000 (0:00:01.341) 0:09:32.290 ******** 2025-04-17 01:56:39.497401 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.497405 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.497410 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.497415 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.497420 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.497424 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.497429 | orchestrator | 2025-04-17 01:56:39.497434 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-04-17 01:56:39.497438 | orchestrator | Thursday 17 April 2025 01:54:06 +0000 (0:00:00.666) 0:09:32.956 ******** 2025-04-17 01:56:39.497443 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:39.497448 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:39.497452 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:56:39.497457 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:56:39.497462 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:56:39.497469 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:39.497473 | orchestrator | 2025-04-17 01:56:39.497478 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-04-17 01:56:39.497483 | orchestrator | Thursday 17 April 2025 01:54:08 +0000 (0:00:02.460) 0:09:35.417 ******** 2025-04-17 01:56:39.497488 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:39.497492 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:39.497497 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:39.497502 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:56:39.497506 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:56:39.497511 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:56:39.497518 | orchestrator | 2025-04-17 01:56:39.497527 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-04-17 01:56:39.497532 | orchestrator | 2025-04-17 01:56:39.497536 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-17 01:56:39.497552 | orchestrator | Thursday 17 April 2025 01:54:11 +0000 (0:00:02.449) 0:09:37.866 ******** 2025-04-17 01:56:39.497556 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.497564 | orchestrator | 2025-04-17 01:56:39.497569 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-17 01:56:39.497573 | orchestrator | Thursday 17 April 2025 01:54:11 +0000 (0:00:00.710) 0:09:38.577 ******** 2025-04-17 01:56:39.497578 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.497583 | 
2025-04-17 01:56:39.497583 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.497588 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.497592 | orchestrator |
2025-04-17 01:56:39.497597 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-04-17 01:56:39.497602 | orchestrator | Thursday 17 April 2025 01:54:12 +0000 (0:00:00.316) 0:09:38.893 ********
2025-04-17 01:56:39.497607 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.497611 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.497616 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.497621 | orchestrator |
2025-04-17 01:56:39.497625 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-04-17 01:56:39.497630 | orchestrator | Thursday 17 April 2025 01:54:12 +0000 (0:00:00.679) 0:09:39.572 ********
2025-04-17 01:56:39.497635 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.497639 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.497644 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.497649 | orchestrator |
2025-04-17 01:56:39.497654 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-04-17 01:56:39.497658 | orchestrator | Thursday 17 April 2025 01:54:13 +0000 (0:00:00.649) 0:09:40.222 ********
2025-04-17 01:56:39.497663 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.497668 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.497672 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.497677 | orchestrator |
2025-04-17 01:56:39.497682 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-04-17 01:56:39.497686 | orchestrator | Thursday 17 April 2025 01:54:14 +0000 (0:00:01.001) 0:09:41.224 ********
2025-04-17 01:56:39.497691 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.497696 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.497701 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.497705 | orchestrator |
2025-04-17 01:56:39.497712 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-04-17 01:56:39.497717 | orchestrator | Thursday 17 April 2025 01:54:14 +0000 (0:00:00.312) 0:09:41.536 ********
2025-04-17 01:56:39.497722 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.497727 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.497732 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.497736 | orchestrator |
2025-04-17 01:56:39.497741 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-04-17 01:56:39.497746 | orchestrator | Thursday 17 April 2025 01:54:15 +0000 (0:00:00.319) 0:09:41.855 ********
2025-04-17 01:56:39.497751 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.497755 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.497760 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.497765 | orchestrator |
2025-04-17 01:56:39.497769 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-04-17 01:56:39.497774 | orchestrator | Thursday 17 April 2025 01:54:15 +0000 (0:00:00.305) 0:09:42.161 ********
2025-04-17 01:56:39.497779 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.497784 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.497788 | orchestrator | skipping: [testbed-node-5]
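These "check for a … container" probes only record which daemon containers already run on each node; restart handlers later in the play fire only for daemon types that were found. Functionally each probe is little more than a filtered container listing, e.g. (podman and the container naming scheme are assumptions, not taken from the role):

  # Non-empty output means an mds container is already running on this host.
  podman ps -q --filter "name=ceph-mds-$(hostname -s)"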
2025-04-17 01:56:39.497793 | orchestrator |
2025-04-17 01:56:39.497802 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-04-17 01:56:39.497806 | orchestrator | Thursday 17 April 2025 01:54:15 +0000 (0:00:00.532) 0:09:42.693 ********
2025-04-17 01:56:39.497811 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.497818 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.497823 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.497828 | orchestrator |
2025-04-17 01:56:39.497833 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-04-17 01:56:39.497837 | orchestrator | Thursday 17 April 2025 01:54:16 +0000 (0:00:00.326) 0:09:43.020 ********
2025-04-17 01:56:39.497842 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.497847 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.497851 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.497856 | orchestrator |
2025-04-17 01:56:39.497861 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-04-17 01:56:39.497866 | orchestrator | Thursday 17 April 2025 01:54:16 +0000 (0:00:00.294) 0:09:43.314 ********
2025-04-17 01:56:39.497870 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.497875 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.497880 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.497885 | orchestrator |
2025-04-17 01:56:39.497889 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-04-17 01:56:39.497894 | orchestrator | Thursday 17 April 2025 01:54:17 +0000 (0:00:00.716) 0:09:44.030 ********
2025-04-17 01:56:39.497899 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.497903 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.497908 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.497913 | orchestrator |
2025-04-17 01:56:39.497918 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-04-17 01:56:39.497922 | orchestrator | Thursday 17 April 2025 01:54:17 +0000 (0:00:00.525) 0:09:44.556 ********
2025-04-17 01:56:39.497927 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.497932 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.497937 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.497941 | orchestrator |
2025-04-17 01:56:39.497946 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-04-17 01:56:39.497951 | orchestrator | Thursday 17 April 2025 01:54:18 +0000 (0:00:00.310) 0:09:44.866 ********
2025-04-17 01:56:39.497955 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.497960 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.497965 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.497969 | orchestrator |
2025-04-17 01:56:39.497974 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-04-17 01:56:39.497979 | orchestrator | Thursday 17 April 2025 01:54:18 +0000 (0:00:00.326) 0:09:45.193 ********
2025-04-17 01:56:39.497984 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.497988 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.497993 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.497998 | orchestrator |
2025-04-17 01:56:39.498003 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-04-17 01:56:39.498007 | orchestrator | Thursday 17 April 2025 01:54:18 +0000 (0:00:00.341) 0:09:45.534 ********
2025-04-17 01:56:39.498025 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.498031 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.498035 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.498040 | orchestrator |
2025-04-17 01:56:39.498045 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-04-17 01:56:39.498050 | orchestrator | Thursday 17 April 2025 01:54:19 +0000 (0:00:00.758) 0:09:46.293 ********
2025-04-17 01:56:39.498054 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498059 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498064 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498071 | orchestrator |
2025-04-17 01:56:39.498076 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-04-17 01:56:39.498084 | orchestrator | Thursday 17 April 2025 01:54:19 +0000 (0:00:00.336) 0:09:46.629 ********
2025-04-17 01:56:39.498088 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498093 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498098 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498103 | orchestrator |
2025-04-17 01:56:39.498107 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-04-17 01:56:39.498112 | orchestrator | Thursday 17 April 2025 01:54:20 +0000 (0:00:00.326) 0:09:46.956 ********
2025-04-17 01:56:39.498117 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498122 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498126 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498131 | orchestrator |
2025-04-17 01:56:39.498136 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-04-17 01:56:39.498141 | orchestrator | Thursday 17 April 2025 01:54:20 +0000 (0:00:00.301) 0:09:47.257 ********
2025-04-17 01:56:39.498145 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.498150 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.498155 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.498159 | orchestrator |
2025-04-17 01:56:39.498164 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-04-17 01:56:39.498169 | orchestrator | Thursday 17 April 2025 01:54:21 +0000 (0:00:00.806) 0:09:48.064 ********
2025-04-17 01:56:39.498174 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498179 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498183 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498188 | orchestrator |
2025-04-17 01:56:39.498195 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-04-17 01:56:39.498200 | orchestrator | Thursday 17 April 2025 01:54:21 +0000 (0:00:00.387) 0:09:48.452 ********
2025-04-17 01:56:39.498205 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498210 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498214 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498219 | orchestrator |
2025-04-17 01:56:39.498224 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-04-17 01:56:39.498229 | orchestrator | Thursday 17 April 2025 01:54:22 +0000 (0:00:00.380) 0:09:48.832 ********
2025-04-17 01:56:39.498233 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498238 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498243 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498247 | orchestrator |
2025-04-17 01:56:39.498252 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-04-17 01:56:39.498257 | orchestrator | Thursday 17 April 2025 01:54:22 +0000 (0:00:00.342) 0:09:49.175 ********
2025-04-17 01:56:39.498262 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498266 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498274 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498279 | orchestrator |
2025-04-17 01:56:39.498283 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-04-17 01:56:39.498288 | orchestrator | Thursday 17 April 2025 01:54:23 +0000 (0:00:00.808) 0:09:49.984 ********
2025-04-17 01:56:39.498293 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498297 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498302 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498307 | orchestrator |
2025-04-17 01:56:39.498312 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-04-17 01:56:39.498316 | orchestrator | Thursday 17 April 2025 01:54:23 +0000 (0:00:00.380) 0:09:50.364 ********
2025-04-17 01:56:39.498321 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498326 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498330 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498335 | orchestrator |
2025-04-17 01:56:39.498340 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-04-17 01:56:39.498345 | orchestrator | Thursday 17 April 2025 01:54:23 +0000 (0:00:00.320) 0:09:50.685 ********
2025-04-17 01:56:39.498352 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498357 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498362 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498366 | orchestrator |
2025-04-17 01:56:39.498371 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-04-17 01:56:39.498376 | orchestrator | Thursday 17 April 2025 01:54:24 +0000 (0:00:00.341) 0:09:51.026 ********
2025-04-17 01:56:39.498381 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498385 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498390 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498395 | orchestrator |
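The task name quotes the actual command: with a device list, ceph-volume can report the OSDs it would create without touching the disks. A sketch with placeholder devices:

  # Dry-run layout report; /dev/sdb and /dev/sdc stand in for the
  # data devices configured for the OSD nodes.
  ceph-volume lvm batch --report --format json /dev/sdb /dev/sdc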
2025-04-17 01:56:39.498400 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-04-17 01:56:39.498404 | orchestrator | Thursday 17 April 2025 01:54:24 +0000 (0:00:00.662) 0:09:51.689 ********
2025-04-17 01:56:39.498409 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498414 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498419 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498423 | orchestrator |
2025-04-17 01:56:39.498428 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-04-17 01:56:39.498433 | orchestrator | Thursday 17 April 2025 01:54:25 +0000 (0:00:00.342) 0:09:52.032 ********
2025-04-17 01:56:39.498438 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498442 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498447 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498452 | orchestrator |
2025-04-17 01:56:39.498457 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-04-17 01:56:39.498461 | orchestrator | Thursday 17 April 2025 01:54:25 +0000 (0:00:00.349) 0:09:52.381 ********
2025-04-17 01:56:39.498466 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498471 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498475 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498480 | orchestrator |
2025-04-17 01:56:39.498485 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-04-17 01:56:39.498490 | orchestrator | Thursday 17 April 2025 01:54:25 +0000 (0:00:00.341) 0:09:52.723 ********
2025-04-17 01:56:39.498494 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498499 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498504 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498508 | orchestrator |
2025-04-17 01:56:39.498513 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-04-17 01:56:39.498518 | orchestrator | Thursday 17 April 2025 01:54:26 +0000 (0:00:00.684) 0:09:53.407 ********
2025-04-17 01:56:39.498523 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-04-17 01:56:39.498528 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-04-17 01:56:39.498532 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498546 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-04-17 01:56:39.498551 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-04-17 01:56:39.498555 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498560 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-04-17 01:56:39.498567 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-04-17 01:56:39.498572 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498577 | orchestrator |
2025-04-17 01:56:39.498581 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-04-17 01:56:39.498586 | orchestrator | Thursday 17 April 2025 01:54:27 +0000 (0:00:00.392) 0:09:53.800 ********
2025-04-17 01:56:39.498591 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-04-17 01:56:39.498596 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-04-17 01:56:39.498600 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498605 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-04-17 01:56:39.498613 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-04-17 01:56:39.498618 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498623 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-04-17 01:56:39.498627 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-04-17 01:56:39.498632 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498640 | orchestrator |
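The osd_memory_target tasks (all skipped on these nodes) would derive a per-OSD memory limit from host RAM and the OSD count; the usual arithmetic is total RAM times a safety factor, divided by num_osds. A sketch of that calculation; the 0.7 factor and the OSD count are examples, not values from this deployment:

  # Bytes per OSD: MemTotal (kB) * 1024 * factor / num_osds.
  num_osds=2
  awk -v n="$num_osds" '/MemTotal/ {printf "%d\n", $2*1024*0.7/n}' /proc/meminfo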
2025-04-17 01:56:39.498644 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-04-17 01:56:39.498649 | orchestrator | Thursday 17 April 2025 01:54:27 +0000 (0:00:00.423) 0:09:54.223 ********
2025-04-17 01:56:39.498654 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498659 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498663 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498668 | orchestrator |
2025-04-17 01:56:39.498673 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-04-17 01:56:39.498678 | orchestrator | Thursday 17 April 2025 01:54:27 +0000 (0:00:00.471) 0:09:54.695 ********
2025-04-17 01:56:39.498682 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498689 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498694 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498699 | orchestrator |
2025-04-17 01:56:39.498703 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-04-17 01:56:39.498708 | orchestrator | Thursday 17 April 2025 01:54:28 +0000 (0:00:00.962) 0:09:55.657 ********
2025-04-17 01:56:39.498713 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498718 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498722 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498727 | orchestrator |
2025-04-17 01:56:39.498732 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-04-17 01:56:39.498737 | orchestrator | Thursday 17 April 2025 01:54:29 +0000 (0:00:00.392) 0:09:56.049 ********
2025-04-17 01:56:39.498741 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498746 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498751 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498756 | orchestrator |
2025-04-17 01:56:39.498760 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-04-17 01:56:39.498765 | orchestrator | Thursday 17 April 2025 01:54:29 +0000 (0:00:00.355) 0:09:56.405 ********
2025-04-17 01:56:39.498770 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498774 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498779 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498784 | orchestrator |
2025-04-17 01:56:39.498789 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-04-17 01:56:39.498796 | orchestrator | Thursday 17 April 2025 01:54:30 +0000 (0:00:00.579) 0:09:56.984 ********
2025-04-17 01:56:39.498800 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498805 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498810 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498814 | orchestrator |
2025-04-17 01:56:39.498819 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-04-17 01:56:39.498824 | orchestrator | Thursday 17 April 2025 01:54:30 +0000 (0:00:00.329) 0:09:57.314 ********
2025-04-17 01:56:39.498829 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-17 01:56:39.498833 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-17 01:56:39.498838 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-17 01:56:39.498843 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498848 | orchestrator |
2025-04-17 01:56:39.498852 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-04-17 01:56:39.498857 | orchestrator | Thursday 17 April 2025 01:54:30 +0000 (0:00:00.421) 0:09:57.736 ********
2025-04-17 01:56:39.498865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-17 01:56:39.498870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-17 01:56:39.498875 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-17 01:56:39.498879 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498884 | orchestrator |
2025-04-17 01:56:39.498889 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-04-17 01:56:39.498894 | orchestrator | Thursday 17 April 2025 01:54:31 +0000 (0:00:00.431) 0:09:58.167 ********
2025-04-17 01:56:39.498898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-17 01:56:39.498903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-17 01:56:39.498908 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-17 01:56:39.498912 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498917 | orchestrator |
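The "_radosgw_address" chain resolves the RGW bind address from an address block, an explicit address, or an interface name. Resolving an interface to its first global IPv4, as the ipv4 variant would, looks roughly like this (eth0 stands in for radosgw_interface from the inventory):

  # First global IPv4 address on the interface, without the prefix length.
  ip -4 -o addr show dev eth0 scope global | awk '{print $4}' | cut -d/ -f1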
2025-04-17 01:56:39.498922 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-17 01:56:39.498927 | orchestrator | Thursday 17 April 2025 01:54:31 +0000 (0:00:00.526) 0:09:58.694 ********
2025-04-17 01:56:39.498931 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498936 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498941 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498945 | orchestrator |
2025-04-17 01:56:39.498950 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-04-17 01:56:39.498955 | orchestrator | Thursday 17 April 2025 01:54:32 +0000 (0:00:00.339) 0:09:59.034 ********
2025-04-17 01:56:39.498960 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-04-17 01:56:39.498964 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.498969 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-04-17 01:56:39.498974 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.498979 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-04-17 01:56:39.498983 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.498988 | orchestrator |
2025-04-17 01:56:39.498993 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-04-17 01:56:39.498998 | orchestrator | Thursday 17 April 2025 01:54:33 +0000 (0:00:00.840) 0:09:59.874 ********
2025-04-17 01:56:39.499002 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.499007 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.499012 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.499017 | orchestrator |
2025-04-17 01:56:39.499021 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-04-17 01:56:39.499026 | orchestrator | Thursday 17 April 2025 01:54:33 +0000 (0:00:00.339) 0:10:00.218 ********
2025-04-17 01:56:39.499031 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.499036 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.499041 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.499045 | orchestrator |
2025-04-17 01:56:39.499050 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-04-17 01:56:39.499055 | orchestrator | Thursday 17 April 2025 01:54:33 +0000 (0:00:00.339) 0:10:00.558 ********
2025-04-17 01:56:39.499060 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-04-17 01:56:39.499064 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.499069 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-04-17 01:56:39.499074 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.499079 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-04-17 01:56:39.499083 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.499088 | orchestrator |
2025-04-17 01:56:39.499095 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-04-17 01:56:39.499099 | orchestrator | Thursday 17 April 2025 01:54:34 +0000 (0:00:00.472) 0:10:01.031 ********
2025-04-17 01:56:39.499104 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-04-17 01:56:39.499112 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.499117 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-04-17 01:56:39.499122 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.499127 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-04-17 01:56:39.499131 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.499136 | orchestrator |
2025-04-17 01:56:39.499141 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-04-17 01:56:39.499146 | orchestrator | Thursday 17 April 2025 01:54:34 +0000 (0:00:00.653) 0:10:01.685 ********
2025-04-17 01:56:39.499150 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-17 01:56:39.499155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-17 01:56:39.499160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-17 01:56:39.499164 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.499169 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-04-17 01:56:39.499174 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-04-17 01:56:39.499179 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-04-17 01:56:39.499183 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.499188 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-04-17 01:56:39.499193 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-04-17 01:56:39.499198 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-04-17 01:56:39.499202 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.499207 | orchestrator |
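Although skipped here, the rgw_instances_host items already show the computed result: one instance "rgw0" per node, bound to 192.168.16.13/.14/.15 on port 8081. Rendered into ceph.conf this would typically become a beast frontend line per instance, along these lines (the section naming follows the usual ceph-ansible convention and is an assumption, not taken from this run):

  cat >> /etc/ceph/ceph.conf <<'EOF'
  [client.rgw.testbed-node-3.rgw0]
  rgw frontends = beast endpoint=192.168.16.13:8081
  EOF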
2025-04-17 01:56:39.499212 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-04-17 01:56:39.499217 | orchestrator | Thursday 17 April 2025 01:54:35 +0000 (0:00:00.618) 0:10:02.303 ********
2025-04-17 01:56:39.499222 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.499226 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.499231 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.499236 | orchestrator |
2025-04-17 01:56:39.499241 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-04-17 01:56:39.499245 | orchestrator | Thursday 17 April 2025 01:54:36 +0000 (0:00:00.777) 0:10:03.081 ********
2025-04-17 01:56:39.499250 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-04-17 01:56:39.499255 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.499260 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-04-17 01:56:39.499264 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.499269 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-04-17 01:56:39.499274 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.499279 | orchestrator |
2025-04-17 01:56:39.499283 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-04-17 01:56:39.499288 | orchestrator | Thursday 17 April 2025 01:54:36 +0000 (0:00:00.600) 0:10:03.682 ********
2025-04-17 01:56:39.499293 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.499297 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.499302 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.499307 | orchestrator |
2025-04-17 01:56:39.499312 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-04-17 01:56:39.499317 | orchestrator | Thursday 17 April 2025 01:54:37 +0000 (0:00:00.926) 0:10:04.608 ********
2025-04-17 01:56:39.499321 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.499326 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.499331 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.499336 | orchestrator |
2025-04-17 01:56:39.499341 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] ***************************
2025-04-17 01:56:39.499346 | orchestrator | Thursday 17 April 2025 01:54:38 +0000 (0:00:00.557) 0:10:05.166 ********
2025-04-17 01:56:39.499356 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.499361 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.499366 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-04-17 01:56:39.499371 | orchestrator |
2025-04-17 01:56:39.499375 | orchestrator | TASK [ceph-facts : get current default crush rule details] *********************
2025-04-17 01:56:39.499384 | orchestrator | Thursday 17 April 2025 01:54:38 +0000 (0:00:00.445) 0:10:05.612 ********
2025-04-17 01:56:39.499389 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-04-17 01:56:39.499393 | orchestrator |
2025-04-17 01:56:39.499398 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************
2025-04-17 01:56:39.499403 | orchestrator | Thursday 17 April 2025 01:54:40 +0000 (0:00:01.959) 0:10:07.572 ********
2025-04-17 01:56:39.499409 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-04-17 01:56:39.499415 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.499420 | orchestrator |
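"get current default crush rule details" queries the cluster, and the rule it found is visible in the skipped item above: rule_id 0, replicated_rule, placing replicas with chooseleaf_firstn across hosts. The same data can be pulled by hand:

  # Dump the rule this run discovered as the default.
  ceph osd crush rule dump replicated_rule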
2025-04-17 01:56:39.499425 | orchestrator | TASK [ceph-mds : create filesystem pools] **************************************
2025-04-17 01:56:39.499429 | orchestrator | Thursday 17 April 2025 01:54:41 +0000 (0:00:00.373) 0:10:07.945 ********
2025-04-17 01:56:39.499437 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-04-17 01:56:39.499443 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-04-17 01:56:39.499448 | orchestrator |
2025-04-17 01:56:39.499452 | orchestrator | TASK [ceph-mds : create ceph filesystem] ***************************************
2025-04-17 01:56:39.499457 | orchestrator | Thursday 17 April 2025 01:54:47 +0000 (0:00:06.497) 0:10:14.443 ********
2025-04-17 01:56:39.499462 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-04-17 01:56:39.499467 | orchestrator |
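The two pool items and the filesystem creation map onto plain ceph commands; using the values shown in the logged items (pg_num/pgp_num 16, replicated_rule, size 3), the by-hand equivalent would be roughly:

  # Equivalent of "create filesystem pools" / "create ceph filesystem";
  # values come from the logged items, the command form is the generic CLI.
  ceph osd pool create cephfs_data 16 16 replicated replicated_rule
  ceph osd pool create cephfs_metadata 16 16 replicated replicated_rule
  ceph osd pool set cephfs_data size 3
  ceph osd pool set cephfs_metadata size 3
  ceph fs new cephfs cephfs_metadata cephfs_data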
2025-04-17 01:56:39.499471 | orchestrator | TASK [ceph-mds : include common.yml] *******************************************
2025-04-17 01:56:39.499476 | orchestrator | Thursday 17 April 2025 01:54:50 +0000 (0:00:02.882) 0:10:17.326 ********
2025-04-17 01:56:39.499481 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-17 01:56:39.499485 | orchestrator |
2025-04-17 01:56:39.499490 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] *********************
2025-04-17 01:56:39.499495 | orchestrator | Thursday 17 April 2025 01:54:51 +0000 (0:00:00.795) 0:10:18.121 ********
2025-04-17 01:56:39.499500 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-04-17 01:56:39.499504 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-04-17 01:56:39.499509 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-04-17 01:56:39.499514 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-04-17 01:56:39.499519 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-04-17 01:56:39.499523 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-04-17 01:56:39.499528 | orchestrator |
2025-04-17 01:56:39.499533 | orchestrator | TASK [ceph-mds : get keys from monitors] ***************************************
2025-04-17 01:56:39.499580 | orchestrator | Thursday 17 April 2025 01:54:52 +0000 (0:00:00.984) 0:10:19.106 ********
2025-04-17 01:56:39.499586 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-17 01:56:39.499595 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-04-17 01:56:39.499600 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-04-17 01:56:39.499605 | orchestrator |
2025-04-17 01:56:39.499610 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] ***********************************
2025-04-17 01:56:39.499614 | orchestrator | Thursday 17 April 2025 01:54:54 +0000 (0:00:01.717) 0:10:20.823 ********
2025-04-17 01:56:39.499619 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-04-17 01:56:39.499624 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-04-17 01:56:39.499629 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.499633 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-04-17 01:56:39.499638 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-04-17 01:56:39.499643 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.499648 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-04-17 01:56:39.499652 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-04-17 01:56:39.499657 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.499662 | orchestrator |
2025-04-17 01:56:39.499667 | orchestrator | TASK [ceph-mds : non_containerized.yml] ****************************************
2025-04-17 01:56:39.499672 | orchestrator | Thursday 17 April 2025 01:54:55 +0000 (0:00:01.060) 0:10:21.884 ********
2025-04-17 01:56:39.499676 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.499681 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.499686 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.499691 | orchestrator |
2025-04-17 01:56:39.499695 | orchestrator | TASK [ceph-mds : containerized.yml] ********************************************
2025-04-17 01:56:39.499700 | orchestrator | Thursday 17 April 2025 01:54:55 +0000 (0:00:00.423) 0:10:22.308 ********
2025-04-17 01:56:39.499705 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-17 01:56:39.499710 | orchestrator |
2025-04-17 01:56:39.499715 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************
2025-04-17 01:56:39.499719 | orchestrator | Thursday 17 April 2025 01:54:56 +0000 (0:00:00.502) 0:10:22.810 ********
2025-04-17 01:56:39.499724 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-17 01:56:39.499729 | orchestrator |
2025-04-17 01:56:39.499734 | orchestrator | TASK [ceph-mds : generate systemd unit file] ***********************************
2025-04-17 01:56:39.499738 | orchestrator | Thursday 17 April 2025 01:54:56 +0000 (0:00:00.615) 0:10:23.425 ********
2025-04-17 01:56:39.499743 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.499748 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.499753 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.499757 | orchestrator |
2025-04-17 01:56:39.499762 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************
2025-04-17 01:56:39.499767 | orchestrator | Thursday 17 April 2025 01:54:57 +0000 (0:00:01.126) 0:10:24.551 ********
2025-04-17 01:56:39.499772 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.499776 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.499781 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.499786 | orchestrator |
2025-04-17 01:56:39.499790 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] ***************************************
2025-04-17 01:56:39.499795 | orchestrator | Thursday 17 April 2025 01:54:58 +0000 (0:00:01.090) 0:10:25.642 ********
2025-04-17 01:56:39.499805 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.499810 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.499815 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.499820 | orchestrator |
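The generate/enable tasks above and the start task that follows amount to the standard systemd sequence on each MDS node; approximately (the per-host unit name follows ceph-ansible's containerized layout and is assumed, not read from the template):

  # Reload units after templating, then enable the grouping target
  # and bring up the per-host mds service.
  systemctl daemon-reload
  systemctl enable ceph-mds.target
  systemctl enable --now "ceph-mds@$(hostname -s).service"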
2025-04-17 01:56:39.499825 | orchestrator | TASK [ceph-mds : systemd start mds container] **********************************
2025-04-17 01:56:39.499830 | orchestrator | Thursday 17 April 2025 01:55:00 +0000 (0:00:01.908) 0:10:27.551 ********
2025-04-17 01:56:39.499834 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.499839 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.499844 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.499852 | orchestrator |
2025-04-17 01:56:39.499857 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] *********************************
2025-04-17 01:56:39.499862 | orchestrator | Thursday 17 April 2025 01:55:02 +0000 (0:00:01.863) 0:10:29.414 ********
2025-04-17 01:56:39.499867 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left).
2025-04-17 01:56:39.499872 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left).
2025-04-17 01:56:39.499876 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left).
2025-04-17 01:56:39.499881 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.499886 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.499891 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.499896 | orchestrator |
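"wait for mds socket to exist" polls for the daemon's admin socket and legitimately fails a few times while the container starts; each node needed one retry here before the socket appeared. A hand-rolled equivalent, with the socket path assumed from the usual /var/run/ceph layout:

  # Poll up to 5 times for the MDS admin socket, as the task does.
  for i in 1 2 3 4 5; do
    test -S "/var/run/ceph/ceph-mds.$(hostname -s).asok" && break
    sleep 5
  done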
2025-04-17 01:56:39.499900 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-04-17 01:56:39.499905 | orchestrator | Thursday 17 April 2025 01:55:19 +0000 (0:00:17.164) 0:10:46.579 ********
2025-04-17 01:56:39.499910 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.499915 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.499919 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.499924 | orchestrator |
2025-04-17 01:56:39.499929 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] **********************************
2025-04-17 01:56:39.499934 | orchestrator | Thursday 17 April 2025 01:55:20 +0000 (0:00:00.735) 0:10:47.314 ********
2025-04-17 01:56:39.499939 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-17 01:56:39.499943 | orchestrator |
2025-04-17 01:56:39.499948 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ********
2025-04-17 01:56:39.499953 | orchestrator | Thursday 17 April 2025 01:55:21 +0000 (0:00:00.915) 0:10:48.230 ********
2025-04-17 01:56:39.499958 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.499962 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.499967 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.499972 | orchestrator |
2025-04-17 01:56:39.499977 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] ***********************
2025-04-17 01:56:39.499981 | orchestrator | Thursday 17 April 2025 01:55:21 +0000 (0:00:00.345) 0:10:48.575 ********
2025-04-17 01:56:39.499986 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.499991 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.499996 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.500000 | orchestrator |
2025-04-17 01:56:39.500005 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ********************
2025-04-17 01:56:39.500010 | orchestrator | Thursday 17 April 2025 01:55:22 +0000 (0:00:01.172) 0:10:49.748 ********
2025-04-17 01:56:39.500015 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-17 01:56:39.500019 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-17 01:56:39.500024 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-17 01:56:39.500029 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500034 | orchestrator |
2025-04-17 01:56:39.500039 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] *********
2025-04-17 01:56:39.500043 | orchestrator | Thursday 17 April 2025 01:55:24 +0000 (0:00:01.046) 0:10:50.794 ********
2025-04-17 01:56:39.500048 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.500053 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.500058 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.500063 | orchestrator |
2025-04-17 01:56:39.500067 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-04-17 01:56:39.500072 | orchestrator | Thursday 17 April 2025 01:55:24 +0000 (0:00:00.363) 0:10:51.158 ********
2025-04-17 01:56:39.500077 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.500082 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.500087 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.500091 | orchestrator |
2025-04-17 01:56:39.500100 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-04-17 01:56:39.500104 | orchestrator |
2025-04-17 01:56:39.500109 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-04-17 01:56:39.500114 | orchestrator | Thursday 17 April 2025 01:55:26 +0000 (0:00:01.932) 0:10:53.090 ********
2025-04-17 01:56:39.500119 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-17 01:56:39.500126 | orchestrator |
2025-04-17 01:56:39.500131 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-04-17 01:56:39.500136 | orchestrator | Thursday 17 April 2025 01:55:27 +0000 (0:00:00.733) 0:10:53.823 ********
2025-04-17 01:56:39.500140 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500145 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500150 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500155 | orchestrator |
2025-04-17 01:56:39.500160 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-04-17 01:56:39.500164 | orchestrator | Thursday 17 April 2025 01:55:27 +0000 (0:00:00.304) 0:10:54.128 ********
2025-04-17 01:56:39.500169 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.500174 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.500179 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.500184 | orchestrator |
2025-04-17 01:56:39.500188 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-04-17 01:56:39.500193 | orchestrator | Thursday 17 April 2025 01:55:28 +0000 (0:00:00.669) 0:10:54.798 ********
2025-04-17 01:56:39.500198 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.500203 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.500212 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.500217 | orchestrator |
2025-04-17 01:56:39.500222 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-04-17 01:56:39.500226 | orchestrator | Thursday 17 April 2025 01:55:29 +0000 (0:00:01.011) 0:10:55.810 ********
2025-04-17 01:56:39.500231 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.500236 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.500241 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.500245 | orchestrator |
2025-04-17 01:56:39.500253 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-04-17 01:56:39.500258 | orchestrator | Thursday 17 April 2025 01:55:29 +0000 (0:00:00.672) 0:10:56.482 ********
2025-04-17 01:56:39.500263 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500268 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500272 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500277 | orchestrator |
2025-04-17 01:56:39.500282 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-04-17 01:56:39.500287 | orchestrator | Thursday 17 April 2025 01:55:30 +0000 (0:00:00.310) 0:10:56.793 ********
2025-04-17 01:56:39.500292 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500296 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500301 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500306 | orchestrator |
2025-04-17 01:56:39.500311 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-04-17 01:56:39.500316 | orchestrator | Thursday 17 April 2025 01:55:30 +0000 (0:00:00.319) 0:10:57.112 ********
2025-04-17 01:56:39.500320 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500325 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500330 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500335 | orchestrator |
2025-04-17 01:56:39.500339 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-04-17 01:56:39.500344 | orchestrator | Thursday 17 April 2025 01:55:30 +0000 (0:00:00.538) 0:10:57.651 ********
2025-04-17 01:56:39.500349 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500354 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500359 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500363 | orchestrator |
2025-04-17 01:56:39.500372 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-04-17 01:56:39.500377 | orchestrator | Thursday 17 April 2025 01:55:31 +0000 (0:00:00.313) 0:10:57.965 ********
2025-04-17 01:56:39.500382 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500387 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500392 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500396 | orchestrator |
2025-04-17 01:56:39.500401 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-04-17 01:56:39.500406 | orchestrator | Thursday 17 April 2025 01:55:31 +0000 (0:00:00.339) 0:10:58.305 ********
2025-04-17 01:56:39.500410 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500415 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500420 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500425 | orchestrator |
2025-04-17 01:56:39.500430 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-04-17 01:56:39.500434 | orchestrator | Thursday 17 April 2025 01:55:31 +0000 (0:00:00.343) 0:10:58.648 ********
2025-04-17 01:56:39.500439 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.500444 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.500449 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.500453 | orchestrator |
2025-04-17 01:56:39.500458 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-04-17 01:56:39.500463 | orchestrator | Thursday 17 April 2025 01:55:33 +0000 (0:00:01.201) 0:10:59.849 ********
2025-04-17 01:56:39.500468 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500473 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500478 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500482 | orchestrator |
2025-04-17 01:56:39.500487 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-04-17 01:56:39.500492 | orchestrator | Thursday 17 April 2025 01:55:33 +0000 (0:00:00.323) 0:11:00.173 ********
2025-04-17 01:56:39.500497 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500502 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500506 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500511 | orchestrator |
2025-04-17 01:56:39.500516 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-04-17 01:56:39.500521 | orchestrator | Thursday 17 April 2025 01:55:33 +0000 (0:00:00.320) 0:11:00.493 ********
2025-04-17 01:56:39.500525 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.500530 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.500535 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.500552 | orchestrator |
2025-04-17 01:56:39.500557 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-04-17 01:56:39.500561 | orchestrator | Thursday 17 April 2025 01:55:34 +0000 (0:00:00.367) 0:11:00.861 ********
2025-04-17 01:56:39.500566 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.500571 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.500576 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.500580 | orchestrator |
2025-04-17 01:56:39.500585 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-04-17 01:56:39.500590 | orchestrator | Thursday 17 April 2025 01:55:34 +0000 (0:00:00.581) 0:11:01.443 ********
2025-04-17 01:56:39.500594 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.500599 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.500604 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.500608 | orchestrator |
2025-04-17 01:56:39.500613 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-04-17 01:56:39.500618 | orchestrator | Thursday 17 April 2025 01:55:35 +0000 (0:00:00.356) 0:11:01.799 ********
2025-04-17 01:56:39.500623 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500628 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500632 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500637 | orchestrator |
2025-04-17 01:56:39.500642 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-04-17 01:56:39.500651 | orchestrator | Thursday 17 April 2025 01:55:35 +0000 (0:00:00.307) 0:11:02.106 ********
2025-04-17 01:56:39.500656 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500661 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500666 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500670 | orchestrator |
2025-04-17 01:56:39.500677 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-04-17 01:56:39.500682 | orchestrator | Thursday 17 April 2025 01:55:35 +0000 (0:00:00.299) 0:11:02.405 ********
2025-04-17 01:56:39.500687 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500692 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500697 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500701 | orchestrator |
2025-04-17 01:56:39.500706 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-04-17 01:56:39.500711 | orchestrator | Thursday 17 April 2025 01:55:36 +0000 (0:00:00.523) 0:11:02.928 ********
2025-04-17 01:56:39.500716 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.500720 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.500728 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.500733 | orchestrator |
2025-04-17 01:56:39.500741 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-04-17 01:56:39.500745 | orchestrator | Thursday 17 April 2025 01:55:36 +0000 (0:00:00.330) 0:11:03.259 ********
2025-04-17 01:56:39.500750 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500755 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500760 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500765 | orchestrator |
2025-04-17 01:56:39.500769 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-04-17 01:56:39.500774 | orchestrator | Thursday 17 April 2025 01:55:36 +0000 (0:00:00.352) 0:11:03.611 ********
2025-04-17 01:56:39.500779 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500784 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500788 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500793 | orchestrator |
2025-04-17 01:56:39.500798 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-04-17 01:56:39.500803 | orchestrator | Thursday 17 April 2025 01:55:37 +0000 (0:00:00.334) 0:11:03.945 ********
2025-04-17 01:56:39.500807 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500812 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500817 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500822 | orchestrator |
2025-04-17 01:56:39.500827 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-04-17 01:56:39.500831 | orchestrator | Thursday 17 April 2025 01:55:37 +0000 (0:00:00.641) 0:11:04.587 ********
2025-04-17 01:56:39.500836 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500841 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500846 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500850 | orchestrator |
2025-04-17 01:56:39.500855 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-04-17 01:56:39.500860 | orchestrator | Thursday 17 April 2025 01:55:38 +0000 (0:00:00.315) 0:11:04.903 ********
2025-04-17 01:56:39.500865 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500869 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500874 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500879 | orchestrator |
2025-04-17 01:56:39.500884 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-04-17 01:56:39.500888 | orchestrator | Thursday 17 April 2025 01:55:38 +0000 (0:00:00.335) 0:11:05.238 ********
2025-04-17 01:56:39.500893 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500898 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500903 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500907 | orchestrator |
2025-04-17 01:56:39.500912 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-04-17 01:56:39.500917 | orchestrator | Thursday 17 April 2025 01:55:38 +0000 (0:00:00.309) 0:11:05.548 ********
2025-04-17 01:56:39.500926 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500931 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500936 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500940 | orchestrator |
2025-04-17 01:56:39.500945 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-04-17 01:56:39.500950 | orchestrator | Thursday 17 April 2025 01:55:39 +0000 (0:00:00.624) 0:11:06.172 ********
2025-04-17 01:56:39.500955 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500960 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500965 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500969 | orchestrator |
2025-04-17 01:56:39.500974 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-04-17 01:56:39.500979 | orchestrator | Thursday 17 April 2025 01:55:39 +0000 (0:00:00.344) 0:11:06.516 ********
2025-04-17 01:56:39.500984 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.500988 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.500993 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.500998 | orchestrator |
2025-04-17 01:56:39.501003 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-04-17 01:56:39.501008 | orchestrator | Thursday 17 April 2025 01:55:40 +0000 (0:00:00.331) 0:11:06.848 ********
2025-04-17 01:56:39.501012 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.501017 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.501022 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.501027 | orchestrator |
2025-04-17 01:56:39.501031 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-04-17 01:56:39.501036 | orchestrator | Thursday 17 April 2025 01:55:40 +0000 (0:00:00.359) 0:11:07.207 ********
2025-04-17 01:56:39.501041 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.501046 | orchestrator | skipping: [testbed-node-4]
2025-04-17 01:56:39.501051 | orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.501055 | orchestrator |
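As with the batch report earlier, the 'ceph-volume lvm list' task wraps a direct CLI call that inventories logical volumes already prepared as OSDs; the role adds their count to num_osds:

  # JSON inventory of existing OSD LVs; counting its top-level keys
  # is how the role accounts for OSDs that already exist.
  ceph-volume lvm list --format json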
skipping: [testbed-node-4] 2025-04-17 01:56:39.501079 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501084 | orchestrator | 2025-04-17 01:56:39.501089 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-17 01:56:39.501096 | orchestrator | Thursday 17 April 2025 01:55:41 +0000 (0:00:00.349) 0:11:08.331 ******** 2025-04-17 01:56:39.501101 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-17 01:56:39.501106 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-17 01:56:39.501110 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501115 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-17 01:56:39.501120 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-17 01:56:39.501125 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501130 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-17 01:56:39.501135 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-17 01:56:39.501140 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501144 | orchestrator | 2025-04-17 01:56:39.501149 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-17 01:56:39.501154 | orchestrator | Thursday 17 April 2025 01:55:41 +0000 (0:00:00.368) 0:11:08.699 ******** 2025-04-17 01:56:39.501159 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-17 01:56:39.501166 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-17 01:56:39.501171 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501175 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-17 01:56:39.501184 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-17 01:56:39.501189 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501194 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-17 01:56:39.501199 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-17 01:56:39.501203 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501208 | orchestrator | 2025-04-17 01:56:39.501213 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-17 01:56:39.501217 | orchestrator | Thursday 17 April 2025 01:55:42 +0000 (0:00:00.375) 0:11:09.075 ******** 2025-04-17 01:56:39.501222 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501227 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501232 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501236 | orchestrator | 2025-04-17 01:56:39.501241 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-17 01:56:39.501246 | orchestrator | Thursday 17 April 2025 01:55:43 +0000 (0:00:00.770) 0:11:09.845 ******** 2025-04-17 01:56:39.501250 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501255 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501260 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501264 | orchestrator | 2025-04-17 01:56:39.501269 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-17 01:56:39.501274 | orchestrator | Thursday 17 April 2025 01:55:43 +0000 (0:00:00.350) 0:11:10.195 ******** 2025-04-17 
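The ceph-config tasks above size the OSD layout by asking ceph-volume for a dry-run report before anything is created. A minimal sketch of that pattern, assuming the role's standard `devices` list and a JSON report with a top-level `osds` array (the key name is an assumption of this sketch, and the log above shows the role handling both a legacy and a new report format):

```yaml
- name: run 'ceph-volume lvm batch --report' (dry run, no changes made)
  ansible.builtin.command: >
    ceph-volume lvm batch --report --format json {{ devices | join(' ') }}
  register: lvm_batch_report
  changed_when: false

- name: derive num_osds from the report
  ansible.builtin.set_fact:
    # 'osds' as the report key is an assumption here; the role also adds
    # OSDs already listed by 'ceph-volume lvm list' to the count.
    num_osds: "{{ (lvm_batch_report.stdout | from_json)['osds'] | length }}"
```

In this run the whole counting block is skipped on testbed-node-3/4/5, which carry the rgw/mds roles rather than fresh OSDs at this point in the play.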
01:56:39.501283 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501288 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501295 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501299 | orchestrator | 2025-04-17 01:56:39.501304 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-17 01:56:39.501309 | orchestrator | Thursday 17 April 2025 01:55:43 +0000 (0:00:00.350) 0:11:10.546 ******** 2025-04-17 01:56:39.501314 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501318 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501323 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501328 | orchestrator | 2025-04-17 01:56:39.501332 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-17 01:56:39.501337 | orchestrator | Thursday 17 April 2025 01:55:44 +0000 (0:00:00.359) 0:11:10.905 ******** 2025-04-17 01:56:39.501342 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501347 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501351 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501356 | orchestrator | 2025-04-17 01:56:39.501361 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-17 01:56:39.501365 | orchestrator | Thursday 17 April 2025 01:55:44 +0000 (0:00:00.636) 0:11:11.542 ******** 2025-04-17 01:56:39.501370 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501375 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501379 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501384 | orchestrator | 2025-04-17 01:56:39.501389 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-17 01:56:39.501394 | orchestrator | Thursday 17 April 2025 01:55:45 +0000 (0:00:00.293) 0:11:11.836 ******** 2025-04-17 01:56:39.501398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-17 01:56:39.501403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-17 01:56:39.501408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-17 01:56:39.501413 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501417 | orchestrator | 2025-04-17 01:56:39.501422 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-17 01:56:39.501427 | orchestrator | Thursday 17 April 2025 01:55:45 +0000 (0:00:00.404) 0:11:12.240 ******** 2025-04-17 01:56:39.501434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-17 01:56:39.501439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-17 01:56:39.501444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-17 01:56:39.501449 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501454 | orchestrator | 2025-04-17 01:56:39.501458 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-17 01:56:39.501463 | orchestrator | Thursday 17 April 2025 01:55:45 +0000 (0:00:00.362) 0:11:12.602 ******** 2025-04-17 01:56:39.501468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-17 01:56:39.501473 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-17 01:56:39.501477 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2025-04-17 01:56:39.501482 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501487 | orchestrator | 2025-04-17 01:56:39.501492 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-17 01:56:39.501498 | orchestrator | Thursday 17 April 2025 01:55:46 +0000 (0:00:00.395) 0:11:12.997 ******** 2025-04-17 01:56:39.501503 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501508 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501513 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501518 | orchestrator | 2025-04-17 01:56:39.501522 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-17 01:56:39.501527 | orchestrator | Thursday 17 April 2025 01:55:46 +0000 (0:00:00.319) 0:11:13.317 ******** 2025-04-17 01:56:39.501532 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-17 01:56:39.501575 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501581 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-17 01:56:39.501586 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501590 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-17 01:56:39.501595 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501600 | orchestrator | 2025-04-17 01:56:39.501605 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-17 01:56:39.501610 | orchestrator | Thursday 17 April 2025 01:55:47 +0000 (0:00:00.608) 0:11:13.925 ******** 2025-04-17 01:56:39.501614 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501619 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501624 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501628 | orchestrator | 2025-04-17 01:56:39.501633 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-17 01:56:39.501638 | orchestrator | Thursday 17 April 2025 01:55:47 +0000 (0:00:00.297) 0:11:14.222 ******** 2025-04-17 01:56:39.501643 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501647 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501652 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501657 | orchestrator | 2025-04-17 01:56:39.501662 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-17 01:56:39.501666 | orchestrator | Thursday 17 April 2025 01:55:47 +0000 (0:00:00.294) 0:11:14.517 ******** 2025-04-17 01:56:39.501671 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-17 01:56:39.501676 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501681 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-17 01:56:39.501685 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501690 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-17 01:56:39.501695 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501700 | orchestrator | 2025-04-17 01:56:39.501704 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-17 01:56:39.501709 | orchestrator | Thursday 17 April 2025 01:55:48 +0000 (0:00:00.381) 0:11:14.898 ******** 2025-04-17 01:56:39.501714 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  
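The rgw_instances entries the facts above would assemble are visible in the skipped loop items: one rgw0 instance per node, bound to that node's address on port 8081. Reconstructed as YAML (values copied from the log, nothing added):

```yaml
rgw_instances:
  - instance_name: rgw0
    radosgw_address: 192.168.16.13   # .14 on testbed-node-4, .15 on testbed-node-5
    radosgw_frontend_port: 8081
```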
2025-04-17 01:56:39.501726 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501731 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-17 01:56:39.501736 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501740 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-17 01:56:39.501745 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501750 | orchestrator | 2025-04-17 01:56:39.501755 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-17 01:56:39.501759 | orchestrator | Thursday 17 April 2025 01:55:48 +0000 (0:00:00.477) 0:11:15.375 ******** 2025-04-17 01:56:39.501764 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-17 01:56:39.501769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-17 01:56:39.501774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-17 01:56:39.501778 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-17 01:56:39.501783 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501788 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-17 01:56:39.501793 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-17 01:56:39.501797 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501802 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-17 01:56:39.501807 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-17 01:56:39.501812 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-17 01:56:39.501816 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501821 | orchestrator | 2025-04-17 01:56:39.501826 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-17 01:56:39.501831 | orchestrator | Thursday 17 April 2025 01:55:49 +0000 (0:00:00.476) 0:11:15.852 ******** 2025-04-17 01:56:39.501835 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501840 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501845 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501849 | orchestrator | 2025-04-17 01:56:39.501854 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-17 01:56:39.501859 | orchestrator | Thursday 17 April 2025 01:55:49 +0000 (0:00:00.611) 0:11:16.463 ******** 2025-04-17 01:56:39.501864 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-17 01:56:39.501868 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501873 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-17 01:56:39.501878 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501883 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-17 01:56:39.501888 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501892 | orchestrator | 2025-04-17 01:56:39.501897 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-17 01:56:39.501902 | orchestrator | Thursday 17 April 2025 01:55:50 +0000 (0:00:00.495) 0:11:16.959 ******** 2025-04-17 01:56:39.501907 | orchestrator | skipping: [testbed-node-3] 
2025-04-17 01:56:39.501913 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501918 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501923 | orchestrator | 2025-04-17 01:56:39.501928 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-17 01:56:39.501933 | orchestrator | Thursday 17 April 2025 01:55:50 +0000 (0:00:00.623) 0:11:17.582 ******** 2025-04-17 01:56:39.501937 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.501942 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.501947 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.501952 | orchestrator | 2025-04-17 01:56:39.501959 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-04-17 01:56:39.501963 | orchestrator | Thursday 17 April 2025 01:55:51 +0000 (0:00:00.451) 0:11:18.034 ******** 2025-04-17 01:56:39.501972 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.501976 | orchestrator | 2025-04-17 01:56:39.501981 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-04-17 01:56:39.501986 | orchestrator | Thursday 17 April 2025 01:55:51 +0000 (0:00:00.616) 0:11:18.650 ******** 2025-04-17 01:56:39.501991 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-04-17 01:56:39.501995 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-04-17 01:56:39.502000 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-04-17 01:56:39.502005 | orchestrator | 2025-04-17 01:56:39.502010 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-04-17 01:56:39.502029 | orchestrator | Thursday 17 April 2025 01:55:52 +0000 (0:00:00.638) 0:11:19.288 ******** 2025-04-17 01:56:39.502034 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-17 01:56:39.502038 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-17 01:56:39.502043 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-17 01:56:39.502048 | orchestrator | 2025-04-17 01:56:39.502052 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-04-17 01:56:39.502057 | orchestrator | Thursday 17 April 2025 01:55:54 +0000 (0:00:01.831) 0:11:21.120 ******** 2025-04-17 01:56:39.502062 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-17 01:56:39.502066 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-17 01:56:39.502071 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:56:39.502076 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-17 01:56:39.502080 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-17 01:56:39.502085 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:56:39.502090 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-17 01:56:39.502094 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-17 01:56:39.502099 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:56:39.502104 | orchestrator | 2025-04-17 01:56:39.502108 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-04-17 01:56:39.502113 | orchestrator | Thursday 17 April 2025 01:55:55 +0000 (0:00:01.395) 0:11:22.515 ******** 2025-04-17 01:56:39.502118 | 
orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.502123 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.502127 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.502132 | orchestrator | 2025-04-17 01:56:39.502137 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-04-17 01:56:39.502142 | orchestrator | Thursday 17 April 2025 01:55:56 +0000 (0:00:00.321) 0:11:22.836 ******** 2025-04-17 01:56:39.502147 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.502151 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.502156 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.502161 | orchestrator | 2025-04-17 01:56:39.502165 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-04-17 01:56:39.502170 | orchestrator | Thursday 17 April 2025 01:55:56 +0000 (0:00:00.338) 0:11:23.175 ******** 2025-04-17 01:56:39.502175 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-04-17 01:56:39.502180 | orchestrator | 2025-04-17 01:56:39.502184 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-04-17 01:56:39.502189 | orchestrator | Thursday 17 April 2025 01:55:56 +0000 (0:00:00.223) 0:11:23.398 ******** 2025-04-17 01:56:39.502194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-17 01:56:39.502201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-17 01:56:39.502206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-17 01:56:39.502214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-17 01:56:39.502219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-17 01:56:39.502224 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.502228 | orchestrator | 2025-04-17 01:56:39.502233 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-04-17 01:56:39.502238 | orchestrator | Thursday 17 April 2025 01:55:57 +0000 (0:00:00.850) 0:11:24.249 ******** 2025-04-17 01:56:39.502242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-17 01:56:39.502247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-17 01:56:39.502254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-17 01:56:39.502259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-17 01:56:39.502264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-17 01:56:39.502269 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.502273 | 
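The loop items above spell out the pool specification the ceph-rgw role is iterating over; in ceph-ansible this is the `rgw_create_pools` variable. As YAML it amounts to five small replicated pools (values copied from the log):

```yaml
rgw_create_pools:
  default.rgw.buckets.data:  { pg_num: 8, size: 3, type: replicated }
  default.rgw.buckets.index: { pg_num: 8, size: 3, type: replicated }
  default.rgw.control:       { pg_num: 8, size: 3, type: replicated }
  default.rgw.log:           { pg_num: 8, size: 3, type: replicated }
  default.rgw.meta:          { pg_num: 8, size: 3, type: replicated }
```

Every pool is `type: replicated`, which is presumably why the ec-profile, crush-rule, and ec-pool tasks all skip.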
orchestrator | 2025-04-17 01:56:39.502278 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-04-17 01:56:39.502283 | orchestrator | Thursday 17 April 2025 01:55:58 +0000 (0:00:00.853) 0:11:25.103 ******** 2025-04-17 01:56:39.502290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-17 01:56:39.502295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-17 01:56:39.502300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-17 01:56:39.502305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-17 01:56:39.502310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-17 01:56:39.502314 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.502319 | orchestrator | 2025-04-17 01:56:39.502324 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-04-17 01:56:39.502328 | orchestrator | Thursday 17 April 2025 01:55:58 +0000 (0:00:00.614) 0:11:25.717 ******** 2025-04-17 01:56:39.502333 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-17 01:56:39.502338 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-17 01:56:39.502343 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-17 01:56:39.502348 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-17 01:56:39.502352 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-17 01:56:39.502357 | orchestrator | 2025-04-17 01:56:39.502362 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-04-17 01:56:39.502385 | orchestrator | Thursday 17 April 2025 01:56:22 +0000 (0:00:23.715) 0:11:49.433 ******** 2025-04-17 01:56:39.502391 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.502395 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.502400 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.502405 | orchestrator | 2025-04-17 01:56:39.502409 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-04-17 01:56:39.502414 | orchestrator | Thursday 17 April 2025 01:56:23 +0000 (0:00:00.483) 0:11:49.916 ******** 2025-04-17 01:56:39.502419 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.502423 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.502428 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:56:39.502433 | orchestrator | 2025-04-17 01:56:39.502438 | 
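The 23.7s "create replicated pools for rgw" step above runs the creation once, delegated to the first monitor (testbed-node-0). A hedged command-loop equivalent, not ceph-ansible's actual task (the monitor group name is an assumption):

```yaml
- name: create replicated pools for rgw (sketch)
  ansible.builtin.command: >
    ceph osd pool create {{ item.key }}
    {{ item.value.pg_num }} {{ item.value.pg_num }} replicated
  loop: "{{ rgw_create_pools | dict2items }}"
  delegate_to: "{{ groups['mons'][0] }}"   # group name assumed for this sketch
  run_once: true
```

The `size: 3` from the spec would be applied separately (e.g. `ceph osd pool set <pool> size 3`); after the pools exist, the following tasks template a systemd unit plus a ceph-radosgw.target per node and start one rgw0 container on each.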
orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-04-17 01:56:39.502442 | orchestrator | Thursday 17 April 2025 01:56:23 +0000 (0:00:00.323) 0:11:50.240 ******** 2025-04-17 01:56:39.502447 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.502452 | orchestrator | 2025-04-17 01:56:39.502456 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-04-17 01:56:39.502461 | orchestrator | Thursday 17 April 2025 01:56:24 +0000 (0:00:00.533) 0:11:50.774 ******** 2025-04-17 01:56:39.502466 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:56:39.502470 | orchestrator | 2025-04-17 01:56:39.502477 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-04-17 01:56:39.502482 | orchestrator | Thursday 17 April 2025 01:56:24 +0000 (0:00:00.833) 0:11:51.607 ******** 2025-04-17 01:56:39.502487 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:56:39.502492 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:56:39.502496 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:56:39.502501 | orchestrator | 2025-04-17 01:56:39.502505 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-04-17 01:56:39.502510 | orchestrator | Thursday 17 April 2025 01:56:26 +0000 (0:00:01.237) 0:11:52.844 ******** 2025-04-17 01:56:39.502515 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:56:39.502520 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:56:39.502524 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:56:39.502529 | orchestrator | 2025-04-17 01:56:39.502534 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-04-17 01:56:39.502565 | orchestrator | Thursday 17 April 2025 01:56:27 +0000 (0:00:01.171) 0:11:54.016 ******** 2025-04-17 01:56:39.502571 | orchestrator | changed: [testbed-node-3] 2025-04-17 01:56:39.502576 | orchestrator | changed: [testbed-node-4] 2025-04-17 01:56:39.502581 | orchestrator | changed: [testbed-node-5] 2025-04-17 01:56:39.502585 | orchestrator | 2025-04-17 01:56:39.502590 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-04-17 01:56:39.502595 | orchestrator | Thursday 17 April 2025 01:56:29 +0000 (0:00:01.968) 0:11:55.984 ******** 2025-04-17 01:56:39.502599 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-04-17 01:56:39.502604 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-04-17 01:56:39.502609 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-04-17 01:56:39.502614 | orchestrator | 2025-04-17 01:56:39.502618 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-04-17 01:56:39.502623 | orchestrator | Thursday 17 April 2025 01:56:31 +0000 (0:00:01.884) 0:11:57.869 ******** 2025-04-17 01:56:39.502628 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:56:39.502632 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:56:39.502649 | 
orchestrator | skipping: [testbed-node-5]
2025-04-17 01:56:39.502654 | orchestrator |
2025-04-17 01:56:39.502659 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-04-17 01:56:39.502664 | orchestrator | Thursday 17 April 2025 01:56:32 +0000 (0:00:01.142) 0:11:59.012 ********
2025-04-17 01:56:39.502668 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.502673 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.502678 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.502683 | orchestrator |
2025-04-17 01:56:39.502687 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] **********************************
2025-04-17 01:56:39.502692 | orchestrator | Thursday 17 April 2025 01:56:32 +0000 (0:00:00.669) 0:11:59.682 ********
2025-04-17 01:56:39.502697 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-17 01:56:39.502704 | orchestrator |
2025-04-17 01:56:39.502709 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ********
2025-04-17 01:56:39.502714 | orchestrator | Thursday 17 April 2025 01:56:33 +0000 (0:00:00.764) 0:12:00.447 ********
2025-04-17 01:56:39.502718 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.502723 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.502728 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.502733 | orchestrator |
2025-04-17 01:56:39.502737 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] ***********************
2025-04-17 01:56:39.502742 | orchestrator | Thursday 17 April 2025 01:56:33 +0000 (0:00:00.317) 0:12:00.764 ********
2025-04-17 01:56:39.502747 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.502751 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.502756 | orchestrator | changed: [testbed-node-5]
2025-04-17 01:56:39.502761 | orchestrator |
2025-04-17 01:56:39.502765 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ********************
2025-04-17 01:56:39.502770 | orchestrator | Thursday 17 April 2025 01:56:35 +0000 (0:00:01.112) 0:12:01.997 ********
2025-04-17 01:56:39.502775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-04-17 01:56:39.502779 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-04-17 01:56:39.502784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-04-17 01:56:39.502789 | orchestrator | skipping: [testbed-node-3]
2025-04-17 01:56:39.502794 | orchestrator |
2025-04-17 01:56:39.502798 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] *********
2025-04-17 01:56:39.502803 | orchestrator | Thursday 17 April 2025 01:56:36 +0000 (0:00:00.339) 0:12:03.109 ********
2025-04-17 01:56:39.502808 | orchestrator | ok: [testbed-node-3]
2025-04-17 01:56:39.502812 | orchestrator | ok: [testbed-node-4]
2025-04-17 01:56:39.502817 | orchestrator | ok: [testbed-node-5]
2025-04-17 01:56:39.502822 | orchestrator |
2025-04-17 01:56:39.502826 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-04-17 01:56:39.502831 | orchestrator | Thursday 17 April 2025 01:56:36 +0000 (0:00:00.339) 0:12:03.449 ********
2025-04-17 01:56:39.502836 | orchestrator | changed: [testbed-node-3]
2025-04-17 01:56:39.502840 | orchestrator | changed: [testbed-node-4]
2025-04-17 01:56:39.502845 | orchestrator | changed: [testbed-node-5]
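The handler block above follows ceph-ansible's restart pattern: stage a restart script in a tempdir, run it only where a restart is actually due, then clean up. Schematically (a sketch, not the role source; the template name and the gating variable are assumptions):

```yaml
- name: make tempdir for scripts
  ansible.builtin.tempfile:
    state: directory
  register: restart_tmp

- name: copy rgw restart script
  ansible.builtin.template:
    src: restart_rgw_daemon.sh.j2          # template name assumed
    dest: "{{ restart_tmp.path }}/restart_rgw_daemon.sh"
    mode: "0750"

- name: restart ceph rgw daemon(s)
  ansible.builtin.command: "{{ restart_tmp.path }}/restart_rgw_daemon.sh"
  when: rgw_needs_restart | default(false)  # gate assumed for this sketch

- name: remove tempdir for scripts
  ansible.builtin.file:
    path: "{{ restart_tmp.path }}"
    state: absent
```

In this run the restart itself is skipped (the rgw0 containers were started moments earlier), so only the tempdir bookkeeping reports as changed.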
2025-04-17 01:56:39.502850 | orchestrator |
2025-04-17 01:56:39.502854 | orchestrator | PLAY RECAP *********************************************************************
2025-04-17 01:56:39.502859 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0
2025-04-17 01:56:39.502865 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0
2025-04-17 01:56:39.502870 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0
2025-04-17 01:56:39.502875 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0
2025-04-17 01:56:39.502884 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0
2025-04-17 01:56:39.502889 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0
2025-04-17 01:56:39.502893 | orchestrator |
2025-04-17 01:56:39.502898 | orchestrator |
2025-04-17 01:56:39.502903 | orchestrator |
2025-04-17 01:56:39.502910 | orchestrator | TASKS RECAP ********************************************************************
2025-04-17 01:56:42.512442 | orchestrator | Thursday 17 April 2025 01:56:37 +0000 (0:00:01.224) 0:12:04.674 ********
2025-04-17 01:56:42.512662 | orchestrator | ===============================================================================
2025-04-17 01:56:42.512696 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 39.87s
2025-04-17 01:56:42.512720 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 29.67s
2025-04-17 01:56:42.512745 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 23.72s
2025-04-17 01:56:42.512766 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... ------------ 21.48s
2025-04-17 01:56:42.512820 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.16s
2025-04-17 01:56:42.512842 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.46s
2025-04-17 01:56:42.512862 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.37s
2025-04-17 01:56:42.512883 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 8.11s
2025-04-17 01:56:42.512904 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.35s
2025-04-17 01:56:42.512924 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 6.50s
2025-04-17 01:56:42.512945 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.49s
2025-04-17 01:56:42.512965 | orchestrator | ceph-config : create ceph initial directories --------------------------- 5.66s
2025-04-17 01:56:42.512984 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 4.84s
2025-04-17 01:56:42.513004 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 4.15s
2025-04-17 01:56:42.513024 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 3.84s
2025-04-17 01:56:42.513044 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 3.83s
2025-04-17 01:56:42.513065 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 3.38s
2025-04-17 01:56:42.513085 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 3.34s
2025-04-17 01:56:42.513105 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 3.33s
2025-04-17 01:56:42.513126 | orchestrator | ceph-osd : unset noup flag ---------------------------------------------- 3.00s
2025-04-17 01:56:42.513146 | orchestrator | 2025-04-17 01:56:39 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:56:42.513169 | orchestrator | 2025-04-17 01:56:39 | INFO  | Task c72f7546-109f-4112-a626-0d0d86023410 is in state STARTED
2025-04-17 01:56:42.513202 | orchestrator | 2025-04-17 01:56:39 | INFO  | Task c6ea957c-99b1-450e-ad2c-51f8109f4f70 is in state STARTED
2025-04-17 01:56:42.513225 | orchestrator | 2025-04-17 01:56:39 | INFO  | Wait 1 second(s) until the next check
2025-04-17 01:56:42.513269 | orchestrator | 2025-04-17 01:56:42 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:56:42.519396 | orchestrator | 2025-04-17 01:56:42 | INFO  | Task c72f7546-109f-4112-a626-0d0d86023410 is in state STARTED
2025-04-17 01:56:42.520850 | orchestrator | 2025-04-17 01:56:42 | INFO  | Task c6ea957c-99b1-450e-ad2c-51f8109f4f70 is in state STARTED
2025-04-17 01:56:45.581493 | orchestrator | 2025-04-17 01:56:42 | INFO  | Wait 1 second(s) until the next check
2025-04-17 01:56:45.581690 | orchestrator | 2025-04-17 01:56:45 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:56:45.583503 | orchestrator | 2025-04-17 01:56:45 | INFO  | Task c72f7546-109f-4112-a626-0d0d86023410 is in state STARTED
2025-04-17 01:56:45.585886 | orchestrator | 2025-04-17 01:56:45 | INFO  | Task c6ea957c-99b1-450e-ad2c-51f8109f4f70 is in state STARTED
2025-04-17 01:56:45.586286 | orchestrator | 2025-04-17 01:56:45 | INFO  | Wait 1 second(s) until the next check
2025-04-17 01:56:48.626360 | orchestrator | 2025-04-17 01:56:48 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:56:48.628184 | orchestrator | 2025-04-17 01:56:48 | INFO  | Task c72f7546-109f-4112-a626-0d0d86023410 is in state STARTED
2025-04-17 01:56:48.629082 | orchestrator | 2025-04-17 01:56:48 | INFO  | Task c6ea957c-99b1-450e-ad2c-51f8109f4f70 is in state STARTED
2025-04-17 01:56:51.674710 | orchestrator | 2025-04-17 01:56:48 | INFO  | Wait 1 second(s) until the next check
2025-04-17 01:56:51.674856 | orchestrator | 2025-04-17 01:56:51 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:56:51.675821 | orchestrator | 2025-04-17 01:56:51 | INFO  | Task c72f7546-109f-4112-a626-0d0d86023410 is in state SUCCESS
2025-04-17 01:56:51.677604 | orchestrator |
2025-04-17 01:56:51.677644 | orchestrator |
2025-04-17 01:56:51.677660 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-04-17 01:56:51.677674 | orchestrator |
2025-04-17 01:56:51.677689 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-04-17 01:56:51.677704 | orchestrator | Thursday 17 April 2025 01:53:24 +0000 (0:00:00.156) 0:00:00.156 ********
2025-04-17 01:56:51.677718 | orchestrator | ok: [localhost] => {
2025-04-17 01:56:51.677734 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-04-17 01:56:51.677748 | orchestrator | }
2025-04-17 01:56:51.677762 | orchestrator |
2025-04-17 01:56:51.677775 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-04-17 01:56:51.677789 | orchestrator | Thursday 17 April 2025 01:53:24 +0000 (0:00:00.043) 0:00:00.200 ********
2025-04-17 01:56:51.677803 | orchestrator | fatal: [localhost]: FAILED!
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-04-17 01:56:51.677819 | orchestrator | ...ignoring 2025-04-17 01:56:51.677833 | orchestrator | 2025-04-17 01:56:51.677846 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-04-17 01:56:51.677860 | orchestrator | Thursday 17 April 2025 01:53:27 +0000 (0:00:02.556) 0:00:02.757 ******** 2025-04-17 01:56:51.677874 | orchestrator | skipping: [localhost] 2025-04-17 01:56:51.677887 | orchestrator | 2025-04-17 01:56:51.677901 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-04-17 01:56:51.677915 | orchestrator | Thursday 17 April 2025 01:53:27 +0000 (0:00:00.074) 0:00:02.832 ******** 2025-04-17 01:56:51.677929 | orchestrator | ok: [localhost] 2025-04-17 01:56:51.677964 | orchestrator | 2025-04-17 01:56:51.677978 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-17 01:56:51.677992 | orchestrator | 2025-04-17 01:56:51.678006 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-17 01:56:51.678071 | orchestrator | Thursday 17 April 2025 01:53:27 +0000 (0:00:00.161) 0:00:02.993 ******** 2025-04-17 01:56:51.678089 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:51.678103 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:51.678117 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:51.678131 | orchestrator | 2025-04-17 01:56:51.678144 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-17 01:56:51.678158 | orchestrator | Thursday 17 April 2025 01:53:28 +0000 (0:00:00.424) 0:00:03.418 ******** 2025-04-17 01:56:51.678201 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-04-17 01:56:51.678230 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-04-17 01:56:51.678245 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-04-17 01:56:51.678259 | orchestrator | 2025-04-17 01:56:51.678273 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-04-17 01:56:51.678287 | orchestrator | 2025-04-17 01:56:51.678301 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-04-17 01:56:51.678321 | orchestrator | Thursday 17 April 2025 01:53:28 +0000 (0:00:00.397) 0:00:03.816 ******** 2025-04-17 01:56:51.678335 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-17 01:56:51.678349 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-04-17 01:56:51.678363 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-04-17 01:56:51.678376 | orchestrator | 2025-04-17 01:56:51.678390 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-04-17 01:56:51.678404 | orchestrator | Thursday 17 April 2025 01:53:29 +0000 (0:00:00.598) 0:00:04.414 ******** 2025-04-17 01:56:51.678418 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:56:51.678433 | orchestrator | 2025-04-17 01:56:51.678548 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-04-17 01:56:51.678568 | orchestrator | Thursday 17 April 2025 01:53:29 +0000 (0:00:00.818) 0:00:05.233 ******** 2025-04-17 
01:56:51.678602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-17 01:56:51.678623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', 
' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-17 01:56:51.678744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-17 01:56:51.678765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-17 01:56:51.678781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-17 01:56:51.678803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 
'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-17 01:56:51.678817 | orchestrator | 2025-04-17 01:56:51.678832 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-04-17 01:56:51.678845 | orchestrator | Thursday 17 April 2025 01:53:34 +0000 (0:00:04.324) 0:00:09.557 ******** 2025-04-17 01:56:51.678859 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:51.678874 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:51.678888 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:51.678902 | orchestrator | 2025-04-17 01:56:51.678916 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-04-17 01:56:51.678930 | orchestrator | Thursday 17 April 2025 01:53:34 +0000 (0:00:00.655) 0:00:10.213 ******** 2025-04-17 01:56:51.678943 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:51.678957 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:51.678977 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:51.678991 | orchestrator | 2025-04-17 01:56:51.679005 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-04-17 01:56:51.679018 | orchestrator | Thursday 17 April 2025 01:53:36 +0000 (0:00:01.304) 0:00:11.518 ******** 2025-04-17 01:56:51.679041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-17 01:56:51.679058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-17 01:56:51.679080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-17 01:56:51.679104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-17 01:56:51.679128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-17 01:56:51.679143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-17 01:56:51.679157 | orchestrator | 2025-04-17 01:56:51.679171 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-04-17 01:56:51.679184 | orchestrator | Thursday 17 April 2025 01:53:41 +0000 (0:00:05.197) 0:00:16.715 ******** 2025-04-17 01:56:51.679198 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:51.679212 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:51.679226 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:51.679239 | orchestrator | 2025-04-17 01:56:51.679253 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-04-17 01:56:51.679267 | orchestrator | Thursday 17 April 2025 01:53:42 +0000 (0:00:01.300) 0:00:18.015 ******** 2025-04-17 01:56:51.679280 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:51.679294 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:51.679308 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:51.679322 | orchestrator | 2025-04-17 01:56:51.679335 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-04-17 01:56:51.679349 | orchestrator | Thursday 17 April 2025 01:53:50 +0000 (0:00:07.959) 0:00:25.975 
******** 2025-04-17 01:56:51.679374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-17 01:56:51.679400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 
2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-17 01:56:51.679419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-17 01:56:51.679444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-17 01:56:51.679468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-17 01:56:51.679482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-17 01:56:51.679497 | orchestrator | 2025-04-17 01:56:51.679510 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-04-17 01:56:51.679559 | orchestrator | Thursday 17 April 2025 01:53:54 +0000 (0:00:03.612) 0:00:29.587 ******** 2025-04-17 01:56:51.679575 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:51.679589 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:51.679603 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:51.679617 | orchestrator | 2025-04-17 01:56:51.679630 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-04-17 01:56:51.679644 | orchestrator | Thursday 17 April 2025 01:53:55 +0000 (0:00:01.325) 0:00:30.913 ******** 2025-04-17 01:56:51.679658 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:51.679672 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:51.679685 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:51.679699 | orchestrator | 2025-04-17 01:56:51.679713 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-04-17 01:56:51.679726 | orchestrator | Thursday 17 April 2025 01:53:56 +0000 (0:00:00.481) 0:00:31.395 ******** 2025-04-17 01:56:51.679740 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:51.679753 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:51.679767 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:51.679780 | orchestrator | 2025-04-17 01:56:51.679794 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-04-17 01:56:51.679808 | orchestrator | Thursday 17 April 2025 01:53:56 +0000 (0:00:00.563) 0:00:31.958 ******** 2025-04-17 01:56:51.679822 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-04-17 01:56:51.679836 | orchestrator | ...ignoring 2025-04-17 01:56:51.679850 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-04-17 01:56:51.679880 | orchestrator | ...ignoring 2025-04-17 01:56:51.679894 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-04-17 01:56:51.679908 | orchestrator | ...ignoring 2025-04-17 01:56:51.679921 | orchestrator | 2025-04-17 01:56:51.679935 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-04-17 01:56:51.679949 | orchestrator | Thursday 17 April 2025 01:54:07 +0000 (0:00:10.881) 0:00:42.839 ******** 2025-04-17 01:56:51.679963 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:51.679976 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:51.679989 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:51.680003 | orchestrator | 2025-04-17 01:56:51.680016 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-04-17 01:56:51.680030 | orchestrator | Thursday 17 April 2025 01:54:08 +0000 (0:00:00.601) 0:00:43.441 ******** 2025-04-17 01:56:51.680044 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:51.680058 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:51.680071 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:51.680085 | orchestrator | 2025-04-17 01:56:51.680098 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-04-17 01:56:51.680112 | orchestrator | Thursday 17 April 2025 01:54:08 +0000 (0:00:00.667) 0:00:44.109 ******** 2025-04-17 01:56:51.680126 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:51.680139 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:51.680153 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:51.680166 | orchestrator | 2025-04-17 01:56:51.680191 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-04-17 01:56:51.680206 | orchestrator | Thursday 17 April 2025 01:54:09 +0000 (0:00:00.430) 0:00:44.539 ******** 2025-04-17 01:56:51.680219 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:51.680233 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:51.680247 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:51.680261 | orchestrator | 2025-04-17 01:56:51.680274 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-04-17 01:56:51.680288 | orchestrator | Thursday 17 April 2025 01:54:09 +0000 (0:00:00.661) 0:00:45.200 ******** 2025-04-17 01:56:51.680302 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:51.680315 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:51.680329 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:51.680343 | orchestrator | 2025-04-17 01:56:51.680357 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-04-17 01:56:51.680371 | orchestrator | Thursday 17 April 2025 01:54:10 +0000 (0:00:00.565) 0:00:45.766 ******** 2025-04-17 01:56:51.680385 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:51.680398 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:51.680412 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:51.680426 | orchestrator | 2025-04-17 01:56:51.680439 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-04-17 01:56:51.680453 | orchestrator | Thursday 17 April 2025 01:54:11 +0000 (0:00:00.531) 0:00:46.297 ******** 2025-04-17 01:56:51.680467 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:51.680481 | orchestrator | skipping: 
[testbed-node-2] 2025-04-17 01:56:51.680494 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-04-17 01:56:51.680508 | orchestrator | 2025-04-17 01:56:51.680522 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-04-17 01:56:51.680592 | orchestrator | Thursday 17 April 2025 01:54:11 +0000 (0:00:00.481) 0:00:46.779 ******** 2025-04-17 01:56:51.680607 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:51.680621 | orchestrator | 2025-04-17 01:56:51.680634 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-04-17 01:56:51.680648 | orchestrator | Thursday 17 April 2025 01:54:22 +0000 (0:00:10.606) 0:00:57.386 ******** 2025-04-17 01:56:51.680661 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:51.680683 | orchestrator | 2025-04-17 01:56:51.680697 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-04-17 01:56:51.680711 | orchestrator | Thursday 17 April 2025 01:54:22 +0000 (0:00:00.241) 0:00:57.628 ******** 2025-04-17 01:56:51.680725 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:51.680738 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:51.680752 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:51.680765 | orchestrator | 2025-04-17 01:56:51.680779 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-04-17 01:56:51.680793 | orchestrator | Thursday 17 April 2025 01:54:23 +0000 (0:00:01.624) 0:00:59.252 ******** 2025-04-17 01:56:51.680806 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:51.680820 | orchestrator | 2025-04-17 01:56:51.680834 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-04-17 01:56:51.680847 | orchestrator | Thursday 17 April 2025 01:54:34 +0000 (0:00:10.861) 0:01:10.114 ******** 2025-04-17 01:56:51.680861 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
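The three ignored liveness failures above are the expected first-deploy signal: nothing listens on 3306 yet, so every host lands in the "port dead" group and testbed-node-0 is routed into bootstrap_cluster.yml. The probe itself is an Ansible wait_for that connects to the port and searches the banner for the string MariaDB. A stdlib-only Python sketch of the same check (illustrative names and timeouts, not the role's code):

```python
import re
import socket
import time

def mariadb_port_alive(host: str, port: int = 3306, timeout: float = 10.0) -> bool:
    """Approximate the role's wait_for probe: connect to the MySQL port and
    search the server greeting for 'MariaDB'. The server speaks first, and
    its handshake packet embeds the version string, e.g. '5.5.5-10.11.10-MariaDB'."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2) as sock:
                if re.search(rb"MariaDB", sock.recv(1024)):
                    return True
        except OSError:
            pass  # refused/reset/timed out -- keep retrying until the deadline
        time.sleep(1)
    return False

# On a fresh deployment all three probes fail; the play ignores the failures
# and instead uses them to decide that the cluster still has to be bootstrapped.
for node in ("192.168.16.10", "192.168.16.11", "192.168.16.12"):
    print(node, mariadb_port_alive(node, timeout=3))
```

The 'MariaDB' marker works because on a MySQL-protocol connection the server sends the first packet, so a plain socket read is enough to distinguish a live Galera node from a closed or half-started port.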
2025-04-17 01:56:51.680875 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:51.680888 | orchestrator | 2025-04-17 01:56:51.680902 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-04-17 01:56:51.680915 | orchestrator | Thursday 17 April 2025 01:54:41 +0000 (0:00:07.131) 0:01:17.245 ******** 2025-04-17 01:56:51.680929 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:51.680942 | orchestrator | 2025-04-17 01:56:51.680956 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-04-17 01:56:51.680970 | orchestrator | Thursday 17 April 2025 01:54:44 +0000 (0:00:02.471) 0:01:19.716 ******** 2025-04-17 01:56:51.680983 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:51.680996 | orchestrator | 2025-04-17 01:56:51.681010 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-04-17 01:56:51.681024 | orchestrator | Thursday 17 April 2025 01:54:44 +0000 (0:00:00.108) 0:01:19.825 ******** 2025-04-17 01:56:51.681037 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:51.681051 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:51.681065 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:51.681078 | orchestrator | 2025-04-17 01:56:51.681092 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-04-17 01:56:51.681105 | orchestrator | Thursday 17 April 2025 01:54:44 +0000 (0:00:00.442) 0:01:20.268 ******** 2025-04-17 01:56:51.681117 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:51.681129 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:51.681141 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:51.681159 | orchestrator | 2025-04-17 01:56:51.681171 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-04-17 01:56:51.681183 | orchestrator | Thursday 17 April 2025 01:54:45 +0000 (0:00:00.443) 0:01:20.711 ******** 2025-04-17 01:56:51.681195 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-04-17 01:56:51.681208 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:51.681220 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:51.681232 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:51.681244 | orchestrator | 2025-04-17 01:56:51.681256 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-04-17 01:56:51.681268 | orchestrator | skipping: no hosts matched 2025-04-17 01:56:51.681280 | orchestrator | 2025-04-17 01:56:51.681292 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-04-17 01:56:51.681303 | orchestrator | 2025-04-17 01:56:51.681320 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-04-17 01:56:51.681333 | orchestrator | Thursday 17 April 2025 01:55:03 +0000 (0:00:17.654) 0:01:38.366 ******** 2025-04-17 01:56:51.681345 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:56:51.681357 | orchestrator | 2025-04-17 01:56:51.681375 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-04-17 01:56:51.681394 | orchestrator | Thursday 17 April 2025 01:55:17 +0000 (0:00:14.472) 0:01:52.838 ******** 2025-04-17 01:56:51.681406 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:51.681419 | orchestrator | 
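From this point the rollout is strictly serialised: the bootstrap host is brought up and checked, the joiners are started against it, and the later "Restart MariaDB container" plays each touch a single node, waiting for port liveness and WSREP sync before the next node restarts, so the Galera cluster never loses quorum. The sync gate is Galera's wsrep_local_state_comment status reaching Synced; a hypothetical standalone checker using the monitor credentials from the container environment (the mysql CLI invocation is an assumption, not the role's actual task):

```python
import subprocess
import time

def wait_wsrep_synced(host: str, user: str, password: str, timeout: int = 360) -> bool:
    """Poll SHOW STATUS LIKE 'wsrep_local_state_comment' until the node
    reports 'Synced' (a joiner still in state transfer shows 'Joined' or
    'Donor/Desynced' instead)."""
    query = "SHOW STATUS LIKE 'wsrep_local_state_comment'"
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        proc = subprocess.run(
            ["mysql", "-h", host, "-u", user, f"-p{password}",
             "--batch", "--skip-column-names", "-e", query],
            capture_output=True, text=True)
        # --batch prints "wsrep_local_state_comment<TAB>Synced" once caught up
        if proc.returncode == 0 and proc.stdout.strip().endswith("Synced"):
            return True
        time.sleep(2)
    return False

if __name__ == "__main__":
    print(wait_wsrep_synced("192.168.16.11", "monitor", "<password>", timeout=30))
```

Waiting on this state rather than just the open port matters: a node can accept connections while still pulling state transfer from its donor.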
2025-04-17 01:56:51.681431 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-04-17 01:56:51.681443 | orchestrator | Thursday 17 April 2025 01:55:38 +0000 (0:00:20.552) 0:02:13.391 ******** 2025-04-17 01:56:51.681456 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:51.681468 | orchestrator | 2025-04-17 01:56:51.681480 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-04-17 01:56:51.681492 | orchestrator | 2025-04-17 01:56:51.681505 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-04-17 01:56:51.681517 | orchestrator | Thursday 17 April 2025 01:55:40 +0000 (0:00:02.555) 0:02:15.946 ******** 2025-04-17 01:56:51.681547 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:56:51.681560 | orchestrator | 2025-04-17 01:56:51.681572 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-04-17 01:56:51.681584 | orchestrator | Thursday 17 April 2025 01:55:53 +0000 (0:00:12.910) 0:02:28.857 ******** 2025-04-17 01:56:51.681596 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:51.681609 | orchestrator | 2025-04-17 01:56:51.681621 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-04-17 01:56:51.681633 | orchestrator | Thursday 17 April 2025 01:56:14 +0000 (0:00:20.521) 0:02:49.379 ******** 2025-04-17 01:56:51.681645 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:51.681657 | orchestrator | 2025-04-17 01:56:51.681669 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-04-17 01:56:51.681681 | orchestrator | 2025-04-17 01:56:51.681693 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-04-17 01:56:51.681705 | orchestrator | Thursday 17 April 2025 01:56:16 +0000 (0:00:02.574) 0:02:51.953 ******** 2025-04-17 01:56:51.681717 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:51.681730 | orchestrator | 2025-04-17 01:56:51.681742 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-04-17 01:56:51.681754 | orchestrator | Thursday 17 April 2025 01:56:29 +0000 (0:00:12.461) 0:03:04.415 ******** 2025-04-17 01:56:51.681766 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:51.681778 | orchestrator | 2025-04-17 01:56:51.681790 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-04-17 01:56:51.681802 | orchestrator | Thursday 17 April 2025 01:56:33 +0000 (0:00:04.530) 0:03:08.946 ******** 2025-04-17 01:56:51.681815 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:51.681827 | orchestrator | 2025-04-17 01:56:51.681839 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-04-17 01:56:51.681851 | orchestrator | 2025-04-17 01:56:51.681863 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-04-17 01:56:51.681875 | orchestrator | Thursday 17 April 2025 01:56:36 +0000 (0:00:02.590) 0:03:11.536 ******** 2025-04-17 01:56:51.681887 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:56:51.681899 | orchestrator | 2025-04-17 01:56:51.681911 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-04-17 01:56:51.681923 | orchestrator | Thursday 17 
April 2025 01:56:36 +0000 (0:00:00.734) 0:03:12.270 ******** 2025-04-17 01:56:51.681935 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:51.681947 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:51.681959 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:51.681971 | orchestrator | 2025-04-17 01:56:51.681984 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-04-17 01:56:51.681995 | orchestrator | Thursday 17 April 2025 01:56:39 +0000 (0:00:02.587) 0:03:14.857 ******** 2025-04-17 01:56:51.682007 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:51.682048 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:51.682062 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:51.682082 | orchestrator | 2025-04-17 01:56:51.682095 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-04-17 01:56:51.682108 | orchestrator | Thursday 17 April 2025 01:56:41 +0000 (0:00:02.118) 0:03:16.976 ******** 2025-04-17 01:56:51.682121 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:51.682135 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:51.682148 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:51.682161 | orchestrator | 2025-04-17 01:56:51.682174 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-04-17 01:56:51.682187 | orchestrator | Thursday 17 April 2025 01:56:44 +0000 (0:00:02.370) 0:03:19.346 ******** 2025-04-17 01:56:51.682200 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:51.682213 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:51.682226 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:56:51.682239 | orchestrator | 2025-04-17 01:56:51.682256 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-04-17 01:56:51.682270 | orchestrator | Thursday 17 April 2025 01:56:46 +0000 (0:00:02.220) 0:03:21.566 ******** 2025-04-17 01:56:51.682283 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:56:51.682296 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:56:51.682309 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:56:51.682322 | orchestrator | 2025-04-17 01:56:51.682335 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-04-17 01:56:51.682348 | orchestrator | Thursday 17 April 2025 01:56:49 +0000 (0:00:03.289) 0:03:24.856 ******** 2025-04-17 01:56:51.682361 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:56:51.682374 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:56:51.682387 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:56:51.682400 | orchestrator | 2025-04-17 01:56:51.682413 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 01:56:51.682426 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-04-17 01:56:51.682440 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-04-17 01:56:51.682461 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-04-17 01:56:54.735014 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-04-17 01:56:54.735121 | orchestrator | 2025-04-17 01:56:54.735140 | orchestrator | 2025-04-17 
01:56:54.735155 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-17 01:56:54.735170 | orchestrator | Thursday 17 April 2025 01:56:49 +0000 (0:00:00.348) 0:03:25.205 ******** 2025-04-17 01:56:54.735184 | orchestrator | =============================================================================== 2025-04-17 01:56:54.735198 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.07s 2025-04-17 01:56:54.735211 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 27.38s 2025-04-17 01:56:54.735225 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 17.65s 2025-04-17 01:56:54.735239 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.46s 2025-04-17 01:56:54.735252 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.88s 2025-04-17 01:56:54.735266 | orchestrator | mariadb : Starting first MariaDB container ----------------------------- 10.86s 2025-04-17 01:56:54.735279 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.61s 2025-04-17 01:56:54.735292 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 7.96s 2025-04-17 01:56:54.735306 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.13s 2025-04-17 01:56:54.735319 | orchestrator | mariadb : Copying over config.json files for services ------------------- 5.20s 2025-04-17 01:56:54.735359 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.13s 2025-04-17 01:56:54.735374 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.53s 2025-04-17 01:56:54.735387 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.32s 2025-04-17 01:56:54.735401 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.61s 2025-04-17 01:56:54.735415 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.29s 2025-04-17 01:56:54.735428 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.59s 2025-04-17 01:56:54.735442 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.59s 2025-04-17 01:56:54.735456 | orchestrator | Check MariaDB service --------------------------------------------------- 2.56s 2025-04-17 01:56:54.735469 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.47s 2025-04-17 01:56:54.735483 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.37s 2025-04-17 01:56:54.735497 | orchestrator | 2025-04-17 01:56:51 | INFO  | Task c6ea957c-99b1-450e-ad2c-51f8109f4f70 is in state STARTED 2025-04-17 01:56:54.735512 | orchestrator | 2025-04-17 01:56:51 | INFO  | Task 66c69db3-e21c-4f90-835d-5a7791d2f010 is in state STARTED 2025-04-17 01:56:54.735575 | orchestrator | 2025-04-17 01:56:51 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state STARTED 2025-04-17 01:56:54.735592 | orchestrator | 2025-04-17 01:56:51 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:56:54.735623 | orchestrator | 2025-04-17 01:56:54 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:56:57.764755 | orchestrator | 2025-04-17 01:56:54 
| INFO  | Task c6ea957c-99b1-450e-ad2c-51f8109f4f70 is in state STARTED 2025-04-17 01:56:57.764986 | orchestrator | 2025-04-17 01:56:54 | INFO  | Task 66c69db3-e21c-4f90-835d-5a7791d2f010 is in state STARTED 2025-04-17 01:56:57.765015 | orchestrator | 2025-04-17 01:56:54 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state STARTED 2025-04-17 01:56:57.765030 | orchestrator | 2025-04-17 01:56:54 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:58:23.130294 | orchestrator | 2025-04-17 01:58:23 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:58:23.131392 | orchestrator | 2025-04-17 01:58:23 | INFO  | Task c6ea957c-99b1-450e-ad2c-51f8109f4f70 is in state STARTED 2025-04-17 01:58:23.132336 | orchestrator | 2025-04-17 01:58:23 | INFO  | Task 66c69db3-e21c-4f90-835d-5a7791d2f010 is in state SUCCESS 2025-04-17 01:58:23.133945 | orchestrator | 2025-04-17 01:58:23.133982 | orchestrator | 2025-04-17 01:58:23.133996 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-17 01:58:23.134012 | orchestrator | 2025-04-17 01:58:23.134068 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-17 01:58:23.134082 | orchestrator | Thursday 17 April 2025 01:56:53 +0000 (0:00:00.381) 0:00:00.381 ******** 2025-04-17 01:58:23.134094 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:58:23.134109 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:58:23.134121 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:58:23.134134 | orchestrator | 2025-04-17 01:58:23.134146 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-17 01:58:23.134177 | orchestrator | Thursday 17 April 2025 01:56:53 +0000 (0:00:00.336) 0:00:00.718 ******** 2025-04-17 01:58:23.134190 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-04-17 01:58:23.134203 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-04-17 01:58:23.134215 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-04-17 01:58:23.134228 | orchestrator | 2025-04-17 01:58:23.134240 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-04-17 01:58:23.134252 | orchestrator | 2025-04-17 01:58:23.134264 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-17 01:58:23.134276 | orchestrator | Thursday 17 April 2025 01:56:54 +0000 (0:00:00.264) 0:00:00.982 ******** 2025-04-17 01:58:23.134311 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:58:23.134579 | orchestrator | 2025-04-17 01:58:23.134597 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-04-17 
01:58:23.134610 | orchestrator | Thursday 17 April 2025 01:56:54 +0000 (0:00:00.596) 0:00:01.579 ******** 2025-04-17 01:58:23.134628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-17 01:58:23.134658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-17 01:58:23.134683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-17 01:58:23.134697 | orchestrator | 2025-04-17 01:58:23.134710 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-04-17 01:58:23.134722 | orchestrator | Thursday 17 April 2025 01:56:55 +0000 (0:00:01.153) 0:00:02.733 ******** 2025-04-17 01:58:23.134735 | orchestrator | ok: [testbed-node-0] 2025-04-17 
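
The item echoed three times above (once per target node) is the complete kolla-ansible service definition for horizon that the "Ensuring config directories exist" loop iterates over; the only per-node difference is the healthcheck address. A condensed, runnable sketch of its shape, reconstructed from this log and abbreviated for readability:

```python
# Skeleton of the horizon service definition looped over above.
# Reconstructed from the log output; abbreviated, not the full dict.
horizon_service = {
    "container_name": "horizon",
    "group": "horizon",
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/horizon:24.0.1.20241206",
    # One ENABLE_* switch per optional dashboard plugin (abbreviated here).
    "environment": {"ENABLE_DESIGNATE": "yes", "ENABLE_HEAT": "yes", "ENABLE_OCTAVIA": "yes"},
    "volumes": [
        "/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    # The only per-node difference: testbed-node-0/1/2 probe
    # 192.168.16.10/.11/.12 respectively.
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:80"],
        "timeout": "30",
    },
    # HAProxy exposes horizon on port 443 for both the internal and the
    # external (api.testbed.osism.xyz) VIP, adds port-80 redirect
    # listeners, and routes ACME HTTP-01 challenge paths
    # (/.well-known/acme-challenge/...) to the acme_client backend.
    "haproxy": {
        "horizon": {"enabled": True, "mode": "http", "external": False,
                    "port": "443", "listen_port": "80", "tls_backend": "no"},
        "horizon_external": {"enabled": True, "mode": "http", "external": True,
                             "external_fqdn": "api.testbed.osism.xyz",
                             "port": "443", "listen_port": "80"},
    },
}
```

Printing the full item on every loop iteration and for every host is what makes this stretch of the log so bulky; the configuration itself is identical across the three nodes.
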
01:58:23.134747 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:58:23.134760 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:58:23.134772 | orchestrator | 2025-04-17 01:58:23.134784 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-17 01:58:23.134796 | orchestrator | Thursday 17 April 2025 01:56:56 +0000 (0:00:00.254) 0:00:02.987 ******** 2025-04-17 01:58:23.134816 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-04-17 01:58:23.134830 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-04-17 01:58:23.134842 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-04-17 01:58:23.134855 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-04-17 01:58:23.134875 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-04-17 01:58:23.134888 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-04-17 01:58:23.134900 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-04-17 01:58:23.134912 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-04-17 01:58:23.134924 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-04-17 01:58:23.134936 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-04-17 01:58:23.134949 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-04-17 01:58:23.134961 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-04-17 01:58:23.134973 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-04-17 01:58:23.134985 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-04-17 01:58:23.134997 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-04-17 01:58:23.135009 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-04-17 01:58:23.135021 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-04-17 01:58:23.135034 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-04-17 01:58:23.135046 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-04-17 01:58:23.135058 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-04-17 01:58:23.135070 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-04-17 01:58:23.135083 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-04-17 01:58:23.135104 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-04-17 01:58:23.135117 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-04-17 01:58:23.135130 | 
orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-04-17 01:58:23.135142 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-04-17 01:58:23.135155 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-04-17 01:58:23.135168 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-04-17 01:58:23.135180 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-04-17 01:58:23.135194 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-04-17 01:58:23.135207 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-04-17 01:58:23.135221 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-04-17 01:58:23.135242 | orchestrator | 2025-04-17 01:58:23.135261 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-17 01:58:23.135275 | orchestrator | Thursday 17 April 2025 01:56:56 +0000 (0:00:00.712) 0:00:03.700 ******** 2025-04-17 01:58:23.135289 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:58:23.135303 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:58:23.135317 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:58:23.135331 | orchestrator | 2025-04-17 01:58:23.135344 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-17 01:58:23.135358 | orchestrator | Thursday 17 April 2025 01:56:57 +0000 (0:00:00.271) 0:00:03.971 ******** 2025-04-17 01:58:23.135371 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.135385 | orchestrator | 2025-04-17 01:58:23.135404 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-17 01:58:23.135418 | orchestrator | Thursday 17 April 2025 01:56:57 +0000 (0:00:00.093) 0:00:04.065 ******** 2025-04-17 01:58:23.135432 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.135483 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:58:23.135507 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:58:23.135531 | orchestrator | 2025-04-17 01:58:23.135545 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-17 01:58:23.135557 | orchestrator | Thursday 17 April 2025 01:56:57 +0000 (0:00:00.265) 0:00:04.331 ******** 2025-04-17 01:58:23.135569 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:58:23.135582 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:58:23.135594 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:58:23.135606 | orchestrator | 2025-04-17 01:58:23.135618 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-17 01:58:23.135630 | 
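
The skipping/included pattern above is a per-service include loop: items with a falsy enabled flag are skipped, and each enabled service pulls in /ansible/roles/horizon/tasks/policy_item.yml, which is why the same "Update policy file name" / "Check if policies shall be overwritten" / "Update custom policy file name" triplet repeats below, once per service. A minimal Python rendering of that control flow (illustrative only; the real logic is the Ansible include loop shown in the log):

```python
# Illustrative control flow for the include loop above; the service list
# is taken from the log, the print statements stand in for Ansible tasks.
policy_items = [
    {"name": "cloudkitty", "enabled": False},
    {"name": "ironic", "enabled": False},
    # The log mixes booleans and 'yes' strings; Ansible treats both as truthy.
    {"name": "ceilometer", "enabled": "yes"},
    {"name": "designate", "enabled": True},
    {"name": "nova", "enabled": True},
    # ... remaining services as listed above
]

for item in policy_items:
    if not item["enabled"]:
        print(f"skipping: (item={item})")  # the 'skipping:' lines above
        continue
    # Each enabled service includes policy_item.yml, which runs the
    # update/check/update triplet that repeats through the log below.
    print(f"included: policy_item.yml => (item={item})")
```
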
orchestrator | Thursday 17 April 2025 01:56:57 +0000 (0:00:00.236) 0:00:04.568 ******** 2025-04-17 01:58:23.135642 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.135655 | orchestrator | 2025-04-17 01:58:23.135667 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-17 01:58:23.135679 | orchestrator | Thursday 17 April 2025 01:56:57 +0000 (0:00:00.166) 0:00:04.734 ******** 2025-04-17 01:58:23.135691 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.135703 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:58:23.135715 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:58:23.135733 | orchestrator | 2025-04-17 01:58:23.135745 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-17 01:58:23.135757 | orchestrator | Thursday 17 April 2025 01:56:58 +0000 (0:00:00.268) 0:00:05.003 ******** 2025-04-17 01:58:23.135770 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:58:23.135782 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:58:23.135794 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:58:23.135806 | orchestrator | 2025-04-17 01:58:23.135818 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-17 01:58:23.135831 | orchestrator | Thursday 17 April 2025 01:56:58 +0000 (0:00:00.412) 0:00:05.415 ******** 2025-04-17 01:58:23.135843 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.135855 | orchestrator | 2025-04-17 01:58:23.135867 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-17 01:58:23.135879 | orchestrator | Thursday 17 April 2025 01:56:58 +0000 (0:00:00.107) 0:00:05.523 ******** 2025-04-17 01:58:23.135891 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.135903 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:58:23.135915 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:58:23.135927 | orchestrator | 2025-04-17 01:58:23.135940 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-17 01:58:23.135952 | orchestrator | Thursday 17 April 2025 01:56:58 +0000 (0:00:00.343) 0:00:05.867 ******** 2025-04-17 01:58:23.135964 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:58:23.135976 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:58:23.135988 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:58:23.136000 | orchestrator | 2025-04-17 01:58:23.136012 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-17 01:58:23.136369 | orchestrator | Thursday 17 April 2025 01:56:59 +0000 (0:00:00.462) 0:00:06.329 ******** 2025-04-17 01:58:23.136546 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.136567 | orchestrator | 2025-04-17 01:58:23.136583 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-17 01:58:23.136597 | orchestrator | Thursday 17 April 2025 01:56:59 +0000 (0:00:00.137) 0:00:06.467 ******** 2025-04-17 01:58:23.136611 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.136625 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:58:23.136638 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:58:23.136652 | orchestrator | 2025-04-17 01:58:23.136667 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-17 01:58:23.136681 | orchestrator | Thursday 
17 April 2025 01:56:59 +0000 (0:00:00.447) 0:00:06.915 ******** 2025-04-17 01:58:23.136695 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:58:23.136710 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:58:23.136724 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:58:23.136738 | orchestrator | 2025-04-17 01:58:23.136752 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-17 01:58:23.136765 | orchestrator | Thursday 17 April 2025 01:57:00 +0000 (0:00:00.333) 0:00:07.249 ******** 2025-04-17 01:58:23.136779 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.136792 | orchestrator | 2025-04-17 01:58:23.136806 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-17 01:58:23.136819 | orchestrator | Thursday 17 April 2025 01:57:00 +0000 (0:00:00.337) 0:00:07.587 ******** 2025-04-17 01:58:23.136833 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.136846 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:58:23.136860 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:58:23.136873 | orchestrator | 2025-04-17 01:58:23.136887 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-17 01:58:23.136900 | orchestrator | Thursday 17 April 2025 01:57:00 +0000 (0:00:00.327) 0:00:07.914 ******** 2025-04-17 01:58:23.136914 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:58:23.136928 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:58:23.136941 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:58:23.136955 | orchestrator | 2025-04-17 01:58:23.136996 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-17 01:58:23.137011 | orchestrator | Thursday 17 April 2025 01:57:01 +0000 (0:00:00.706) 0:00:08.620 ******** 2025-04-17 01:58:23.137025 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.137039 | orchestrator | 2025-04-17 01:58:23.137053 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-17 01:58:23.137066 | orchestrator | Thursday 17 April 2025 01:57:01 +0000 (0:00:00.136) 0:00:08.757 ******** 2025-04-17 01:58:23.137080 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.137093 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:58:23.137107 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:58:23.137120 | orchestrator | 2025-04-17 01:58:23.137134 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-17 01:58:23.137148 | orchestrator | Thursday 17 April 2025 01:57:02 +0000 (0:00:00.766) 0:00:09.524 ******** 2025-04-17 01:58:23.137198 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:58:23.137214 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:58:23.137228 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:58:23.137241 | orchestrator | 2025-04-17 01:58:23.137255 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-17 01:58:23.137269 | orchestrator | Thursday 17 April 2025 01:57:03 +0000 (0:00:00.590) 0:00:10.114 ******** 2025-04-17 01:58:23.137283 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.137297 | orchestrator | 2025-04-17 01:58:23.137311 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-17 01:58:23.137325 | orchestrator | Thursday 17 April 2025 01:57:03 +0000 (0:00:00.138) 
0:00:10.253 ******** 2025-04-17 01:58:23.137400 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.137417 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:58:23.137431 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:58:23.137463 | orchestrator | 2025-04-17 01:58:23.137478 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-17 01:58:23.137491 | orchestrator | Thursday 17 April 2025 01:57:03 +0000 (0:00:00.573) 0:00:10.827 ******** 2025-04-17 01:58:23.137505 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:58:23.137518 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:58:23.137532 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:58:23.137545 | orchestrator | 2025-04-17 01:58:23.137559 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-17 01:58:23.137573 | orchestrator | Thursday 17 April 2025 01:57:04 +0000 (0:00:00.436) 0:00:11.264 ******** 2025-04-17 01:58:23.137586 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.137600 | orchestrator | 2025-04-17 01:58:23.137613 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-17 01:58:23.137627 | orchestrator | Thursday 17 April 2025 01:57:04 +0000 (0:00:00.114) 0:00:11.379 ******** 2025-04-17 01:58:23.137640 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.137654 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:58:23.137667 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:58:23.137680 | orchestrator | 2025-04-17 01:58:23.137694 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-17 01:58:23.137707 | orchestrator | Thursday 17 April 2025 01:57:04 +0000 (0:00:00.290) 0:00:11.670 ******** 2025-04-17 01:58:23.137721 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:58:23.137734 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:58:23.137748 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:58:23.137761 | orchestrator | 2025-04-17 01:58:23.137775 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-17 01:58:23.137788 | orchestrator | Thursday 17 April 2025 01:57:05 +0000 (0:00:00.685) 0:00:12.355 ******** 2025-04-17 01:58:23.137802 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.137815 | orchestrator | 2025-04-17 01:58:23.137829 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-17 01:58:23.137842 | orchestrator | Thursday 17 April 2025 01:57:05 +0000 (0:00:00.130) 0:00:12.485 ******** 2025-04-17 01:58:23.137856 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.137869 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:58:23.137883 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:58:23.137896 | orchestrator | 2025-04-17 01:58:23.137910 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-17 01:58:23.137924 | orchestrator | Thursday 17 April 2025 01:57:05 +0000 (0:00:00.358) 0:00:12.844 ******** 2025-04-17 01:58:23.137937 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:58:23.137951 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:58:23.137965 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:58:23.137990 | orchestrator | 2025-04-17 01:58:23.138004 | orchestrator | TASK [horizon : Check if policies shall be overwritten] 
************************ 2025-04-17 01:58:23.138070 | orchestrator | Thursday 17 April 2025 01:57:06 +0000 (0:00:00.329) 0:00:13.174 ******** 2025-04-17 01:58:23.138089 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.138104 | orchestrator | 2025-04-17 01:58:23.138118 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-17 01:58:23.138132 | orchestrator | Thursday 17 April 2025 01:57:06 +0000 (0:00:00.098) 0:00:13.272 ******** 2025-04-17 01:58:23.138146 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.138160 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:58:23.138174 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:58:23.138188 | orchestrator | 2025-04-17 01:58:23.138201 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-17 01:58:23.138215 | orchestrator | Thursday 17 April 2025 01:57:06 +0000 (0:00:00.338) 0:00:13.610 ******** 2025-04-17 01:58:23.138229 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:58:23.138252 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:58:23.138266 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:58:23.138280 | orchestrator | 2025-04-17 01:58:23.138294 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-17 01:58:23.138308 | orchestrator | Thursday 17 April 2025 01:57:07 +0000 (0:00:00.561) 0:00:14.171 ******** 2025-04-17 01:58:23.138321 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.138335 | orchestrator | 2025-04-17 01:58:23.138354 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-17 01:58:23.138368 | orchestrator | Thursday 17 April 2025 01:57:07 +0000 (0:00:00.244) 0:00:14.416 ******** 2025-04-17 01:58:23.138382 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.138396 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:58:23.138409 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:58:23.138423 | orchestrator | 2025-04-17 01:58:23.138437 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-04-17 01:58:23.138468 | orchestrator | Thursday 17 April 2025 01:57:08 +0000 (0:00:00.694) 0:00:15.110 ******** 2025-04-17 01:58:23.138482 | orchestrator | changed: [testbed-node-1] 2025-04-17 01:58:23.138496 | orchestrator | changed: [testbed-node-2] 2025-04-17 01:58:23.138509 | orchestrator | changed: [testbed-node-0] 2025-04-17 01:58:23.138523 | orchestrator | 2025-04-17 01:58:23.138537 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-04-17 01:58:23.138550 | orchestrator | Thursday 17 April 2025 01:57:10 +0000 (0:00:02.562) 0:00:17.673 ******** 2025-04-17 01:58:23.138564 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-04-17 01:58:23.138604 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-04-17 01:58:23.138633 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-04-17 01:58:23.138648 | orchestrator | 2025-04-17 01:58:23.138662 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-04-17 01:58:23.138676 | orchestrator | Thursday 17 April 2025 01:57:13 +0000 (0:00:02.666) 0:00:20.339 ******** 2025-04-17 01:58:23.138690 | orchestrator | 
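
"Copying over config.json files for services" a few entries above refers to kolla's container configuration convention: each service gets a config.json under /etc/kolla/<service>/ on the host, which the container reads from the /var/lib/kolla/config_files/ bind mount visible in the volumes list. The file's exact contents are not shown in this log; as an assumption based on that convention, a typical shape looks roughly like:

```python
# Hypothetical config.json content for horizon, an assumption based on
# kolla's config_files convention, NOT taken from this log.
horizon_config_json = {
    "command": "/usr/sbin/apache2 -DFOREGROUND",  # assumed start command
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/horizon.conf",
            "dest": "/etc/apache2/conf-enabled/horizon.conf",  # assumed destination
            "owner": "horizon",
            "perm": "0600",
        },
    ],
}
```
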
changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-04-17 01:58:23.138706 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-04-17 01:58:23.138720 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-04-17 01:58:23.138733 | orchestrator | 2025-04-17 01:58:23.138747 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-04-17 01:58:23.138761 | orchestrator | Thursday 17 April 2025 01:57:16 +0000 (0:00:02.665) 0:00:23.004 ******** 2025-04-17 01:58:23.138775 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-04-17 01:58:23.138789 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-04-17 01:58:23.138802 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-04-17 01:58:23.138816 | orchestrator | 2025-04-17 01:58:23.138830 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-04-17 01:58:23.138844 | orchestrator | Thursday 17 April 2025 01:57:18 +0000 (0:00:02.203) 0:00:25.208 ******** 2025-04-17 01:58:23.138858 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.138872 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:58:23.138885 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:58:23.138899 | orchestrator | 2025-04-17 01:58:23.138913 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-04-17 01:58:23.138927 | orchestrator | Thursday 17 April 2025 01:57:18 +0000 (0:00:00.276) 0:00:25.485 ******** 2025-04-17 01:58:23.138940 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.138962 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:58:23.138976 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:58:23.138990 | orchestrator | 2025-04-17 01:58:23.139003 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-17 01:58:23.139017 | orchestrator | Thursday 17 April 2025 01:57:18 +0000 (0:00:00.438) 0:00:25.923 ******** 2025-04-17 01:58:23.139031 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:58:23.139045 | orchestrator | 2025-04-17 01:58:23.139059 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-04-17 01:58:23.139072 | orchestrator | Thursday 17 April 2025 01:57:19 +0000 (0:00:00.668) 0:00:26.592 ******** 2025-04-17 01:58:23.139106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-17 01:58:23.139125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-17 01:58:23.139160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-17 01:58:23.139176 | orchestrator | 2025-04-17 01:58:23.139190 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-04-17 01:58:23.139204 | orchestrator | Thursday 17 April 2025 01:57:21 +0000 (0:00:01.812) 0:00:28.405 ******** 2025-04-17 01:58:23.139218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-17 01:58:23.139242 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.139268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}})  2025-04-17 01:58:23.139284 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:58:23.139298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-17 01:58:23.139322 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:58:23.139336 | orchestrator | 2025-04-17 01:58:23.139350 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-04-17 01:58:23.139364 | orchestrator | Thursday 17 April 2025 01:57:22 +0000 (0:00:00.785) 0:00:29.191 ******** 2025-04-17 01:58:23.139388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-17 01:58:23.139412 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.139426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': 
True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-17 01:58:23.139470 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:58:23.139497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-17 01:58:23.139523 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:58:23.139537 | orchestrator | 2025-04-17 01:58:23.139551 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-04-17 01:58:23.139564 | orchestrator | Thursday 17 April 2025 01:57:23 +0000 (0:00:01.114) 0:00:30.305 ******** 2025-04-17 01:58:23.139584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-17 01:58:23.139601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2025-04-17 01:58:23.139633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-17 01:58:23.139649 | orchestrator | 2025-04-17 01:58:23.139663 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-17 01:58:23.139677 | orchestrator | Thursday 17 April 2025 01:57:28 +0000 (0:00:04.649) 0:00:34.955 ******** 2025-04-17 01:58:23.139697 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:58:23.139711 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:58:23.139725 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:58:23.139738 | orchestrator | 2025-04-17 01:58:23.139752 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-17 01:58:23.139766 | orchestrator | Thursday 17 April 2025 01:57:28 +0000 (0:00:00.300) 0:00:35.255 ******** 2025-04-17 01:58:23.139780 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:58:23.139794 | orchestrator | 2025-04-17 01:58:23.139807 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-04-17 01:58:23.139821 | orchestrator | Thursday 17 April 2025 01:57:28 +0000 (0:00:00.468) 0:00:35.724 ******** 2025-04-17 01:58:23.139834 | orchestrator | changed: [testbed-node-0] 2025-04-17 
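
The "Deploy horizon container" task above hands the service definition to kolla-ansible's own container module. Very roughly, and only as an illustration (this is not kolla-ansible's actual implementation), the fields map onto a Docker SDK call like the following sketch:

```python
# Rough mapping of the service definition to a container start; an
# illustrative sketch, not the kolla_container module's real code.
import docker

client = docker.from_env()
client.containers.run(
    image="registry.osism.tech/kolla/release/horizon:24.0.1.20241206",
    name="horizon",
    detach=True,
    environment={"ENABLE_DESIGNATE": "yes", "ENABLE_HEAT": "yes"},  # abbreviated
    volumes=[
        "/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    # Docker expects durations in nanoseconds; the definition's plain
    # '30'/'5' values are seconds, hence the conversion here.
    healthcheck={
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:80"],
        "interval": 30 * 10**9,
        "timeout": 30 * 10**9,
        "retries": 3,
        "start_period": 5 * 10**9,
    },
)
```

Note the ordering that follows in the log: the Horizon database and database user are created and the one-shot bootstrap container runs to completion before the flushed handler finally restarts the long-lived horizon containers on all three nodes.
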
01:58:23.139848 | orchestrator |
2025-04-17 01:58:23.139861 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-04-17 01:58:23.139875 | orchestrator | Thursday 17 April 2025 01:57:31 +0000 (0:00:02.344) 0:00:38.069 ********
2025-04-17 01:58:23.139889 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:58:23.139902 | orchestrator |
2025-04-17 01:58:23.139922 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-04-17 01:58:23.139937 | orchestrator | Thursday 17 April 2025 01:57:33 +0000 (0:00:02.163) 0:00:40.232 ********
2025-04-17 01:58:23.139950 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:58:23.139964 | orchestrator |
2025-04-17 01:58:23.139978 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-04-17 01:58:23.139991 | orchestrator | Thursday 17 April 2025 01:57:46 +0000 (0:00:13.384) 0:00:53.616 ********
2025-04-17 01:58:23.140005 | orchestrator |
2025-04-17 01:58:23.140018 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-04-17 01:58:23.140032 | orchestrator | Thursday 17 April 2025 01:57:46 +0000 (0:00:00.056) 0:00:53.672 ********
2025-04-17 01:58:23.140045 | orchestrator |
2025-04-17 01:58:23.140059 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-04-17 01:58:23.140073 | orchestrator | Thursday 17 April 2025 01:57:46 +0000 (0:00:00.168) 0:00:53.840 ********
2025-04-17 01:58:23.140086 | orchestrator |
2025-04-17 01:58:23.140100 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-04-17 01:58:23.140113 | orchestrator | Thursday 17 April 2025 01:57:46 +0000 (0:00:00.058) 0:00:53.899 ********
2025-04-17 01:58:23.140127 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:58:23.140141 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:58:23.140154 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:58:23.140168 | orchestrator |
2025-04-17 01:58:23.140181 | orchestrator | PLAY RECAP *********************************************************************
2025-04-17 01:58:23.140195 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-04-17 01:58:23.140211 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-04-17 01:58:23.140226 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-04-17 01:58:23.140239 | orchestrator |
2025-04-17 01:58:23.140253 | orchestrator |
2025-04-17 01:58:23.140267 | orchestrator | TASKS RECAP ********************************************************************
2025-04-17 01:58:23.140280 | orchestrator | Thursday 17 April 2025 01:58:21 +0000 (0:00:34.932) 0:01:28.831 ********
2025-04-17 01:58:23.140294 | orchestrator | ===============================================================================
2025-04-17 01:58:23.140307 | orchestrator | horizon : Restart horizon container ------------------------------------ 34.93s
2025-04-17 01:58:23.140321 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 13.38s
2025-04-17 01:58:23.140334 | orchestrator | horizon : Deploy horizon container -------------------------------------- 4.65s
------------------------------------- 2.67s 2025-04-17 01:58:23.140368 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.66s 2025-04-17 01:58:23.140382 | orchestrator | horizon : Copying over config.json files for services ------------------- 2.56s 2025-04-17 01:58:23.140395 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.35s 2025-04-17 01:58:23.140409 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.20s 2025-04-17 01:58:23.140422 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.16s 2025-04-17 01:58:23.140436 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.81s 2025-04-17 01:58:23.140472 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.15s 2025-04-17 01:58:23.140486 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.11s 2025-04-17 01:58:23.140500 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.79s 2025-04-17 01:58:23.140520 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.77s 2025-04-17 01:58:26.185049 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s 2025-04-17 01:58:26.185202 | orchestrator | horizon : Update policy file name --------------------------------------- 0.71s 2025-04-17 01:58:26.185222 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.69s 2025-04-17 01:58:26.185263 | orchestrator | horizon : Update policy file name --------------------------------------- 0.69s 2025-04-17 01:58:26.185278 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.67s 2025-04-17 01:58:26.185292 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.60s 2025-04-17 01:58:26.185308 | orchestrator | 2025-04-17 01:58:23 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state STARTED 2025-04-17 01:58:26.185323 | orchestrator | 2025-04-17 01:58:23 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:58:26.185357 | orchestrator | 2025-04-17 01:58:26 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:58:26.185619 | orchestrator | 2025-04-17 01:58:26 | INFO  | Task c6ea957c-99b1-450e-ad2c-51f8109f4f70 is in state STARTED 2025-04-17 01:58:26.186619 | orchestrator | 2025-04-17 01:58:26 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state STARTED 2025-04-17 01:58:26.186996 | orchestrator | 2025-04-17 01:58:26 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:58:29.226626 | orchestrator | 2025-04-17 01:58:29 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:58:29.227482 | orchestrator | 2025-04-17 01:58:29 | INFO  | Task c6ea957c-99b1-450e-ad2c-51f8109f4f70 is in state STARTED 2025-04-17 01:58:29.228760 | orchestrator | 2025-04-17 01:58:29 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state STARTED 2025-04-17 01:58:32.271516 | orchestrator | 2025-04-17 01:58:29 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:58:32.271666 | orchestrator | 2025-04-17 01:58:32 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:58:35.333100 | orchestrator | 2025-04-17 01:58:32 | INFO  | Task 
c6ea957c-99b1-450e-ad2c-51f8109f4f70 is in state STARTED 2025-04-17 01:58:35.333274 | orchestrator | 2025-04-17 01:58:32 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state STARTED 2025-04-17 01:58:35.333310 | orchestrator | 2025-04-17 01:58:32 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:58:35.333377 | orchestrator | 2025-04-17 01:58:35 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:58:35.333831 | orchestrator | 2025-04-17 01:58:35 | INFO  | Task c6ea957c-99b1-450e-ad2c-51f8109f4f70 is in state STARTED 2025-04-17 01:58:35.333909 | orchestrator | 2025-04-17 01:58:35 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state STARTED 2025-04-17 01:58:38.377252 | orchestrator | 2025-04-17 01:58:35 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:58:38.377488 | orchestrator | 2025-04-17 01:58:38 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:58:38.379726 | orchestrator | 2025-04-17 01:58:38 | INFO  | Task c6ea957c-99b1-450e-ad2c-51f8109f4f70 is in state STARTED 2025-04-17 01:58:38.381044 | orchestrator | 2025-04-17 01:58:38 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state STARTED 2025-04-17 01:58:41.428797 | orchestrator | 2025-04-17 01:58:38 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:58:41.428990 | orchestrator | 2025-04-17 01:58:41 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:58:41.429662 | orchestrator | 2025-04-17 01:58:41 | INFO  | Task c6ea957c-99b1-450e-ad2c-51f8109f4f70 is in state STARTED 2025-04-17 01:58:41.431416 | orchestrator | 2025-04-17 01:58:41 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state STARTED 2025-04-17 01:58:44.490304 | orchestrator | 2025-04-17 01:58:41 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:58:44.490516 | orchestrator | 2025-04-17 01:58:44 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:58:44.493450 | orchestrator | 2025-04-17 01:58:44 | INFO  | Task c6ea957c-99b1-450e-ad2c-51f8109f4f70 is in state STARTED 2025-04-17 01:58:47.532417 | orchestrator | 2025-04-17 01:58:44 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state STARTED 2025-04-17 01:58:47.532658 | orchestrator | 2025-04-17 01:58:44 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:58:47.532700 | orchestrator | 2025-04-17 01:58:47 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:58:47.535052 | orchestrator | 2025-04-17 01:58:47 | INFO  | Task c6ea957c-99b1-450e-ad2c-51f8109f4f70 is in state STARTED 2025-04-17 01:58:47.536873 | orchestrator | 2025-04-17 01:58:47 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state STARTED 2025-04-17 01:58:50.585677 | orchestrator | 2025-04-17 01:58:47 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:58:50.586110 | orchestrator | 2025-04-17 01:58:50 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:58:50.588349 | orchestrator | 2025-04-17 01:58:50.588930 | orchestrator | 2025-04-17 01:58:50 | INFO  | Task c6ea957c-99b1-450e-ad2c-51f8109f4f70 is in state SUCCESS 2025-04-17 01:58:50.588993 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-17 01:58:50.589020 | orchestrator | 2025-04-17 01:58:50.589044 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-04-17 01:58:50.589067 | orchestrator | 
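Editor's note: the interleaved `INFO | Task … is in state STARTED` records above are the OSISM orchestrator polling its deployment tasks once per second until each reports SUCCESS, at which point the buffered output of the next play (here "Create ceph pools") is streamed. A minimal Ansible sketch of that poll-until-done pattern follows; the `osism_task_state.sh` helper, the task UUID reuse, and the retry budget are hypothetical stand-ins for illustration, not the actual OSISM implementation (the real polling happens inside the osism CLI).

```yaml
# Hypothetical sketch of the poll-until-SUCCESS pattern visible in the log.
# "osism_task_state.sh" is an assumed helper that prints the task state;
# only the until/retries/delay mechanics are the point here.
- name: Wait for deployment task to finish
  ansible.builtin.command: >-
    /usr/local/bin/osism_task_state.sh e0b8709f-1bcf-4f73-b727-9acc58049e77
  register: task_state
  until: task_state.stdout == "SUCCESS"
  retries: 300          # assumed budget: give up after ~5 minutes
  delay: 1              # matches "Wait 1 second(s) until the next check"
  changed_when: false
```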
2025-04-17 01:58:50.589091 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-04-17 01:58:50.589114 | orchestrator | Thursday 17 April 2025 01:56:42 +0000 (0:00:01.132) 0:00:01.132 ******** 2025-04-17 01:58:50.589138 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:58:50.589164 | orchestrator | 2025-04-17 01:58:50.589188 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-04-17 01:58:50.589211 | orchestrator | Thursday 17 April 2025 01:56:43 +0000 (0:00:00.489) 0:00:01.622 ******** 2025-04-17 01:58:50.589235 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-04-17 01:58:50.589261 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-04-17 01:58:50.589325 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-04-17 01:58:50.589350 | orchestrator | 2025-04-17 01:58:50.589365 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-04-17 01:58:50.589378 | orchestrator | Thursday 17 April 2025 01:56:43 +0000 (0:00:00.805) 0:00:02.427 ******** 2025-04-17 01:58:50.589392 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:58:50.589407 | orchestrator | 2025-04-17 01:58:50.589494 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-04-17 01:58:50.589512 | orchestrator | Thursday 17 April 2025 01:56:44 +0000 (0:00:00.709) 0:00:03.137 ******** 2025-04-17 01:58:50.589528 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:58:50.589554 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:58:50.589577 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:58:50.589600 | orchestrator | 2025-04-17 01:58:50.589625 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-04-17 01:58:50.589643 | orchestrator | Thursday 17 April 2025 01:56:45 +0000 (0:00:00.613) 0:00:03.751 ******** 2025-04-17 01:58:50.589656 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:58:50.589670 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:58:50.589684 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:58:50.589697 | orchestrator | 2025-04-17 01:58:50.589711 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-04-17 01:58:50.589725 | orchestrator | Thursday 17 April 2025 01:56:45 +0000 (0:00:00.289) 0:00:04.040 ******** 2025-04-17 01:58:50.589745 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:58:50.589776 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:58:50.589803 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:58:50.589826 | orchestrator | 2025-04-17 01:58:50.589860 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-04-17 01:58:50.589884 | orchestrator | Thursday 17 April 2025 01:56:46 +0000 (0:00:00.824) 0:00:04.864 ******** 2025-04-17 01:58:50.589907 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:58:50.589932 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:58:50.589956 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:58:50.589979 | orchestrator | 2025-04-17 01:58:50.590002 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-04-17 01:58:50.590095 | 
orchestrator | Thursday 17 April 2025 01:56:46 +0000 (0:00:00.303) 0:00:05.167 ******** 2025-04-17 01:58:50.590119 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:58:50.590143 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:58:50.590166 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:58:50.590215 | orchestrator | 2025-04-17 01:58:50.590241 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-04-17 01:58:50.590268 | orchestrator | Thursday 17 April 2025 01:56:46 +0000 (0:00:00.337) 0:00:05.505 ******** 2025-04-17 01:58:50.590292 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:58:50.590307 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:58:50.590321 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:58:50.590334 | orchestrator | 2025-04-17 01:58:50.590349 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-04-17 01:58:50.590363 | orchestrator | Thursday 17 April 2025 01:56:47 +0000 (0:00:00.368) 0:00:05.873 ******** 2025-04-17 01:58:50.590377 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.590392 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.590405 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.590467 | orchestrator | 2025-04-17 01:58:50.590486 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-04-17 01:58:50.590499 | orchestrator | Thursday 17 April 2025 01:56:47 +0000 (0:00:00.500) 0:00:06.374 ******** 2025-04-17 01:58:50.590513 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:58:50.590526 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:58:50.590544 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:58:50.590570 | orchestrator | 2025-04-17 01:58:50.590595 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-04-17 01:58:50.590648 | orchestrator | Thursday 17 April 2025 01:56:48 +0000 (0:00:00.283) 0:00:06.658 ******** 2025-04-17 01:58:50.590676 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-17 01:58:50.590699 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-17 01:58:50.590725 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-17 01:58:50.590749 | orchestrator | 2025-04-17 01:58:50.590785 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-04-17 01:58:50.590803 | orchestrator | Thursday 17 April 2025 01:56:48 +0000 (0:00:00.674) 0:00:07.333 ******** 2025-04-17 01:58:50.590816 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:58:50.590830 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:58:50.590844 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:58:50.590857 | orchestrator | 2025-04-17 01:58:50.590870 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-04-17 01:58:50.590884 | orchestrator | Thursday 17 April 2025 01:56:49 +0000 (0:00:00.432) 0:00:07.765 ******** 2025-04-17 01:58:50.590915 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-17 01:58:50.590930 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-17 01:58:50.590944 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 
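Editor's note: the "find a running mon container" task that just reported `changed` per mon node shells out to Docker on each monitor host; the `module_args` echoed further down in this log (`docker ps -q --filter name=ceph-mon-testbed-node-0`, with `failed_when_result: false`) show the exact command. A sketch reconstructed from those records, with the loop source and register name assumed for illustration:

```yaml
# Reconstructed from the module_args echoed in this log: one "docker ps"
# probe per monitor, never failing, so later tasks can branch on stdout.
- name: Find a running mon container
  ansible.builtin.command: docker ps -q --filter "name=ceph-mon-{{ item }}"
  loop: "{{ groups['mons'] }}"   # group name assumed; ceph-ansible uses a variable here
  register: ceph_mon_container
  changed_when: false
  failed_when: false
```

The empty-string results feed the later `set_fact running_mon - container` task, which picks the first container ID it finds (e.g. `2610f60fc191` above).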
2025-04-17 01:58:50.590958 | orchestrator | 2025-04-17 01:58:50.590971 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-04-17 01:58:50.590985 | orchestrator | Thursday 17 April 2025 01:56:51 +0000 (0:00:02.278) 0:00:10.043 ******** 2025-04-17 01:58:50.590999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-17 01:58:50.591013 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-17 01:58:50.591027 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-17 01:58:50.591041 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.591055 | orchestrator | 2025-04-17 01:58:50.591068 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-04-17 01:58:50.591082 | orchestrator | Thursday 17 April 2025 01:56:51 +0000 (0:00:00.474) 0:00:10.518 ******** 2025-04-17 01:58:50.591098 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-17 01:58:50.591115 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-17 01:58:50.591129 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-17 01:58:50.591143 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.591157 | orchestrator | 2025-04-17 01:58:50.591171 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-04-17 01:58:50.591185 | orchestrator | Thursday 17 April 2025 01:56:52 +0000 (0:00:00.701) 0:00:11.220 ******** 2025-04-17 01:58:50.591199 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-17 01:58:50.591215 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-17 01:58:50.591243 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-17 01:58:50.591257 | orchestrator | skipping: 
[testbed-node-3] 2025-04-17 01:58:50.591271 | orchestrator | 2025-04-17 01:58:50.591285 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-04-17 01:58:50.591298 | orchestrator | Thursday 17 April 2025 01:56:52 +0000 (0:00:00.178) 0:00:11.399 ******** 2025-04-17 01:58:50.591315 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '2610f60fc191', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-04-17 01:56:49.953296', 'end': '2025-04-17 01:56:49.984410', 'delta': '0:00:00.031114', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2610f60fc191'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-04-17 01:58:50.591346 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '2f25bd162154', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-04-17 01:56:50.537107', 'end': '2025-04-17 01:56:50.584884', 'delta': '0:00:00.047777', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2f25bd162154'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-04-17 01:58:50.591364 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'b07debf87bfa', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-04-17 01:56:51.142265', 'end': '2025-04-17 01:56:51.181199', 'delta': '0:00:00.038934', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b07debf87bfa'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-04-17 01:58:50.591379 | orchestrator | 2025-04-17 01:58:50.591393 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-04-17 01:58:50.591407 | orchestrator | Thursday 17 April 2025 01:56:53 +0000 (0:00:00.203) 0:00:11.602 ******** 2025-04-17 01:58:50.591476 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:58:50.591494 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:58:50.591507 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:58:50.591521 | orchestrator | 2025-04-17 01:58:50.591535 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-04-17 01:58:50.591549 | orchestrator | Thursday 17 April 2025 01:56:53 +0000 (0:00:00.474) 0:00:12.077 ******** 2025-04-17 01:58:50.591562 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-04-17 01:58:50.591584 | orchestrator | 2025-04-17 01:58:50.591598 | orchestrator | TASK [ceph-facts : 
set_fact current_fsid rc 1] ********************************* 2025-04-17 01:58:50.591612 | orchestrator | Thursday 17 April 2025 01:56:55 +0000 (0:00:02.219) 0:00:14.296 ******** 2025-04-17 01:58:50.591625 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.591639 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.591653 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.591667 | orchestrator | 2025-04-17 01:58:50.591680 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-04-17 01:58:50.591694 | orchestrator | Thursday 17 April 2025 01:56:56 +0000 (0:00:00.417) 0:00:14.714 ******** 2025-04-17 01:58:50.591707 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.591721 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.591734 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.591748 | orchestrator | 2025-04-17 01:58:50.591762 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-17 01:58:50.591775 | orchestrator | Thursday 17 April 2025 01:56:56 +0000 (0:00:00.383) 0:00:15.097 ******** 2025-04-17 01:58:50.591789 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.591802 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.591816 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.591830 | orchestrator | 2025-04-17 01:58:50.591843 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-04-17 01:58:50.591857 | orchestrator | Thursday 17 April 2025 01:56:56 +0000 (0:00:00.262) 0:00:15.360 ******** 2025-04-17 01:58:50.591870 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:58:50.591884 | orchestrator | 2025-04-17 01:58:50.591897 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-04-17 01:58:50.591911 | orchestrator | Thursday 17 April 2025 01:56:56 +0000 (0:00:00.107) 0:00:15.467 ******** 2025-04-17 01:58:50.591924 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.591938 | orchestrator | 2025-04-17 01:58:50.591952 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-17 01:58:50.591965 | orchestrator | Thursday 17 April 2025 01:56:57 +0000 (0:00:00.213) 0:00:15.681 ******** 2025-04-17 01:58:50.591979 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.591993 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.592006 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.592020 | orchestrator | 2025-04-17 01:58:50.592034 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-04-17 01:58:50.592053 | orchestrator | Thursday 17 April 2025 01:56:57 +0000 (0:00:00.379) 0:00:16.061 ******** 2025-04-17 01:58:50.592067 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.592081 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.592094 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.592108 | orchestrator | 2025-04-17 01:58:50.592121 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-04-17 01:58:50.592135 | orchestrator | Thursday 17 April 2025 01:56:57 +0000 (0:00:00.266) 0:00:16.327 ******** 2025-04-17 01:58:50.592148 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.592162 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.592175 | 
orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.592189 | orchestrator | 2025-04-17 01:58:50.592202 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-04-17 01:58:50.592216 | orchestrator | Thursday 17 April 2025 01:56:57 +0000 (0:00:00.260) 0:00:16.587 ******** 2025-04-17 01:58:50.592230 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.592244 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.592264 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.592278 | orchestrator | 2025-04-17 01:58:50.592292 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-04-17 01:58:50.592306 | orchestrator | Thursday 17 April 2025 01:56:58 +0000 (0:00:00.303) 0:00:16.891 ******** 2025-04-17 01:58:50.592327 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.592340 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.592354 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.592367 | orchestrator | 2025-04-17 01:58:50.592381 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-04-17 01:58:50.592394 | orchestrator | Thursday 17 April 2025 01:56:58 +0000 (0:00:00.465) 0:00:17.356 ******** 2025-04-17 01:58:50.592408 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.592446 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.592461 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.592475 | orchestrator | 2025-04-17 01:58:50.592488 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-04-17 01:58:50.592502 | orchestrator | Thursday 17 April 2025 01:56:59 +0000 (0:00:00.268) 0:00:17.625 ******** 2025-04-17 01:58:50.592515 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.592529 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.592542 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.592556 | orchestrator | 2025-04-17 01:58:50.592570 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-04-17 01:58:50.592584 | orchestrator | Thursday 17 April 2025 01:56:59 +0000 (0:00:00.286) 0:00:17.911 ******** 2025-04-17 01:58:50.592599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--567181ad--d304--5248--b248--9710ecf6a56a-osd--block--567181ad--d304--5248--b248--9710ecf6a56a', 'dm-uuid-LVM-bYe2GR47CfdRAuGUgOfMJCJDLRAXMyAJ5b9vnqrLZL2VXm8ZnPhXXCnNOwWB1dXc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.592615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6e7c2b16--a1dd--5b5d--909e--4c9aed3e0c7e-osd--block--6e7c2b16--a1dd--5b5d--909e--4c9aed3e0c7e', 'dm-uuid-LVM-i3z8oLrfZebl406dTMAr1ZlExlhhAWvWdVpDizjp8HwCqxskpQwu46wNtWrRFIVT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 
'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.592630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.592644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.592658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.592672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.592703 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ebc25b0--9278--5fc8--8be4--afb201f0a343-osd--block--7ebc25b0--9278--5fc8--8be4--afb201f0a343', 'dm-uuid-LVM-UzeZzPzorXp8KV3DW3WIidSfgxphPTp03MVWvQM3d7Kpc9aah093ulKgJtTf1OuG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.592727 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.592746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.592761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b69f2859--f86c--57c9--a956--28222694e166-osd--block--b69f2859--f86c--57c9--a956--28222694e166', 'dm-uuid-LVM-ruRqaKQFK07FwWdyfnJTHETcjjDvSVQQYZx1CjjAL9oVE1uSAQer6T9LEEzxFBKW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.592775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.592789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.592803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.592829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6', 'scsi-SQEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6-part15', 'scsi-SQEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd404fa7-0e65-4fe9-9261-a6eb3d3ee9b6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:58:50.592855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--567181ad--d304--5248--b248--9710ecf6a56a-osd--block--567181ad--d304--5248--b248--9710ecf6a56a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2JyHJ1-wMiA-Ed3U-WaLw-D2q0-v5tm-57x2LE', 'scsi-0QEMU_QEMU_HARDDISK_8bcc068e-17b6-4e9f-accd-8ac12579d6f0', 'scsi-SQEMU_QEMU_HARDDISK_8bcc068e-17b6-4e9f-accd-8ac12579d6f0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:58:50.592871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.592885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a9d35e4b--2444--59e0--b6b9--5664c21b8a9c-osd--block--a9d35e4b--2444--59e0--b6b9--5664c21b8a9c', 'dm-uuid-LVM-hN8j80rAYArPOQzJtmZTfMCcfU0wqlndR6bBlKMNPZwYURJvSpmTzjUWOUUOBu34'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.592900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.592928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6e7c2b16--a1dd--5b5d--909e--4c9aed3e0c7e-osd--block--6e7c2b16--a1dd--5b5d--909e--4c9aed3e0c7e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rc3fAq-eYxC-mU36-Y9MG-CY16-CbYv-DNVptp', 'scsi-0QEMU_QEMU_HARDDISK_e9224846-b1ba-4847-a73a-6715887089fb', 'scsi-SQEMU_QEMU_HARDDISK_e9224846-b1ba-4847-a73a-6715887089fb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:58:50.592944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0367f9b0-3a71-47a7-a8bd-9e2816c4d242', 'scsi-SQEMU_QEMU_HARDDISK_0367f9b0-3a71-47a7-a8bd-9e2816c4d242'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:58:50.592959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--af980f31--aa48--52cf--851d--a23b8b791ab9-osd--block--af980f31--aa48--52cf--851d--a23b8b791ab9', 'dm-uuid-LVM-2y1cF3yEf7AALsIhQz3m8JX59uQhbdUdsZdK4rydxBeHaLxZvXVM591c9REdgBjE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.592974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.592988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-17-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:58:50.593002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.593021 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.593043 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}})  2025-04-17 01:58:50.593056 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.593082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.593104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.593118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.593132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.593146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.593160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.593174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.593193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.593215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-17 01:58:50.593246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a', 'scsi-SQEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a-part1', 'scsi-SQEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a-part14', 'scsi-SQEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a-part15', 'scsi-SQEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a-part16', 'scsi-SQEMU_QEMU_HARDDISK_6b208d6f-58a6-4bf8-9c3f-de2cbc4acb7a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:58:50.593263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96', 'scsi-SQEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96-part1', 'scsi-SQEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96-part14', 'scsi-SQEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96-part15', 'scsi-SQEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96-part16', 'scsi-SQEMU_QEMU_HARDDISK_ddbfad14-a2f2-4e55-8a45-21cf9a4b4f96-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:58:50.593291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a9d35e4b--2444--59e0--b6b9--5664c21b8a9c-osd--block--a9d35e4b--2444--59e0--b6b9--5664c21b8a9c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cKWHeS-W6Sl-A21i-85J5-4nrh-yTq8-iUzTQb', 'scsi-0QEMU_QEMU_HARDDISK_c4c813ed-e09b-49ac-b96f-625695efceb2', 'scsi-SQEMU_QEMU_HARDDISK_c4c813ed-e09b-49ac-b96f-625695efceb2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:58:50.593307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7ebc25b0--9278--5fc8--8be4--afb201f0a343-osd--block--7ebc25b0--9278--5fc8--8be4--afb201f0a343'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-h5ki4h-UqrN-BD4C-TJfA-l3w0-eYmg-VdJYdZ', 'scsi-0QEMU_QEMU_HARDDISK_bef8d693-736b-4549-b698-ce9e87082908', 'scsi-SQEMU_QEMU_HARDDISK_bef8d693-736b-4549-b698-ce9e87082908'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:58:50.593322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b69f2859--f86c--57c9--a956--28222694e166-osd--block--b69f2859--f86c--57c9--a956--28222694e166'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MTQK23-nB2k-fmqi-vyOH-BZsi-PrZU-1UUzbJ', 'scsi-0QEMU_QEMU_HARDDISK_c189cae0-1e0d-4eb8-9970-e970e21b9a89', 'scsi-SQEMU_QEMU_HARDDISK_c189cae0-1e0d-4eb8-9970-e970e21b9a89'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:58:50.593337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--af980f31--aa48--52cf--851d--a23b8b791ab9-osd--block--af980f31--aa48--52cf--851d--a23b8b791ab9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yig9HS-7KiD-1N40-fu03-cWxu-V2Qc-WlWhcg', 'scsi-0QEMU_QEMU_HARDDISK_6309ce49-a4ed-4da7-82b1-29aa79f26650', 'scsi-SQEMU_QEMU_HARDDISK_6309ce49-a4ed-4da7-82b1-29aa79f26650'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:58:50.593351 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_95e37f14-95e8-4165-b353-fd53fdf52cdb', 'scsi-SQEMU_QEMU_HARDDISK_95e37f14-95e8-4165-b353-fd53fdf52cdb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:58:50.593377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-17-00-02-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:58:50.593400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42d2e0a2-f124-4e98-b4f2-6b7948e65700', 'scsi-SQEMU_QEMU_HARDDISK_42d2e0a2-f124-4e98-b4f2-6b7948e65700'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:58:50.593415 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.593459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-17-00-02-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-17 01:58:50.593473 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.593487 | orchestrator | 2025-04-17 01:58:50.593501 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-04-17 01:58:50.593515 | orchestrator | Thursday 17 April 2025 01:56:59 +0000 (0:00:00.582) 0:00:18.493 ******** 2025-04-17 01:58:50.593534 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-04-17 01:58:50.593558 | orchestrator | 2025-04-17 01:58:50.593590 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-04-17 01:58:50.593624 | orchestrator | Thursday 17 April 2025 01:57:01 +0000 (0:00:01.482) 0:00:19.975 ******** 2025-04-17 01:58:50.593647 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:58:50.593672 | orchestrator | 2025-04-17 01:58:50.593697 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-04-17 01:58:50.593721 | orchestrator | Thursday 17 April 2025 01:57:01 +0000 (0:00:00.148) 0:00:20.124 ******** 2025-04-17 01:58:50.593743 | orchestrator | 
ok: [testbed-node-3] 2025-04-17 01:58:50.593767 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:58:50.593791 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:58:50.593815 | orchestrator | 2025-04-17 01:58:50.593840 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-04-17 01:58:50.593865 | orchestrator | Thursday 17 April 2025 01:57:01 +0000 (0:00:00.374) 0:00:20.499 ******** 2025-04-17 01:58:50.593889 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:58:50.593913 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:58:50.593937 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:58:50.593977 | orchestrator | 2025-04-17 01:58:50.594003 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-04-17 01:58:50.594064 | orchestrator | Thursday 17 April 2025 01:57:02 +0000 (0:00:00.792) 0:00:21.291 ******** 2025-04-17 01:58:50.594079 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:58:50.594093 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:58:50.594106 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:58:50.594120 | orchestrator | 2025-04-17 01:58:50.594133 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-17 01:58:50.594147 | orchestrator | Thursday 17 April 2025 01:57:03 +0000 (0:00:00.319) 0:00:21.610 ******** 2025-04-17 01:58:50.594160 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:58:50.594174 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:58:50.594187 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:58:50.594201 | orchestrator | 2025-04-17 01:58:50.594214 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-17 01:58:50.594228 | orchestrator | Thursday 17 April 2025 01:57:03 +0000 (0:00:00.906) 0:00:22.517 ******** 2025-04-17 01:58:50.594241 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.594257 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.594270 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.594284 | orchestrator | 2025-04-17 01:58:50.594297 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-17 01:58:50.594311 | orchestrator | Thursday 17 April 2025 01:57:04 +0000 (0:00:00.299) 0:00:22.817 ******** 2025-04-17 01:58:50.594334 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.594358 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.594383 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.594409 | orchestrator | 2025-04-17 01:58:50.594462 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-17 01:58:50.594487 | orchestrator | Thursday 17 April 2025 01:57:04 +0000 (0:00:00.417) 0:00:23.234 ******** 2025-04-17 01:58:50.594512 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.594534 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.594559 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.594582 | orchestrator | 2025-04-17 01:58:50.594605 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-04-17 01:58:50.594641 | orchestrator | Thursday 17 April 2025 01:57:04 +0000 (0:00:00.335) 0:00:23.569 ******** 2025-04-17 01:58:50.594666 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-17 01:58:50.594683 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-1)  2025-04-17 01:58:50.594697 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-17 01:58:50.594711 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-17 01:58:50.594729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-17 01:58:50.594744 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.594758 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-17 01:58:50.594772 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-17 01:58:50.594785 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-17 01:58:50.594799 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.594813 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-17 01:58:50.594827 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.594851 | orchestrator | 2025-04-17 01:58:50.594874 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-04-17 01:58:50.594914 | orchestrator | Thursday 17 April 2025 01:57:06 +0000 (0:00:01.080) 0:00:24.650 ******** 2025-04-17 01:58:50.594940 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-17 01:58:50.594964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-17 01:58:50.594988 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-17 01:58:50.595011 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-17 01:58:50.595050 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-17 01:58:50.595071 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.595086 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-17 01:58:50.595099 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-17 01:58:50.595113 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.595127 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-17 01:58:50.595140 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-17 01:58:50.595154 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.595168 | orchestrator | 2025-04-17 01:58:50.595181 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-04-17 01:58:50.595195 | orchestrator | Thursday 17 April 2025 01:57:06 +0000 (0:00:00.663) 0:00:25.314 ******** 2025-04-17 01:58:50.595209 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-04-17 01:58:50.595223 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-04-17 01:58:50.595237 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-04-17 01:58:50.595251 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-04-17 01:58:50.595264 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-04-17 01:58:50.595278 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-04-17 01:58:50.595292 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-04-17 01:58:50.595306 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-04-17 01:58:50.595320 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-04-17 01:58:50.595334 | orchestrator | 2025-04-17 01:58:50.595348 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 
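The three alternative tasks in this block (monitor_address_block ipv4/ipv6, monitor_address, monitor_interface) are a fallback chain for building _monitor_addresses; in this run monitor_address is set explicitly per host, so only that branch produces results. A hedged sketch of the accumulation pattern, simplified from what the real ceph-facts task does:

    # Sketch: build one {'name', 'addr'} entry per monitor host from
    # hostvars. mon_group_name appears in the log; the when condition
    # here is a simplification of the real branch selection.
    - name: set_fact _monitor_addresses to monitor_address (sketch)
      ansible.builtin.set_fact:
        _monitor_addresses: >-
          {{ _monitor_addresses | default([]) +
             [{'name': item, 'addr': hostvars[item]['monitor_address']}] }}
      loop: "{{ groups[mon_group_name] }}"  # testbed-node-0/1/2 in this run
      when: hostvars[item]['monitor_address'] is defined

The resulting entries ({'name': 'testbed-node-0', 'addr': '192.168.16.10'} and so on) are exactly the items shown in the skipped _current_monitor_address loop below.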
2025-04-17 01:58:50.595362 | orchestrator | Thursday 17 April 2025 01:57:08 +0000 (0:00:01.736) 0:00:27.050 ******** 2025-04-17 01:58:50.595375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-17 01:58:50.595389 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-17 01:58:50.595403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-17 01:58:50.595416 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.595606 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-17 01:58:50.595624 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-17 01:58:50.595638 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-17 01:58:50.595651 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.595665 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-17 01:58:50.595679 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-17 01:58:50.595692 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-17 01:58:50.595706 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.595719 | orchestrator | 2025-04-17 01:58:50.595733 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-04-17 01:58:50.595745 | orchestrator | Thursday 17 April 2025 01:57:09 +0000 (0:00:00.592) 0:00:27.643 ******** 2025-04-17 01:58:50.595755 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-17 01:58:50.595765 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-17 01:58:50.595775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-17 01:58:50.595785 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.595795 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-17 01:58:50.595804 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-17 01:58:50.595814 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-17 01:58:50.595824 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.595834 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-17 01:58:50.595844 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-17 01:58:50.595871 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-17 01:58:50.595881 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.595891 | orchestrator | 2025-04-17 01:58:50.595901 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-04-17 01:58:50.595911 | orchestrator | Thursday 17 April 2025 01:57:09 +0000 (0:00:00.427) 0:00:28.070 ******** 2025-04-17 01:58:50.595921 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-17 01:58:50.595931 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-17 01:58:50.595942 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-17 01:58:50.595952 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-17 01:58:50.595962 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-17 
01:58:50.595972 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-17 01:58:50.595981 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.595992 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.596002 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-17 01:58:50.596022 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-17 01:58:50.596033 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-17 01:58:50.596043 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.596053 | orchestrator | 2025-04-17 01:58:50.596063 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-04-17 01:58:50.596073 | orchestrator | Thursday 17 April 2025 01:57:09 +0000 (0:00:00.329) 0:00:28.399 ******** 2025-04-17 01:58:50.596083 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-17 01:58:50.596093 | orchestrator | 2025-04-17 01:58:50.596103 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-17 01:58:50.596114 | orchestrator | Thursday 17 April 2025 01:57:10 +0000 (0:00:00.580) 0:00:28.979 ******** 2025-04-17 01:58:50.596124 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.596134 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.596144 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.596154 | orchestrator | 2025-04-17 01:58:50.596164 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-17 01:58:50.596174 | orchestrator | Thursday 17 April 2025 01:57:10 +0000 (0:00:00.271) 0:00:29.251 ******** 2025-04-17 01:58:50.596184 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.596194 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.596204 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.596213 | orchestrator | 2025-04-17 01:58:50.596223 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-17 01:58:50.596233 | orchestrator | Thursday 17 April 2025 01:57:10 +0000 (0:00:00.262) 0:00:29.513 ******** 2025-04-17 01:58:50.596243 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.596253 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.596263 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.596273 | orchestrator | 2025-04-17 01:58:50.596283 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-17 01:58:50.596293 | orchestrator | Thursday 17 April 2025 01:57:11 +0000 (0:00:00.373) 0:00:29.887 ******** 2025-04-17 01:58:50.596302 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:58:50.596312 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:58:50.596323 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:58:50.596345 | orchestrator | 2025-04-17 01:58:50.596355 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-17 01:58:50.596365 | orchestrator | Thursday 17 April 2025 01:57:11 +0000 (0:00:00.576) 0:00:30.463 ******** 2025-04-17 01:58:50.596375 | orchestrator | 
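set_radosgw_address.yml walks the same kind of precedence chain for the RGW bind address: radosgw_address_block first, then radosgw_address, then radosgw_interface. Here both address-block branches skip and the explicit radosgw_address wins. A condensed sketch of the winning branch; the real role spreads this over several tasks and the default sentinels shown are assumptions:

    # Sketch of the middle branch of the precedence chain. The 'subnet'
    # and 'x.x.x.x' defaults are illustrative placeholders for "unset".
    - name: set_fact _radosgw_address to radosgw_address (sketch)
      ansible.builtin.set_fact:
        _radosgw_address: "{{ radosgw_address }}"
      when:
        - radosgw_address_block | default('subnet') == 'subnet'
        - radosgw_address | default('x.x.x.x') != 'x.x.x.x'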
skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-17 01:58:50.596385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-17 01:58:50.596395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-17 01:58:50.596404 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.596414 | orchestrator | 2025-04-17 01:58:50.596449 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-17 01:58:50.596460 | orchestrator | Thursday 17 April 2025 01:57:12 +0000 (0:00:00.374) 0:00:30.837 ******** 2025-04-17 01:58:50.596470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-17 01:58:50.596480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-17 01:58:50.596494 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-17 01:58:50.596504 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.596514 | orchestrator | 2025-04-17 01:58:50.596524 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-17 01:58:50.596534 | orchestrator | Thursday 17 April 2025 01:57:12 +0000 (0:00:00.349) 0:00:31.187 ******** 2025-04-17 01:58:50.596544 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-17 01:58:50.596554 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-17 01:58:50.596564 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-17 01:58:50.596574 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.596584 | orchestrator | 2025-04-17 01:58:50.596594 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-17 01:58:50.596605 | orchestrator | Thursday 17 April 2025 01:57:12 +0000 (0:00:00.331) 0:00:31.518 ******** 2025-04-17 01:58:50.596615 | orchestrator | ok: [testbed-node-3] 2025-04-17 01:58:50.596624 | orchestrator | ok: [testbed-node-4] 2025-04-17 01:58:50.596634 | orchestrator | ok: [testbed-node-5] 2025-04-17 01:58:50.596644 | orchestrator | 2025-04-17 01:58:50.596654 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-17 01:58:50.596664 | orchestrator | Thursday 17 April 2025 01:57:13 +0000 (0:00:00.274) 0:00:31.793 ******** 2025-04-17 01:58:50.596674 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-17 01:58:50.596873 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-04-17 01:58:50.596885 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-04-17 01:58:50.596895 | orchestrator | 2025-04-17 01:58:50.596905 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-17 01:58:50.596922 | orchestrator | Thursday 17 April 2025 01:57:13 +0000 (0:00:00.788) 0:00:32.581 ******** 2025-04-17 01:58:50.596933 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.596943 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.596953 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.596962 | orchestrator | 2025-04-17 01:58:50.596972 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-17 01:58:50.596982 | orchestrator | Thursday 17 April 2025 01:57:14 +0000 (0:00:00.420) 0:00:33.002 ******** 2025-04-17 01:58:50.596992 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.597002 | orchestrator | skipping: [testbed-node-4] 2025-04-17 
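With RGW multisite disabled, rgw_instances is built per host from a plain index loop (hence the item=0 above), pairing the host's resolved _radosgw_address with a frontend port. A hedged reconstruction of that set_fact; the variable name radosgw_num_instances is an assumption, but the resulting item shape matches the rgw_instances_host entries visible just below:

    # Sketch: one instance dict per index. With a single instance this
    # yields [{'instance_name': 'rgw0', 'radosgw_address': ...,
    # 'radosgw_frontend_port': 8081}], as logged for nodes 3/4/5.
    - name: set_fact rgw_instances without rgw multisite (sketch)
      ansible.builtin.set_fact:
        rgw_instances: >-
          {{ rgw_instances | default([]) +
             [{'instance_name': 'rgw' ~ item,
               'radosgw_address': _radosgw_address,
               'radosgw_frontend_port': radosgw_frontend_port | int + item}] }}
      loop: "{{ range(0, radosgw_num_instances | default(1) | int) | list }}"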
01:58:50.597012 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.597022 | orchestrator | 2025-04-17 01:58:50.597032 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-17 01:58:50.597049 | orchestrator | Thursday 17 April 2025 01:57:14 +0000 (0:00:00.270) 0:00:33.273 ******** 2025-04-17 01:58:50.597059 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-17 01:58:50.597069 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.597079 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-17 01:58:50.597089 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.597106 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-17 01:58:50.597116 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.597126 | orchestrator | 2025-04-17 01:58:50.597136 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-17 01:58:50.597146 | orchestrator | Thursday 17 April 2025 01:57:15 +0000 (0:00:00.411) 0:00:33.685 ******** 2025-04-17 01:58:50.597156 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-17 01:58:50.597166 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.597177 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-17 01:58:50.597186 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.597196 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-17 01:58:50.597206 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.597217 | orchestrator | 2025-04-17 01:58:50.597227 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-17 01:58:50.597237 | orchestrator | Thursday 17 April 2025 01:57:15 +0000 (0:00:00.263) 0:00:33.948 ******** 2025-04-17 01:58:50.597247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-17 01:58:50.597257 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-17 01:58:50.597267 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-17 01:58:50.597276 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-17 01:58:50.597286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-17 01:58:50.597296 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.597306 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-17 01:58:50.597317 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-17 01:58:50.597326 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.597336 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-17 01:58:50.597346 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-17 01:58:50.597356 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.597366 | orchestrator | 2025-04-17 01:58:50.597376 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-04-17 01:58:50.597386 | orchestrator | Thursday 17 April 2025 01:57:16 +0000 (0:00:00.935) 0:00:34.884 ******** 2025-04-17 01:58:50.597396 | orchestrator | skipping: 
[testbed-node-3] 2025-04-17 01:58:50.597406 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.597416 | orchestrator | skipping: [testbed-node-5] 2025-04-17 01:58:50.597445 | orchestrator | 2025-04-17 01:58:50.597456 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-04-17 01:58:50.597466 | orchestrator | Thursday 17 April 2025 01:57:16 +0000 (0:00:00.330) 0:00:35.215 ******** 2025-04-17 01:58:50.597476 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-17 01:58:50.597486 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-17 01:58:50.597496 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-17 01:58:50.597506 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-04-17 01:58:50.597516 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-17 01:58:50.597526 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-17 01:58:50.597536 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-17 01:58:50.597546 | orchestrator | 2025-04-17 01:58:50.597556 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-04-17 01:58:50.597572 | orchestrator | Thursday 17 April 2025 01:57:17 +0000 (0:00:01.060) 0:00:36.275 ******** 2025-04-17 01:58:50.597582 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-17 01:58:50.597592 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-17 01:58:50.597602 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-17 01:58:50.597612 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-04-17 01:58:50.597622 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-17 01:58:50.597632 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-17 01:58:50.597642 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-17 01:58:50.597652 | orchestrator | 2025-04-17 01:58:50.597662 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-04-17 01:58:50.597672 | orchestrator | Thursday 17 April 2025 01:57:19 +0000 (0:00:01.798) 0:00:38.074 ******** 2025-04-17 01:58:50.597682 | orchestrator | skipping: [testbed-node-3] 2025-04-17 01:58:50.597692 | orchestrator | skipping: [testbed-node-4] 2025-04-17 01:58:50.597702 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-04-17 01:58:50.597711 | orchestrator | 2025-04-17 01:58:50.597722 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-04-17 01:58:50.597736 | orchestrator | Thursday 17 April 2025 01:57:19 +0000 (0:00:00.523) 0:00:38.598 ******** 2025-04-17 01:58:50.597748 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-17 
01:58:50.597760 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-17 01:58:50.597770 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-17 01:58:50.597781 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-17 01:58:50.597791 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-17 01:58:50.597801 | orchestrator | 2025-04-17 01:58:50.597811 | orchestrator | TASK [generate keys] *********************************************************** 2025-04-17 01:58:50.597825 | orchestrator | Thursday 17 April 2025 01:58:02 +0000 (0:00:42.043) 0:01:20.641 ******** 2025-04-17 01:58:50.597836 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-17 01:58:50.597846 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-17 01:58:50.597856 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-17 01:58:50.597866 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-17 01:58:50.597876 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-17 01:58:50.597892 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-17 01:58:50.597902 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-04-17 01:58:50.597912 | orchestrator | 2025-04-17 01:58:50.597922 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-04-17 01:58:50.597932 | orchestrator | Thursday 17 April 2025 01:58:22 +0000 (0:00:20.040) 0:01:40.681 ******** 2025-04-17 01:58:50.597942 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-17 01:58:50.597952 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-17 01:58:50.597962 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-17 01:58:50.597972 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-17 01:58:50.597982 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-17 01:58:50.597992 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-17 01:58:50.598001 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-17 01:58:50.598011 | orchestrator | 2025-04-17 01:58:50.598051 | 
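The pool items above carry the full logged specification for each OpenStack pool (pg_num 32, pgp_num 32, size 3, replicated_rule, application rbd, autoscaler off), while the two key tasks mask their loop items as item=None. In plain ceph CLI terms the work amounts to roughly the following; this is an illustrative sketch, not the actual play (which drives ceph-ansible modules), and the client name and caps are invented since the logged items are masked:

    # Sketch only: what "create openstack pool(s)" and "generate keys"
    # boil down to, delegated to the first monitor as in the log.
    - name: Create one of the OpenStack pools (backups/volumes/images/metrics/vms)
      ansible.builtin.command: ceph osd pool create volumes 32 32 replicated replicated_rule
      delegate_to: "{{ groups[mon_group_name][0] }}"

    - name: Tag the pool with its application ('application': 'rbd' in the spec)
      ansible.builtin.command: ceph osd pool application enable volumes rbd
      delegate_to: "{{ groups[mon_group_name][0] }}"

    - name: Generate a client keyring (client name and caps are illustrative)
      ansible.builtin.command: >-
        ceph auth get-or-create client.example
        mon 'profile rbd' osd 'profile rbd pool=volumes'
      delegate_to: "{{ groups[mon_group_name][0] }}"

The "copy ceph key(s) if needed" task that follows then fans the fetched keyrings out to testbed-node-0/1/2, which is why each keyring reports three changed results below.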
orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-04-17 01:58:50.598061 | orchestrator | Thursday 17 April 2025 01:58:32 +0000 (0:00:10.140) 0:01:50.822 ********
2025-04-17 01:58:50.598071 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-17 01:58:50.598081 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-04-17 01:58:50.598091 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-04-17 01:58:50.598101 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-17 01:58:50.598112 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-04-17 01:58:50.598122 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-04-17 01:58:50.598131 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-17 01:58:50.598141 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-04-17 01:58:50.598151 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-04-17 01:58:50.598161 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-17 01:58:50.598171 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-04-17 01:58:50.598186 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-04-17 01:58:53.631097 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-17 01:58:53.632049 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-04-17 01:58:53.632092 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-04-17 01:58:53.632135 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-04-17 01:58:53.632150 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-04-17 01:58:53.632165 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-04-17 01:58:53.632181 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-04-17 01:58:53.632196 | orchestrator |
2025-04-17 01:58:53.632210 | orchestrator | PLAY RECAP *********************************************************************
2025-04-17 01:58:53.632228 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0  failed=0  skipped=37  rescued=0  ignored=0
2025-04-17 01:58:53.632244 | orchestrator | testbed-node-4 : ok=20  changed=0  unreachable=0  failed=0  skipped=30  rescued=0  ignored=0
2025-04-17 01:58:53.632286 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0  failed=0  skipped=29  rescued=0  ignored=0
2025-04-17 01:58:53.632301 | orchestrator |
2025-04-17 01:58:53.632315 | orchestrator |
2025-04-17 01:58:53.632329 | orchestrator |
2025-04-17 01:58:53.632343 | orchestrator | TASKS RECAP ********************************************************************
2025-04-17 01:58:53.632357 | orchestrator | Thursday 17 April 2025 01:58:49 +0000 (0:00:17.577) 0:02:08.399 ********
2025-04-17 01:58:53.632371 | orchestrator | ===============================================================================
2025-04-17 01:58:53.632384 | orchestrator | create openstack pool(s) ----------------------------------------------- 42.04s
2025-04-17 01:58:53.632398 | orchestrator | generate keys ---------------------------------------------------------- 20.04s
2025-04-17 01:58:53.632412 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.58s
2025-04-17 01:58:53.632482 | orchestrator | get keys from monitors ------------------------------------------------- 10.14s
2025-04-17 01:58:53.632506 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.28s
2025-04-17 01:58:53.632527 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 2.22s
2025-04-17 01:58:53.632542 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.80s
2025-04-17 01:58:53.632556 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.74s
2025-04-17 01:58:53.632570 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.48s
2025-04-17 01:58:53.632683 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.08s
2025-04-17 01:58:53.632701 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.06s
2025-04-17 01:58:53.632715 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 0.94s
2025-04-17 01:58:53.632729 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.91s
2025-04-17 01:58:53.632743 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.82s
2025-04-17 01:58:53.632757 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.81s
2025-04-17 01:58:53.632771 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.79s
2025-04-17 01:58:53.632784 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 0.79s
2025-04-17 01:58:53.632798 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.71s
2025-04-17 01:58:53.632813 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.70s
2025-04-17 01:58:53.632826 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.68s
2025-04-17 01:58:53.632840 | orchestrator | 2025-04-17 01:58:50 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state STARTED
2025-04-17 01:58:53.632856 | orchestrator | 2025-04-17 01:58:50 | INFO  | Wait 1 second(s) until the next check
2025-04-17 01:58:53.632892 | orchestrator | 2025-04-17 01:58:53 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:58:53.633652 | orchestrator | 2025-04-17 01:58:53 | INFO  | Task 9bc82805-0d90-48ee-9d76-337cbc6de36d is in state STARTED
2025-04-17 01:58:53.633681 | orchestrator | 2025-04-17 01:58:53 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state STARTED
2025-04-17 01:58:53.633701 | orchestrator | 2025-04-17 01:58:53 | INFO  | Wait 1 second(s) until the next check
2025-04-17 01:58:56.681476 | orchestrator | 2025-04-17 01:58:56 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:58:56.682187 | orchestrator | 2025-04-17 01:58:56 | INFO  | Task 9bc82805-0d90-48ee-9d76-337cbc6de36d is in state STARTED
2025-04-17 01:58:56.683270 | orchestrator | 2025-04-17 01:58:56 | INFO  | Task
5620648b-2647-4a74-8756-690d83115c6d is in state STARTED 2025-04-17 01:58:59.745220 | orchestrator | 2025-04-17 01:58:56 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:58:59.745377 | orchestrator | 2025-04-17 01:58:59 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:58:59.747027 | orchestrator | 2025-04-17 01:58:59 | INFO  | Task 9bc82805-0d90-48ee-9d76-337cbc6de36d is in state STARTED 2025-04-17 01:58:59.750357 | orchestrator | 2025-04-17 01:58:59 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state STARTED 2025-04-17 01:58:59.752203 | orchestrator | 2025-04-17 01:58:59 | INFO  | Task 4ab72e63-03fc-49b3-b222-8b258eb1c9bb is in state STARTED 2025-04-17 01:59:02.802487 | orchestrator | 2025-04-17 01:58:59 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:59:02.802707 | orchestrator | 2025-04-17 01:59:02 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:59:02.804088 | orchestrator | 2025-04-17 01:59:02 | INFO  | Task 9bc82805-0d90-48ee-9d76-337cbc6de36d is in state STARTED 2025-04-17 01:59:02.805775 | orchestrator | 2025-04-17 01:59:02 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state STARTED 2025-04-17 01:59:02.805832 | orchestrator | 2025-04-17 01:59:02 | INFO  | Task 4ab72e63-03fc-49b3-b222-8b258eb1c9bb is in state STARTED 2025-04-17 01:59:05.856475 | orchestrator | 2025-04-17 01:59:02 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:59:05.856609 | orchestrator | 2025-04-17 01:59:05 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:59:05.857937 | orchestrator | 2025-04-17 01:59:05 | INFO  | Task 9bc82805-0d90-48ee-9d76-337cbc6de36d is in state STARTED 2025-04-17 01:59:05.859257 | orchestrator | 2025-04-17 01:59:05 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state STARTED 2025-04-17 01:59:05.861454 | orchestrator | 2025-04-17 01:59:05 | INFO  | Task 4ab72e63-03fc-49b3-b222-8b258eb1c9bb is in state STARTED 2025-04-17 01:59:08.908324 | orchestrator | 2025-04-17 01:59:05 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:59:08.908443 | orchestrator | 2025-04-17 01:59:08 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:59:08.911996 | orchestrator | 2025-04-17 01:59:08 | INFO  | Task 9bc82805-0d90-48ee-9d76-337cbc6de36d is in state STARTED 2025-04-17 01:59:08.914863 | orchestrator | 2025-04-17 01:59:08 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state STARTED 2025-04-17 01:59:08.915802 | orchestrator | 2025-04-17 01:59:08 | INFO  | Task 4ab72e63-03fc-49b3-b222-8b258eb1c9bb is in state STARTED 2025-04-17 01:59:11.959127 | orchestrator | 2025-04-17 01:59:08 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:59:11.959277 | orchestrator | 2025-04-17 01:59:11 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:59:11.959823 | orchestrator | 2025-04-17 01:59:11 | INFO  | Task 9bc82805-0d90-48ee-9d76-337cbc6de36d is in state STARTED 2025-04-17 01:59:11.960694 | orchestrator | 2025-04-17 01:59:11 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state STARTED 2025-04-17 01:59:11.961884 | orchestrator | 2025-04-17 01:59:11 | INFO  | Task 4ab72e63-03fc-49b3-b222-8b258eb1c9bb is in state STARTED 2025-04-17 01:59:15.005614 | orchestrator | 2025-04-17 01:59:11 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:59:15.005758 | orchestrator | 2025-04-17 01:59:15 | INFO  | Task 
e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:59:15.007250 | orchestrator | 2025-04-17 01:59:15 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED 2025-04-17 01:59:15.007287 | orchestrator | 2025-04-17 01:59:15 | INFO  | Task 9bc82805-0d90-48ee-9d76-337cbc6de36d is in state STARTED 2025-04-17 01:59:15.009017 | orchestrator | 2025-04-17 01:59:15 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED 2025-04-17 01:59:15.009050 | orchestrator | 2025-04-17 01:59:15 | INFO  | Task 5620648b-2647-4a74-8756-690d83115c6d is in state SUCCESS 2025-04-17 01:59:15.013121 | orchestrator | 2025-04-17 01:59:15.013178 | orchestrator | 2025-04-17 01:59:15.013194 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-17 01:59:15.013208 | orchestrator | 2025-04-17 01:59:15.013615 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-17 01:59:15.013633 | orchestrator | Thursday 17 April 2025 01:56:53 +0000 (0:00:00.316) 0:00:00.316 ******** 2025-04-17 01:59:15.013648 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:59:15.013667 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:59:15.013682 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:59:15.013698 | orchestrator | 2025-04-17 01:59:15.013713 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-17 01:59:15.013728 | orchestrator | Thursday 17 April 2025 01:56:53 +0000 (0:00:00.290) 0:00:00.607 ******** 2025-04-17 01:59:15.013743 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-04-17 01:59:15.013777 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-04-17 01:59:15.013792 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-04-17 01:59:15.013807 | orchestrator | 2025-04-17 01:59:15.013822 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-04-17 01:59:15.013837 | orchestrator | 2025-04-17 01:59:15.013852 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-17 01:59:15.013866 | orchestrator | Thursday 17 April 2025 01:56:53 +0000 (0:00:00.276) 0:00:00.884 ******** 2025-04-17 01:59:15.013881 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:59:15.013897 | orchestrator | 2025-04-17 01:59:15.013912 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-04-17 01:59:15.013927 | orchestrator | Thursday 17 April 2025 01:56:54 +0000 (0:00:00.598) 0:00:01.482 ******** 2025-04-17 01:59:15.013947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-17 01:59:15.013968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-17 01:59:15.014114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-17 01:59:15.014136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-17 01:59:15.014153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-17 01:59:15.014168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-17 01:59:15.014183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-17 01:59:15.014199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-17 01:59:15.014226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-17 01:59:15.014242 | orchestrator | 2025-04-17 01:59:15.014259 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-04-17 01:59:15.014282 | orchestrator | Thursday 17 April 2025 01:56:56 +0000 (0:00:01.681) 0:00:03.164 ******** 2025-04-17 01:59:15.014299 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-04-17 01:59:15.014347 | orchestrator | 2025-04-17 01:59:15.014366 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-04-17 01:59:15.014382 | orchestrator | Thursday 17 April 2025 01:56:56 +0000 (0:00:00.586) 0:00:03.751 ******** 2025-04-17 
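Each loop item printed inline by these keystone tasks is one kolla-ansible service definition. Reformatted as YAML, the keystone entry from the log looks like this (a reconstruction of the logged structure, trimmed; values are taken verbatim from the items above):

    keystone:
      container_name: keystone
      group: keystone
      enabled: true
      image: registry.osism.tech/kolla/release/keystone:25.0.1.20241206
      volumes:
        - /etc/kolla/keystone/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - kolla_logs:/var/log/kolla/
        - keystone_fernet_tokens:/etc/keystone/fernet-keys
      healthcheck:
        interval: '30'
        retries: '3'
        start_period: '5'
        test: ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000']
        timeout: '30'
      haproxy:
        keystone_internal:
          enabled: true
          mode: http
          external: false
          tls_backend: 'no'
          port: '5000'
          listen_port: '5000'
        keystone_external:
          enabled: true
          mode: http
          external: true
          external_fqdn: api.testbed.osism.xyz
          tls_backend: 'no'
          port: '5000'
          listen_port: '5000'

Because tls_backend is 'no' in both haproxy entries, the two "Copying over backend internal TLS certificate/key" tasks further down skip every item.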
01:59:15.014424 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:59:15.014441 | orchestrator | ok: [testbed-node-1] 2025-04-17 01:59:15.014540 | orchestrator | ok: [testbed-node-2] 2025-04-17 01:59:15.014557 | orchestrator | 2025-04-17 01:59:15.014571 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-04-17 01:59:15.014585 | orchestrator | Thursday 17 April 2025 01:56:57 +0000 (0:00:00.313) 0:00:04.064 ******** 2025-04-17 01:59:15.014712 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-17 01:59:15.014731 | orchestrator | 2025-04-17 01:59:15.014745 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-17 01:59:15.014759 | orchestrator | Thursday 17 April 2025 01:56:57 +0000 (0:00:00.346) 0:00:04.411 ******** 2025-04-17 01:59:15.014773 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-17 01:59:15.014786 | orchestrator | 2025-04-17 01:59:15.014800 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-04-17 01:59:15.014814 | orchestrator | Thursday 17 April 2025 01:56:57 +0000 (0:00:00.577) 0:00:04.988 ******** 2025-04-17 01:59:15.014829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-17 01:59:15.014856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-17 01:59:15.014880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-17 01:59:15.014897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-17 01:59:15.014912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-17 01:59:15.014928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-17 01:59:15.014943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-17 
01:59:15.014965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-17 01:59:15.014980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-17 01:59:15.014994 | orchestrator | 2025-04-17 01:59:15.015008 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-04-17 01:59:15.015023 | orchestrator | Thursday 17 April 2025 01:57:00 +0000 (0:00:02.994) 0:00:07.982 ******** 2025-04-17 01:59:15.015046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-17 01:59:15.015062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-17 01:59:15.015077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-17 01:59:15.015099 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:59:15.015114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-17 01:59:15.015128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-17 01:59:15.015152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-17 01:59:15.015167 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:59:15.015182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-17 01:59:15.015197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-17 01:59:15.015288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-17 01:59:15.015308 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:59:15.015322 | orchestrator | 2025-04-17 01:59:15.015336 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-04-17 01:59:15.015350 | orchestrator | Thursday 17 April 2025 01:57:01 +0000 (0:00:00.929) 0:00:08.912 ******** 2025-04-17 01:59:15.015364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-17 01:59:15.015387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-17 01:59:15.015473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-17 01:59:15.015490 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:59:15.015505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-17 01:59:15.015530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-17 01:59:15.015545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-17 01:59:15.015559 | orchestrator | 
skipping: [testbed-node-1]
2025-04-17 01:59:15.015582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-04-17 01:59:15.015598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-04-17 01:59:15.015621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-04-17 01:59:15.015635 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:59:15.015649 | orchestrator |
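Both service-cert-copy tasks above are skipped on every node because each keystone service entry carries 'tls_backend': 'no', so there is no backend TLS material to distribute. A minimal sketch of the globals.yml switches that would exercise this path instead, assuming stock kolla-ansible variable names; the certificate layout itself is deployment-specific and not visible in this job:

    # globals.yml -- illustrative only, not taken from this run
    kolla_enable_tls_backend: "yes"
    kolla_copy_ca_into_containers: "yes"

With these set, the service dicts above would render with 'tls_backend': 'yes' and the copy tasks would place a backend certificate and key into each container's config volume.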
2025-04-17 01:59:15.015663 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-04-17 01:59:15.015677 | orchestrator | Thursday 17 April 2025 01:57:03 +0000 (0:00:01.365) 0:00:10.277 ********
2025-04-17 01:59:15.015692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-04-17 01:59:15.015707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-04-17 01:59:15.015730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-04-17 01:59:15.015753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-04-17 01:59:15.015768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-17 01:59:15.015783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-17 01:59:15.015797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-17 01:59:15.015812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-17 01:59:15.015833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-17 01:59:15.015847 | orchestrator | 2025-04-17 01:59:15.015862 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-04-17 01:59:15.015927 | orchestrator | Thursday 17 April 2025 01:57:06 +0000 (0:00:03.274) 0:00:13.552 ******** 2025-04-17 01:59:15.015946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-17 01:59:15.015964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-17 01:59:15.015982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-17 01:59:15.015998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-17 01:59:15.016020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-17 01:59:15.016046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-17 01:59:15.016155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-17 01:59:15.016169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-17 01:59:15.016182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-17 01:59:15.016194 | orchestrator | 2025-04-17 01:59:15.016207 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-04-17 01:59:15.016220 | orchestrator | Thursday 17 April 2025 01:57:13 +0000 (0:00:06.590) 0:00:20.142 ******** 2025-04-17 01:59:15.016232 | orchestrator | 
changed: [testbed-node-1]
2025-04-17 01:59:15.016244 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:59:15.016257 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:59:15.016269 | orchestrator |
2025-04-17 01:59:15.016281 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-04-17 01:59:15.016294 | orchestrator | Thursday 17 April 2025 01:57:15 +0000 (0:00:02.284) 0:00:22.427 ********
2025-04-17 01:59:15.016306 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:15.016318 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:59:15.016330 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:59:15.016350 | orchestrator |
2025-04-17 01:59:15.016369 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-04-17 01:59:15.016382 | orchestrator | Thursday 17 April 2025 01:57:16 +0000 (0:00:00.899) 0:00:23.326 ********
2025-04-17 01:59:15.016395 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:15.016425 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:59:15.016438 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:59:15.016451 | orchestrator |
2025-04-17 01:59:15.016463 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-04-17 01:59:15.016476 | orchestrator | Thursday 17 April 2025 01:57:16 +0000 (0:00:00.640) 0:00:23.967 ********
2025-04-17 01:59:15.016488 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:15.016518 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:59:15.016531 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:59:15.016544 | orchestrator |
2025-04-17 01:59:15.016562 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-04-17 01:59:15.016575 | orchestrator | Thursday 17 April 2025 01:57:17 +0000 (0:00:00.504) 0:00:24.471 ********
2025-04-17 01:59:15.016589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-04-17 01:59:15.016603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-17 01:59:15.016617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-17 01:59:15.016630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-17 01:59:15.016658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-17 01:59:15.016673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': 
'30'}}})  2025-04-17 01:59:15.016686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-17 01:59:15.016699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-17 01:59:15.016712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-17 01:59:15.016731 | orchestrator | 2025-04-17 01:59:15.016743 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-17 01:59:15.016756 | orchestrator | Thursday 17 April 2025 01:57:19 +0000 (0:00:02.525) 0:00:26.996 ******** 2025-04-17 01:59:15.016769 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:59:15.016783 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:59:15.016797 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:59:15.016811 | orchestrator | 2025-04-17 01:59:15.016825 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-04-17 01:59:15.016839 | orchestrator | Thursday 17 April 2025 01:57:20 +0000 (0:00:00.456) 0:00:27.453 ******** 2025-04-17 01:59:15.016852 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-04-17 01:59:15.016868 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-04-17 01:59:15.016886 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-04-17 01:59:15.016901 | orchestrator | 2025-04-17 01:59:15.016915 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-04-17 01:59:15.016929 | orchestrator | Thursday 17 April 2025 01:57:22 +0000 (0:00:02.004) 0:00:29.458 ******** 2025-04-17 01:59:15.016943 | orchestrator | ok: [testbed-node-0 -> 
localhost]
2025-04-17 01:59:15.016956 | orchestrator |
2025-04-17 01:59:15.016970 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-04-17 01:59:15.016983 | orchestrator | Thursday 17 April 2025 01:57:22 +0000 (0:00:00.566) 0:00:30.024 ********
2025-04-17 01:59:15.016997 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:15.017011 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:59:15.017025 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:59:15.017039 | orchestrator |
2025-04-17 01:59:15.017054 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-04-17 01:59:15.017068 | orchestrator | Thursday 17 April 2025 01:57:23 +0000 (0:00:00.870) 0:00:30.894 ********
2025-04-17 01:59:15.017082 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-04-17 01:59:15.017096 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-04-17 01:59:15.017109 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-04-17 01:59:15.017123 | orchestrator |
2025-04-17 01:59:15.017137 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-04-17 01:59:15.017149 | orchestrator | Thursday 17 April 2025 01:57:24 +0000 (0:00:00.949) 0:00:31.844 ********
2025-04-17 01:59:15.017161 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:15.017173 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:59:15.017186 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:59:15.017198 | orchestrator |
2025-04-17 01:59:15.017210 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-04-17 01:59:15.017222 | orchestrator | Thursday 17 April 2025 01:57:25 +0000 (0:00:00.333) 0:00:32.177 ********
2025-04-17 01:59:15.017234 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-04-17 01:59:15.017246 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-04-17 01:59:15.017258 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-04-17 01:59:15.017270 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-04-17 01:59:15.017282 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-04-17 01:59:15.017294 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-04-17 01:59:15.017307 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-04-17 01:59:15.017319 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-04-17 01:59:15.017343 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-04-17 01:59:15.017356 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-04-17 01:59:15.017368 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-04-17 01:59:15.017380 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-04-17 01:59:15.017392 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-04-17 01:59:15.017419 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-04-17 01:59:15.017431 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-04-17 01:59:15.017444 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-04-17 01:59:15.017456 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-04-17 01:59:15.017468 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-04-17 01:59:15.017480 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-04-17 01:59:15.017493 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-04-17 01:59:15.017505 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-04-17 01:59:15.017517 | orchestrator |
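The files staged here implement kolla's fernet key lifecycle: cron inside the keystone_fernet container runs fernet-rotate.sh on a schedule generated above, and the sync/push scripts copy the keys in the keystone_fernet_tokens volume to the other controllers through the keystone_ssh containers (port 8023, matching the healthcheck_listen check in the service dicts). A minimal sketch of the rotation pieces, assuming stock keystone-manage options; the schedule is illustrative, since the real crontab is templated from crontab.j2 and the token expiry:

    # illustrative rendering of crontab.j2 / fernet-rotate.sh.j2, not this job's output
    - name: Stage fernet rotation crontab (sketch)
      ansible.builtin.copy:
        dest: /etc/kolla/keystone-fernet/crontab
        content: |
          0 0 * * * /usr/bin/fernet-rotate.sh

    - name: Stage fernet-rotate.sh (sketch of its core command)
      ansible.builtin.copy:
        dest: /etc/kolla/keystone-fernet/fernet-rotate.sh
        mode: "0755"
        content: |
          #!/bin/bash
          keystone-manage --config-file /etc/keystone/keystone.conf \
            fernet_rotate --keystone-user keystone --keystone-group keystone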
2025-04-17 01:59:15.017529 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-04-17 01:59:15.017541 | orchestrator | Thursday 17 April 2025 01:57:35 +0000 (0:00:10.033) 0:00:42.211 ********
2025-04-17 01:59:15.017553 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-04-17 01:59:15.017565 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-04-17 01:59:15.017577 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-04-17 01:59:15.017590 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-04-17 01:59:15.017602 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-04-17 01:59:15.017619 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-04-17 01:59:15.017632 | orchestrator |
2025-04-17 01:59:15.017644 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-04-17 01:59:15.017656 | orchestrator | Thursday 17 April 2025 01:57:38 +0000 (0:00:03.199) 0:00:45.410 ********
2025-04-17 01:59:15.017669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-04-17 01:59:15.017683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone',
'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-17 01:59:15.017704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-17 01:59:15.017718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-17 01:59:15.017744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-17 01:59:15.017758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-04-17 01:59:15.017771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-04-17 01:59:15.017790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-04-17 01:59:15.017804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-04-17 01:59:15.017816 | orchestrator |
2025-04-17 01:59:15.017828 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-04-17 01:59:15.017841 | orchestrator | Thursday 17 April 2025 01:57:41 +0000 (0:00:02.717) 0:00:48.127 ********
2025-04-17 01:59:15.017853 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:15.017865 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:59:15.017877 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:59:15.017889 | orchestrator |
2025-04-17 01:59:15.017902 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-04-17 01:59:15.017914 | orchestrator | Thursday 17 April 2025 01:57:41 +0000 (0:00:00.255) 0:00:48.383 ********
2025-04-17 01:59:15.017926 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:59:15.017939 | orchestrator |
2025-04-17 01:59:15.017950 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-04-17 01:59:15.017962 | orchestrator | Thursday 17 April 2025 01:57:43 +0000 (0:00:02.415) 0:00:50.798 ********
2025-04-17 01:59:15.017974 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:59:15.017987 | orchestrator |
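These two tasks provision the keystone schema database and its account on the MariaDB cluster. Kolla executes them through its kolla_toolbox container, so the following is only an equivalent standalone sketch; the login variables are hypothetical placeholders, not values from this deployment:

    # a minimal standalone sketch, not kolla's actual implementation
    - name: Create keystone database
      community.mysql.mysql_db:
        name: keystone
        login_host: "{{ database_address }}"          # assumption: VIP of the Galera cluster
        login_user: root
        login_password: "{{ database_password }}"     # hypothetical variable
    - name: Create keystone database user and grant permissions
      community.mysql.mysql_user:
        name: keystone
        password: "{{ keystone_database_password }}"  # hypothetical variable
        host: "%"
        priv: "keystone.*:ALL"
        login_host: "{{ database_address }}"
        login_user: root
        login_password: "{{ database_password }}"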
2025-04-17 01:59:15.017999 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-04-17 01:59:15.018011 | orchestrator | Thursday 17 April 2025 01:57:45 +0000 (0:00:02.199) 0:00:52.998 ********
2025-04-17 01:59:15.018056 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:59:15.018069 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:15.018081 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:59:15.018094 | orchestrator |
2025-04-17 01:59:15.018106 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-04-17 01:59:15.018118 | orchestrator | Thursday 17 April 2025 01:57:46 +0000 (0:00:00.882) 0:00:53.880 ********
2025-04-17 01:59:15.018131 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:15.018149 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:59:15.018162 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:59:15.018174 | orchestrator |
2025-04-17 01:59:15.018186 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-04-17 01:59:15.018199 | orchestrator | Thursday 17 April 2025 01:57:47 +0000 (0:00:00.295) 0:00:54.175 ********
2025-04-17 01:59:15.018218 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:15.018231 | orchestrator | skipping: [testbed-node-1]
2025-04-17 01:59:15.018243 | orchestrator | skipping: [testbed-node-2]
2025-04-17 01:59:15.018255 | orchestrator |
2025-04-17 01:59:15.018267 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-04-17 01:59:15.018279 | orchestrator | Thursday 17 April 2025 01:57:47 +0000 (0:00:00.548) 0:00:54.724 ********
2025-04-17 01:59:15.018291 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:59:15.018304 | orchestrator |
2025-04-17 01:59:15.018316 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-04-17 01:59:15.018328 | orchestrator | Thursday 17 April 2025 01:58:00 +0000 (0:00:12.567) 0:01:07.291 ********
2025-04-17 01:59:15.018340 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:59:15.018352 | orchestrator |
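The bootstrap containers are one-shot runs of keystone-manage on the first controller: the regular bootstrap seeds the admin identity and initial catalog entries, and the fernet bootstrap initializes the key repository shared via the keystone_fernet_tokens volume. A rough sketch of the underlying command, assuming stock keystone-manage flags; the password variable is hypothetical, while the region and URLs mirror the endpoints registered later in this play:

    # sketch only; kolla wraps this in a bootstrap container rather than a task
    - name: Bootstrap keystone (illustrative)
      ansible.builtin.command: >
        keystone-manage bootstrap
        --bootstrap-password {{ keystone_admin_password }}
        --bootstrap-role-name admin
        --bootstrap-project-name admin
        --bootstrap-region-id RegionOne
        --bootstrap-internal-url https://api-int.testbed.osism.xyz:5000
        --bootstrap-public-url https://api.testbed.osism.xyz:5000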
2025-04-17 01:59:15.018364 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-04-17 01:59:15.018377 | orchestrator | Thursday 17 April 2025 01:58:08 +0000 (0:00:08.679) 0:01:15.971 ********
2025-04-17 01:59:15.018389 | orchestrator |
2025-04-17 01:59:15.018415 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-04-17 01:59:15.018427 | orchestrator | Thursday 17 April 2025 01:58:08 +0000 (0:00:00.055) 0:01:16.026 ********
2025-04-17 01:59:15.018440 | orchestrator |
2025-04-17 01:59:15.018452 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-04-17 01:59:15.018465 | orchestrator | Thursday 17 April 2025 01:58:09 +0000 (0:00:00.052) 0:01:16.079 ********
2025-04-17 01:59:15.018477 | orchestrator |
2025-04-17 01:59:15.018489 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-04-17 01:59:15.018501 | orchestrator | Thursday 17 April 2025 01:58:09 +0000 (0:00:00.054) 0:01:16.134 ********
2025-04-17 01:59:15.018514 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:59:15.018526 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:59:15.018538 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:59:15.018550 | orchestrator |
2025-04-17 01:59:15.018562 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-04-17 01:59:15.018574 | orchestrator | Thursday 17 April 2025 01:58:21 +0000 (0:00:12.529) 0:01:28.663 ********
2025-04-17 01:59:15.018586 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:59:15.018598 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:59:15.018611 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:59:15.018623 | orchestrator |
2025-04-17 01:59:15.018635 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-04-17 01:59:15.018647 | orchestrator | Thursday 17 April 2025 01:58:26 +0000 (0:00:04.643) 0:01:33.307 ********
2025-04-17 01:59:15.018659 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:59:15.018671 | orchestrator | changed: [testbed-node-1]
2025-04-17 01:59:15.018684 | orchestrator | changed: [testbed-node-2]
2025-04-17 01:59:15.018696 | orchestrator |
2025-04-17 01:59:15.018708 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-04-17 01:59:15.018720 | orchestrator | Thursday 17 April 2025 01:58:31 +0000 (0:00:05.057) 0:01:38.365 ********
2025-04-17 01:59:15.018739 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-17 01:59:15.018752 | orchestrator |
2025-04-17 01:59:15.018764 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-04-17 01:59:15.018776 | orchestrator | Thursday 17 April 2025 01:58:32 +0000 (0:00:00.732) 0:01:39.098 ********
2025-04-17 01:59:15.018788 | orchestrator | ok: [testbed-node-1]
2025-04-17 01:59:15.018801 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:15.018813 | orchestrator | ok: [testbed-node-2]
2025-04-17 01:59:15.018825 | orchestrator |
2025-04-17 01:59:15.018842 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-04-17 01:59:15.018855 | orchestrator | Thursday 17 April 2025 01:58:33 +0000 (0:00:01.020) 0:01:40.118 ********
2025-04-17 01:59:15.018867 | orchestrator | changed: [testbed-node-0]
2025-04-17 01:59:15.018879 | orchestrator |
2025-04-17 01:59:15.018898 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-04-17 01:59:15.018911 | orchestrator | Thursday 17 April 2025 01:58:34 +0000 (0:00:01.492) 0:01:41.611 ********
2025-04-17 01:59:15.018923 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-04-17 01:59:15.018935 | orchestrator |
2025-04-17 01:59:15.018948 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-04-17 01:59:15.018960 | orchestrator | Thursday 17 April 2025 01:58:43 +0000 (0:00:09.233) 0:01:50.844 ********
2025-04-17 01:59:15.018972 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-04-17 01:59:15.018984 | orchestrator |
2025-04-17 01:59:15.018996 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-04-17 01:59:15.019009 | orchestrator | Thursday 17 April 2025 01:59:02 +0000 (0:00:18.735) 0:02:09.580 ********
2025-04-17 01:59:15.019021 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-04-17 01:59:15.019033 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-04-17 01:59:15.019045 | orchestrator |
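The two service-ks-register tasks above perform the catalog registration: one identity service plus internal and public endpoints on the FQDNs from the haproxy configuration. An equivalent formulation with the plain openstack client, assuming admin credentials are already in the environment; kolla performs this through its own modules rather than these commands:

    - name: Register keystone in the service catalog (illustrative equivalent)
      ansible.builtin.shell: |
        openstack service create --name keystone identity
        openstack endpoint create --region RegionOne identity internal https://api-int.testbed.osism.xyz:5000
        openstack endpoint create --region RegionOne identity public https://api.testbed.osism.xyz:5000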
2025-04-17 01:59:15.019033 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-04-17 01:59:15.019045 | orchestrator | 2025-04-17 01:59:15.019057 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-04-17 01:59:15.019069 | orchestrator | Thursday 17 April 2025 01:59:09 +0000 (0:00:06.658) 0:02:16.239 ******** 2025-04-17 01:59:15.019081 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:59:15.019094 | orchestrator | 2025-04-17 01:59:15.019106 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-04-17 01:59:15.019118 | orchestrator | Thursday 17 April 2025 01:59:09 +0000 (0:00:00.100) 0:02:16.339 ******** 2025-04-17 01:59:15.019130 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:59:15.019142 | orchestrator | 2025-04-17 01:59:15.019154 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-04-17 01:59:15.019173 | orchestrator | Thursday 17 April 2025 01:59:09 +0000 (0:00:00.115) 0:02:16.455 ******** 2025-04-17 01:59:15.022157 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:59:15.022185 | orchestrator | 2025-04-17 01:59:15.022196 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-04-17 01:59:15.022208 | orchestrator | Thursday 17 April 2025 01:59:09 +0000 (0:00:00.110) 0:02:16.566 ******** 2025-04-17 01:59:15.022219 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:59:15.022230 | orchestrator | 2025-04-17 01:59:15.022241 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-04-17 01:59:15.022252 | orchestrator | Thursday 17 April 2025 01:59:09 +0000 (0:00:00.389) 0:02:16.955 ******** 2025-04-17 01:59:15.022263 | orchestrator | ok: [testbed-node-0] 2025-04-17 01:59:15.022275 | orchestrator | 2025-04-17 01:59:15.022286 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-17 01:59:15.022297 | orchestrator | Thursday 17 April 2025 01:59:13 +0000 (0:00:03.158) 0:02:20.114 ******** 2025-04-17 01:59:15.022307 | orchestrator | skipping: [testbed-node-0] 2025-04-17 01:59:15.022327 | orchestrator | skipping: [testbed-node-1] 2025-04-17 01:59:15.022338 | orchestrator | skipping: [testbed-node-2] 2025-04-17 01:59:15.022349 | orchestrator | 2025-04-17 01:59:15.022360 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-17 01:59:15.022371 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-04-17 01:59:15.022383 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-04-17 01:59:15.022394 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-04-17 01:59:15.022422 | orchestrator | 2025-04-17 01:59:15.022434 | orchestrator | 2025-04-17 01:59:15.022445 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-17 01:59:15.022456 | orchestrator | Thursday 17 April 2025 01:59:13 +0000 (0:00:00.512) 0:02:20.626 ******** 2025-04-17 01:59:15.022477 | orchestrator | =============================================================================== 2025-04-17 01:59:15.022488 | orchestrator | service-ks-register : keystone | Creating services --------------------- 
18.74s 2025-04-17 01:59:15.022499 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.57s 2025-04-17 01:59:15.022511 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 12.53s 2025-04-17 01:59:15.022522 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 10.03s 2025-04-17 01:59:15.022533 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.23s 2025-04-17 01:59:15.022544 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 8.68s 2025-04-17 01:59:15.022555 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.66s 2025-04-17 01:59:15.022566 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.59s 2025-04-17 01:59:15.022586 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.06s 2025-04-17 01:59:15.022598 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.64s 2025-04-17 01:59:15.022609 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.27s 2025-04-17 01:59:15.022620 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.20s 2025-04-17 01:59:15.022631 | orchestrator | keystone : Creating default user role ----------------------------------- 3.16s 2025-04-17 01:59:15.022642 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 2.99s 2025-04-17 01:59:15.022653 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.72s 2025-04-17 01:59:15.022664 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.53s 2025-04-17 01:59:15.022675 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.42s 2025-04-17 01:59:15.022686 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.28s 2025-04-17 01:59:15.022697 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.20s 2025-04-17 01:59:15.022708 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.00s 2025-04-17 01:59:15.022719 | orchestrator | 2025-04-17 01:59:15 | INFO  | Task 4ab72e63-03fc-49b3-b222-8b258eb1c9bb is in state STARTED 2025-04-17 01:59:15.022730 | orchestrator | 2025-04-17 01:59:15 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED 2025-04-17 01:59:15.022747 | orchestrator | 2025-04-17 01:59:15 | INFO  | Task 1522883f-ac70-47e6-9b31-bc7e04adfdfb is in state STARTED 2025-04-17 01:59:18.073257 | orchestrator | 2025-04-17 01:59:15 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:59:18.073453 | orchestrator | 2025-04-17 01:59:18 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:59:18.074741 | orchestrator | 2025-04-17 01:59:18 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED 2025-04-17 01:59:18.076887 | orchestrator | 2025-04-17 01:59:18 | INFO  | Task 9bc82805-0d90-48ee-9d76-337cbc6de36d is in state STARTED 2025-04-17 01:59:18.077625 | orchestrator | 2025-04-17 01:59:18 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED 2025-04-17 01:59:18.078482 | orchestrator | 2025-04-17 01:59:18 | INFO  | Task 4ab72e63-03fc-49b3-b222-8b258eb1c9bb is in state 
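The "Creating services" and "Creating endpoints" tasks above register Keystone itself in the service catalog, one endpoint per interface. The following Python sketch shows the same registration with openstacksdk; the cloud name "testbed" and the idempotency handling are illustrative assumptions, not the code kolla-ansible's registration module actually runs.

import openstack  # openstacksdk

# Assumes a clouds.yaml entry named "testbed" with admin credentials.
conn = openstack.connect(cloud="testbed")

# Register the service if it does not exist yet.
service = conn.identity.find_service("keystone")
if service is None:
    service = conn.identity.create_service(name="keystone", type="identity")

# One endpoint per interface, mirroring the two items logged above.
# (A real run would first check for an existing endpoint before creating one.)
for interface, url in {
    "internal": "https://api-int.testbed.osism.xyz:5000",
    "public": "https://api.testbed.osism.xyz:5000",
}.items():
    conn.identity.create_endpoint(
        service_id=service.id,
        interface=interface,
        url=url,
        region_id="RegionOne",
    )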
2025-04-17 01:59:15.022719 | orchestrator | 2025-04-17 01:59:15 | INFO  | Task 4ab72e63-03fc-49b3-b222-8b258eb1c9bb is in state STARTED
2025-04-17 01:59:15.022730 | orchestrator | 2025-04-17 01:59:15 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED
2025-04-17 01:59:15.022747 | orchestrator | 2025-04-17 01:59:15 | INFO  | Task 1522883f-ac70-47e6-9b31-bc7e04adfdfb is in state STARTED
2025-04-17 01:59:18.073257 | orchestrator | 2025-04-17 01:59:15 | INFO  | Wait 1 second(s) until the next check
2025-04-17 01:59:18.073453 | orchestrator | 2025-04-17 01:59:18 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:59:18.074741 | orchestrator | 2025-04-17 01:59:18 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED
2025-04-17 01:59:18.076887 | orchestrator | 2025-04-17 01:59:18 | INFO  | Task 9bc82805-0d90-48ee-9d76-337cbc6de36d is in state STARTED
2025-04-17 01:59:18.077625 | orchestrator | 2025-04-17 01:59:18 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED
2025-04-17 01:59:18.078482 | orchestrator | 2025-04-17 01:59:18 | INFO  | Task 4ab72e63-03fc-49b3-b222-8b258eb1c9bb is in state STARTED
2025-04-17 01:59:18.079305 | orchestrator | 2025-04-17 01:59:18 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED
2025-04-17 01:59:18.081914 | orchestrator | 2025-04-17 01:59:18 | INFO  | Task 1522883f-ac70-47e6-9b31-bc7e04adfdfb is in state STARTED
2025-04-17 01:59:21.118436 | orchestrator | 2025-04-17 01:59:18 | INFO  | Wait 1 second(s) until the next check
2025-04-17 01:59:21.118571 | orchestrator | 2025-04-17 01:59:21 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:59:21.121997 | orchestrator | 2025-04-17 01:59:21 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED
2025-04-17 01:59:21.122580 | orchestrator | 2025-04-17 01:59:21 | INFO  | Task 9bc82805-0d90-48ee-9d76-337cbc6de36d is in state STARTED
2025-04-17 01:59:21.123216 | orchestrator | 2025-04-17 01:59:21 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED
2025-04-17 01:59:21.124105 | orchestrator | 2025-04-17 01:59:21 | INFO  | Task 4ab72e63-03fc-49b3-b222-8b258eb1c9bb is in state STARTED
2025-04-17 01:59:21.124993 | orchestrator | 2025-04-17 01:59:21 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED
2025-04-17 01:59:21.126160 | orchestrator | 2025-04-17 01:59:21 | INFO  | Task 1522883f-ac70-47e6-9b31-bc7e04adfdfb is in state STARTED
2025-04-17 01:59:24.159718 | orchestrator | 2025-04-17 01:59:21 | INFO  | Wait 1 second(s) until the next check
2025-04-17 01:59:24.159863 | orchestrator | 2025-04-17 01:59:24 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:59:24.161848 | orchestrator | 2025-04-17 01:59:24 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED
2025-04-17 01:59:24.163359 | orchestrator | 2025-04-17 01:59:24 | INFO  | Task 9bc82805-0d90-48ee-9d76-337cbc6de36d is in state STARTED
2025-04-17 01:59:24.165255 | orchestrator | 2025-04-17 01:59:24 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED
2025-04-17 01:59:24.166651 | orchestrator | 2025-04-17 01:59:24 | INFO  | Task 4ab72e63-03fc-49b3-b222-8b258eb1c9bb is in state STARTED
2025-04-17 01:59:24.167963 | orchestrator | 2025-04-17 01:59:24 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED
2025-04-17 01:59:24.169350 | orchestrator | 2025-04-17 01:59:24 | INFO  | Task 1522883f-ac70-47e6-9b31-bc7e04adfdfb is in state STARTED
2025-04-17 01:59:27.220637 | orchestrator | 2025-04-17 01:59:24 | INFO  | Wait 1 second(s) until the next check
2025-04-17 01:59:27.220786 | orchestrator | 2025-04-17 01:59:27 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:59:27.229601 | orchestrator | 2025-04-17 01:59:27 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED
2025-04-17 01:59:27.231416 | orchestrator | 2025-04-17 01:59:27 | INFO  | Task 9bc82805-0d90-48ee-9d76-337cbc6de36d is in state STARTED
2025-04-17 01:59:27.231451 | orchestrator | 2025-04-17 01:59:27 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED
2025-04-17 01:59:27.233017 | orchestrator | 2025-04-17 01:59:27 | INFO  | Task 4ab72e63-03fc-49b3-b222-8b258eb1c9bb is in state STARTED
2025-04-17 01:59:27.235295 | orchestrator | 2025-04-17 01:59:27 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED
2025-04-17 01:59:27.237563 | orchestrator | 2025-04-17 01:59:27 | INFO  | Task 1522883f-ac70-47e6-9b31-bc7e04adfdfb is in state STARTED
2025-04-17 01:59:30.286256 | orchestrator | 2025-04-17 01:59:27 | INFO  | Wait 1 second(s) until the next check
2025-04-17 01:59:30.286481 | orchestrator | 2025-04-17 01:59:30 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:59:30.287141 | orchestrator | 2025-04-17 01:59:30 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED
2025-04-17 01:59:30.288323 | orchestrator | 2025-04-17 01:59:30 | INFO  | Task 9bc82805-0d90-48ee-9d76-337cbc6de36d is in state STARTED
2025-04-17 01:59:30.289229 | orchestrator | 2025-04-17 01:59:30 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED
2025-04-17 01:59:30.290305 | orchestrator | 2025-04-17 01:59:30 | INFO  | Task 4ab72e63-03fc-49b3-b222-8b258eb1c9bb is in state SUCCESS
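The interleaved INFO lines come from the OSISM client polling its task queue: each cycle prints the state of every outstanding task, sleeps one second, and repeats until a task flips to SUCCESS, as 4ab72e63 just did. A minimal sketch of that loop, with get_task_state standing in for whatever the real task broker (Celery, in OSISM's case) exposes; it is not an actual osism API:

import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll task states until none are left running."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, so discard below is safe
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)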
2025-04-17 01:59:30.292050 | orchestrator |
2025-04-17 01:59:30.292090 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-04-17 01:59:30.292106 | orchestrator |
2025-04-17 01:59:30.292120 | orchestrator | PLAY [Apply role fetch-keys] ***************************************************
2025-04-17 01:59:30.292135 | orchestrator |
2025-04-17 01:59:30.292150 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ********
2025-04-17 01:59:30.292164 | orchestrator | Thursday 17 April 2025 01:59:01 +0000 (0:00:00.448) 0:00:00.448 ********
2025-04-17 01:59:30.292178 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0
2025-04-17 01:59:30.292194 | orchestrator |
2025-04-17 01:59:30.292208 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] *****************
2025-04-17 01:59:30.292222 | orchestrator | Thursday 17 April 2025 01:59:01 +0000 (0:00:00.211) 0:00:00.660 ********
2025-04-17 01:59:30.292236 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-04-17 01:59:30.292251 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1)
2025-04-17 01:59:30.292264 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2)
2025-04-17 01:59:30.292278 | orchestrator |
2025-04-17 01:59:30.292292 | orchestrator | TASK [ceph-facts : include facts.yml] ******************************************
2025-04-17 01:59:30.292306 | orchestrator | Thursday 17 April 2025 01:59:02 +0000 (0:00:00.948) 0:00:01.609 ********
2025-04-17 01:59:30.292320 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2025-04-17 01:59:30.292333 | orchestrator |
2025-04-17 01:59:30.292347 | orchestrator | TASK [ceph-facts : check if it is atomic host] *********************************
2025-04-17 01:59:30.292361 | orchestrator | Thursday 17 April 2025 01:59:03 +0000 (0:00:00.617) 0:00:01.845 ********
2025-04-17 01:59:30.292375 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:30.292423 | orchestrator |
2025-04-17 01:59:30.292438 | orchestrator | TASK [ceph-facts : set_fact is_atomic] *****************************************
2025-04-17 01:59:30.292452 | orchestrator | Thursday 17 April 2025 01:59:03 +0000 (0:00:00.126) 0:00:02.462 ********
2025-04-17 01:59:30.292465 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:30.292480 | orchestrator |
2025-04-17 01:59:30.292493 | orchestrator | TASK [ceph-facts : check if podman binary is present] **************************
2025-04-17 01:59:30.292507 | orchestrator | Thursday 17 April 2025 01:59:03 +0000 (0:00:00.445) 0:00:02.589 ********
2025-04-17 01:59:30.292521 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:30.292535 | orchestrator |
2025-04-17 01:59:30.292549 | orchestrator | TASK [ceph-facts : set_fact container_binary] **********************************
2025-04-17 01:59:30.292563 | orchestrator | Thursday 17 April 2025 01:59:04 +0000 (0:00:00.445) 0:00:03.035 ********
2025-04-17 01:59:30.292726 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:30.292746 | orchestrator |
2025-04-17 01:59:30.292762 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ******************************************
2025-04-17 01:59:30.292777 | orchestrator | Thursday 17 April 2025 01:59:04 +0000 (0:00:00.146) 0:00:03.181 ********
2025-04-17 01:59:30.292794 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:30.292810 | orchestrator |
2025-04-17 01:59:30.292841 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] *********************
2025-04-17 01:59:30.292858 | orchestrator | Thursday 17 April 2025 01:59:04 +0000 (0:00:00.136) 0:00:03.318 ********
2025-04-17 01:59:30.292873 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:30.292888 | orchestrator |
2025-04-17 01:59:30.292903 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] ***
2025-04-17 01:59:30.292919 | orchestrator | Thursday 17 April 2025 01:59:04 +0000 (0:00:00.143) 0:00:03.461 ********
2025-04-17 01:59:30.292935 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.292950 | orchestrator |
2025-04-17 01:59:30.292966 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ******************
2025-04-17 01:59:30.292982 | orchestrator | Thursday 17 April 2025 01:59:04 +0000 (0:00:00.142) 0:00:03.604 ********
2025-04-17 01:59:30.293012 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:30.293026 | orchestrator |
2025-04-17 01:59:30.293040 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************
2025-04-17 01:59:30.293054 | orchestrator | Thursday 17 April 2025 01:59:05 +0000 (0:00:00.276) 0:00:03.881 ********
2025-04-17 01:59:30.293068 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-04-17 01:59:30.293082 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-04-17 01:59:30.293095 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-04-17 01:59:30.293109 | orchestrator |
2025-04-17 01:59:30.293123 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ********************************
2025-04-17 01:59:30.293137 | orchestrator | Thursday 17 April 2025 01:59:05 +0000 (0:00:00.744) 0:00:04.625 ********
2025-04-17 01:59:30.293151 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:30.293165 | orchestrator |
2025-04-17 01:59:30.293179 | orchestrator | TASK [ceph-facts : find a running mon container] *******************************
2025-04-17 01:59:30.293192 | orchestrator | Thursday 17 April 2025 01:59:06 +0000 (0:00:00.267) 0:00:04.893 ********
2025-04-17 01:59:30.293206 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-04-17 01:59:30.293220 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-04-17 01:59:30.293234 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-04-17 01:59:30.293248 | orchestrator |
2025-04-17 01:59:30.293262 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ********************************
2025-04-17 01:59:30.293276 | orchestrator | Thursday 17 April 2025 01:59:08 +0000 (0:00:01.864) 0:00:06.757 ********
2025-04-17 01:59:30.293290 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-17 01:59:30.293303 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-17 01:59:30.293317 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-17 01:59:30.293331 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.293345 | orchestrator |
2025-04-17 01:59:30.293359 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] *********************
2025-04-17 01:59:30.293384 | orchestrator | Thursday 17 April 2025 01:59:08 +0000 (0:00:00.409) 0:00:07.167 ********
2025-04-17 01:59:30.293431 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-04-17 01:59:30.293448 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-04-17 01:59:30.293462 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-04-17 01:59:30.293476 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.293490 | orchestrator |
2025-04-17 01:59:30.293504 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] ***********************
2025-04-17 01:59:30.293518 | orchestrator | Thursday 17 April 2025 01:59:09 +0000 (0:00:00.784) 0:00:07.952 ********
2025-04-17 01:59:30.293533 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-04-17 01:59:30.293548 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-04-17 01:59:30.293571 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-04-17 01:59:30.293585 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.293599 | orchestrator |
2025-04-17 01:59:30.293613 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] ***************************
2025-04-17 01:59:30.293627 | orchestrator | Thursday 17 April 2025 01:59:09 +0000 (0:00:00.157) 0:00:08.109 ********
2025-04-17 01:59:30.293644 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '2610f60fc191', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-04-17 01:59:06.835841', 'end': '2025-04-17 01:59:06.880139', 'delta': '0:00:00.044298', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2610f60fc191'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-04-17 01:59:30.293662 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '2f25bd162154', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-04-17 01:59:07.360432', 'end': '2025-04-17 01:59:07.401888', 'delta': '0:00:00.041456', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2f25bd162154'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-04-17 01:59:30.293687 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'b07debf87bfa', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-04-17 01:59:07.900207', 'end': '2025-04-17 01:59:07.944662', 'delta': '0:00:00.044455', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b07debf87bfa'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
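The "find a running mon container" results record the exact discovery command: docker ps -q --filter name=ceph-mon-<hostname>, run once per monitor node, with the container ID captured from stdout. The same probe in a few lines of Python — illustrative, not ceph-ansible's implementation:

import subprocess

def find_running_mon(hostname):
    """Return the container ID of a running ceph-mon for this host, or None.

    Mirrors the command recorded in the task results above.
    """
    result = subprocess.run(
        ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True, text=True, check=True,
    )
    container_id = result.stdout.strip()
    return container_id or None

for node in ["testbed-node-0", "testbed-node-1", "testbed-node-2"]:
    print(node, find_running_mon(node))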
2025-04-17 01:59:30.293702 | orchestrator |
2025-04-17 01:59:30.293717 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] *******************************
2025-04-17 01:59:30.293731 | orchestrator | Thursday 17 April 2025 01:59:09 +0000 (0:00:00.178) 0:00:08.288 ********
2025-04-17 01:59:30.293744 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:30.293758 | orchestrator |
2025-04-17 01:59:30.293772 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] *************
2025-04-17 01:59:30.293785 | orchestrator | Thursday 17 April 2025 01:59:09 +0000 (0:00:00.236) 0:00:08.524 ********
2025-04-17 01:59:30.293799 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2025-04-17 01:59:30.293820 | orchestrator |
2025-04-17 01:59:30.293834 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] *********************************
2025-04-17 01:59:30.293847 | orchestrator | Thursday 17 April 2025 01:59:11 +0000 (0:00:01.513) 0:00:10.038 ********
2025-04-17 01:59:30.293861 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.293874 | orchestrator |
2025-04-17 01:59:30.293888 | orchestrator | TASK [ceph-facts : get current fsid] *******************************************
2025-04-17 01:59:30.293902 | orchestrator | Thursday 17 April 2025 01:59:11 +0000 (0:00:00.148) 0:00:10.186 ********
2025-04-17 01:59:30.293915 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.293929 | orchestrator |
2025-04-17 01:59:30.293948 | orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
2025-04-17 01:59:30.293962 | orchestrator | Thursday 17 April 2025 01:59:11 +0000 (0:00:00.210) 0:00:10.397 ********
2025-04-17 01:59:30.293976 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.293989 | orchestrator |
2025-04-17 01:59:30.294003 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] ****************************
2025-04-17 01:59:30.294087 | orchestrator | Thursday 17 April 2025 01:59:11 +0000 (0:00:00.137) 0:00:10.534 ********
2025-04-17 01:59:30.294107 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:30.294122 | orchestrator |
2025-04-17 01:59:30.294135 | orchestrator | TASK [ceph-facts : generate cluster fsid] **************************************
2025-04-17 01:59:30.294150 | orchestrator | Thursday 17 April 2025 01:59:11 +0000 (0:00:00.139) 0:00:10.674 ********
2025-04-17 01:59:30.294163 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.294177 | orchestrator |
2025-04-17 01:59:30.294191 | orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
2025-04-17 01:59:30.294205 | orchestrator | Thursday 17 April 2025 01:59:12 +0000 (0:00:00.220) 0:00:10.894 ********
2025-04-17 01:59:30.294218 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.294232 | orchestrator |
2025-04-17 01:59:30.294246 | orchestrator | TASK [ceph-facts : resolve device link(s)] *************************************
2025-04-17 01:59:30.294260 | orchestrator | Thursday 17 April 2025 01:59:12 +0000 (0:00:00.133) 0:00:11.028 ********
2025-04-17 01:59:30.294274 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.294287 | orchestrator |
2025-04-17 01:59:30.294301 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] **************
2025-04-17 01:59:30.294315 | orchestrator | Thursday 17 April 2025 01:59:12 +0000 (0:00:00.124) 0:00:11.152 ********
2025-04-17 01:59:30.294329 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.294343 | orchestrator |
2025-04-17 01:59:30.294357 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] ***************************
2025-04-17 01:59:30.294370 | orchestrator | Thursday 17 April 2025 01:59:12 +0000 (0:00:00.136) 0:00:11.289 ********
2025-04-17 01:59:30.294401 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.294416 | orchestrator |
2025-04-17 01:59:30.294430 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] ****
2025-04-17 01:59:30.294444 | orchestrator | Thursday 17 April 2025 01:59:12 +0000 (0:00:00.113) 0:00:11.402 ********
2025-04-17 01:59:30.294458 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.294472 | orchestrator |
2025-04-17 01:59:30.294485 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] ***********************
2025-04-17 01:59:30.294499 | orchestrator | Thursday 17 April 2025 01:59:13 +0000 (0:00:00.334) 0:00:11.737 ********
2025-04-17 01:59:30.294513 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.294527 | orchestrator |
2025-04-17 01:59:30.294540 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-04-17 01:59:30.294554 | orchestrator | Thursday 17 April 2025 01:59:13 +0000 (0:00:00.135) 0:00:11.873 ********
2025-04-17 01:59:30.294568 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.294581 | orchestrator |
2025-04-17 01:59:30.294595 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] ***
2025-04-17 01:59:30.294609 | orchestrator | Thursday 17 April 2025 01:59:13 +0000 (0:00:00.128) 0:00:12.001 ********
2025-04-17 01:59:30.294623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-17 01:59:30.294655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-17 01:59:30.294671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-17 01:59:30.294685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-17 01:59:30.294706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-17 01:59:30.294720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-17 01:59:30.294735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-17 01:59:30.294749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-04-17 01:59:30.294775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca', 'scsi-SQEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca-part1', 'scsi-SQEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca-part14', 'scsi-SQEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca-part15', 'scsi-SQEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca-part16', 'scsi-SQEMU_QEMU_HARDDISK_2ccd186f-c2c3-4fc5-a7e7-da9be8ad3fca-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-17 01:59:30.294799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e4fe9eb-5e43-4aa2-9b37-d2398fe01f7b', 'scsi-SQEMU_QEMU_HARDDISK_6e4fe9eb-5e43-4aa2-9b37-d2398fe01f7b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-17 01:59:30.294816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29eb77c3-a4eb-47de-bcfc-90cea0292ee8', 'scsi-SQEMU_QEMU_HARDDISK_29eb77c3-a4eb-47de-bcfc-90cea0292ee8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-17 01:59:30.294830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d7eac16-cb9a-452c-8088-f21cbc7102b1', 'scsi-SQEMU_QEMU_HARDDISK_7d7eac16-cb9a-452c-8088-f21cbc7102b1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-17 01:59:30.294845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-17-00-02-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-04-17 01:59:30.294867 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.294881 | orchestrator |
2025-04-17 01:59:30.294895 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************
2025-04-17 01:59:30.294909 | orchestrator | Thursday 17 April 2025 01:59:13 +0000 (0:00:00.284) 0:00:12.286 ********
2025-04-17 01:59:30.294923 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.294936 | orchestrator |
2025-04-17 01:59:30.294950 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] *******************************
2025-04-17 01:59:30.294964 | orchestrator | Thursday 17 April 2025 01:59:13 +0000 (0:00:00.254) 0:00:12.540 ********
2025-04-17 01:59:30.294977 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.294991 | orchestrator |
2025-04-17 01:59:30.295005 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] **************************************
2025-04-17 01:59:30.295018 | orchestrator | Thursday 17 April 2025 01:59:13 +0000 (0:00:00.121) 0:00:12.662 ********
2025-04-17 01:59:30.295032 | orchestrator | skipping: [testbed-node-0]
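The skipped osd_auto_discovery items above expose the raw ansible_facts['devices'] entries the role would filter: loop devices with a size of 0.00 Bytes, the partitioned root disk sda, the removable sr0, and three bare 20 GB disks (sdb, sdc, sdd). A simplified filter over those facts is sketched below; the real ceph-ansible condition set is longer than this.

def candidate_osd_devices(devices):
    """Filter ansible_facts['devices'] down to disks that could back an OSD."""
    candidates = []
    for name, info in devices.items():
        if info.get("removable") != "0":
            continue  # skip removable media such as sr0
        if info.get("partitions"):
            continue  # skip already-partitioned disks, e.g. the root disk sda
        if info.get("size", "").endswith("Bytes"):
            continue  # skip zero-sized loop devices (loop0..loop7)
        candidates.append(f"/dev/{name}")
    return candidates

# With the facts shown above this would leave /dev/sdb, /dev/sdc and /dev/sdd.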
2025-04-17 01:59:30.295046 | orchestrator |
2025-04-17 01:59:30.295059 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ******************************
2025-04-17 01:59:30.295073 | orchestrator | Thursday 17 April 2025 01:59:14 +0000 (0:00:00.162) 0:00:12.824 ********
2025-04-17 01:59:30.295093 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:30.295107 | orchestrator |
2025-04-17 01:59:30.295121 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] ***************
2025-04-17 01:59:30.295134 | orchestrator | Thursday 17 April 2025 01:59:14 +0000 (0:00:00.504) 0:00:13.328 ********
2025-04-17 01:59:30.295148 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:30.295162 | orchestrator |
2025-04-17 01:59:30.295181 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-04-17 01:59:30.295195 | orchestrator | Thursday 17 April 2025 01:59:14 +0000 (0:00:00.120) 0:00:13.449 ********
2025-04-17 01:59:30.295209 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:30.295223 | orchestrator |
2025-04-17 01:59:30.295237 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-04-17 01:59:30.295250 | orchestrator | Thursday 17 April 2025 01:59:16 +0000 (0:00:01.472) 0:00:14.922 ********
2025-04-17 01:59:30.295264 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:30.295278 | orchestrator |
2025-04-17 01:59:30.295292 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-04-17 01:59:30.295305 | orchestrator | Thursday 17 April 2025 01:59:16 +0000 (0:00:00.360) 0:00:15.282 ********
2025-04-17 01:59:30.295319 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.295333 | orchestrator |
2025-04-17 01:59:30.295346 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-04-17 01:59:30.295360 | orchestrator | Thursday 17 April 2025 01:59:16 +0000 (0:00:00.253) 0:00:15.536 ********
2025-04-17 01:59:30.295373 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.295440 | orchestrator |
2025-04-17 01:59:30.295456 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] ***
2025-04-17 01:59:30.295470 | orchestrator | Thursday 17 April 2025 01:59:16 +0000 (0:00:00.143) 0:00:15.679 ********
2025-04-17 01:59:30.295484 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-17 01:59:30.295498 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-17 01:59:30.295512 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-17 01:59:30.295525 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.295539 | orchestrator |
2025-04-17 01:59:30.295553 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] ***
2025-04-17 01:59:30.295567 | orchestrator | Thursday 17 April 2025 01:59:17 +0000 (0:00:00.539) 0:00:16.219 ********
2025-04-17 01:59:30.295580 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-17 01:59:30.295595 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-17 01:59:30.295608 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-17 01:59:30.295631 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.295645 | orchestrator |
2025-04-17 01:59:30.295659 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] *************
2025-04-17 01:59:30.295673 | orchestrator | Thursday 17 April 2025 01:59:17 +0000 (0:00:00.463) 0:00:16.683 ********
2025-04-17 01:59:30.295687 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-04-17 01:59:30.295701 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-04-17 01:59:30.295715 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-04-17 01:59:30.295728 | orchestrator |
2025-04-17 01:59:30.295742 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] ****
2025-04-17 01:59:30.295756 | orchestrator | Thursday 17 April 2025 01:59:19 +0000 (0:00:01.087) 0:00:17.771 ********
2025-04-17 01:59:30.295770 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-17 01:59:30.295783 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-17 01:59:30.295797 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-17 01:59:30.295811 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.295825 | orchestrator |
2025-04-17 01:59:30.295839 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] ****
2025-04-17 01:59:30.295853 | orchestrator | Thursday 17 April 2025 01:59:19 +0000 (0:00:00.195) 0:00:17.966 ********
2025-04-17 01:59:30.295867 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-04-17 01:59:30.295881 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-04-17 01:59:30.295895 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-04-17 01:59:30.295908 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.295922 | orchestrator |
2025-04-17 01:59:30.295935 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] **************************
2025-04-17 01:59:30.295949 | orchestrator | Thursday 17 April 2025 01:59:19 +0000 (0:00:00.215) 0:00:18.182 ********
2025-04-17 01:59:30.295962 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-04-17 01:59:30.295974 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-04-17 01:59:30.295986 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-04-17 01:59:30.295999 | orchestrator |
2025-04-17 01:59:30.296011 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] ***********************
2025-04-17 01:59:30.296023 | orchestrator | Thursday 17 April 2025 01:59:19 +0000 (0:00:00.162) 0:00:18.344 ********
2025-04-17 01:59:30.296035 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.296047 | orchestrator |
2025-04-17 01:59:30.296060 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] ***
2025-04-17 01:59:30.296072 | orchestrator | Thursday 17 April 2025 01:59:19 +0000 (0:00:00.221) 0:00:18.565 ********
2025-04-17 01:59:30.296084 | orchestrator | skipping: [testbed-node-0]
2025-04-17 01:59:30.296097 | orchestrator |
2025-04-17 01:59:30.296109 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] **************************************
2025-04-17 01:59:30.296121 | orchestrator | Thursday 17 April 2025 01:59:19 +0000 (0:00:00.108) 0:00:18.674 ********
2025-04-17 01:59:30.296133 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-04-17 01:59:30.296151 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-04-17 01:59:30.296164 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-04-17 01:59:30.296176 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-04-17 01:59:30.296189 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-04-17 01:59:30.296201 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-04-17 01:59:30.296218 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-04-17 01:59:30.296231 | orchestrator |
2025-04-17 01:59:30.296255 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ********************************
2025-04-17 01:59:30.296268 | orchestrator | Thursday 17 April 2025 01:59:20 +0000 (0:00:00.725) 0:00:19.399 ********
2025-04-17 01:59:30.296280 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-04-17 01:59:30.296293 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-04-17 01:59:30.296305 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-04-17 01:59:30.296317 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-04-17 01:59:30.296329 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-04-17 01:59:30.296342 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-04-17 01:59:30.296354 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
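ceph_run_cmd and ceph_admin_command are per-host command prefixes for invoking the ceph CLI; on this containerized deployment they presumably wrap the CLI in a docker exec against a ceph-mon container, as the earlier docker ps discovery suggests. A rough illustration of building such a map follows; the authoritative definitions live in ceph-ansible's facts.yml and may differ from this assumed shape.

def ceph_run_cmd(hostname, container_binary="docker", cluster="ceph"):
    # Assumed shape: run the ceph CLI inside the host's ceph-mon container.
    return f"{container_binary} exec ceph-mon-{hostname} ceph --cluster {cluster}"

hosts = ["testbed-node-0", "testbed-node-1", "testbed-node-2",
         "testbed-node-3", "testbed-node-4", "testbed-node-5",
         "testbed-manager"]
ceph_admin_command = {host: ceph_run_cmd(host) for host in hosts}
print(ceph_admin_command["testbed-node-0"])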
2025-04-17 01:59:30.296366 | orchestrator |
2025-04-17 01:59:30.296378 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ******************************
2025-04-17 01:59:30.296435 | orchestrator | Thursday 17 April 2025 01:59:22 +0000 (0:00:01.368) 0:00:20.768 ********
2025-04-17 01:59:30.296448 | orchestrator | ok: [testbed-node-0]
2025-04-17 01:59:30.296461 | orchestrator |
2025-04-17 01:59:30.296473 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] ***
2025-04-17 01:59:30.296486 | orchestrator | Thursday 17 April 2025 01:59:22 +0000 (0:00:00.402) 0:00:21.171 ********
2025-04-17 01:59:30.296498 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-04-17 01:59:30.296511 | orchestrator |
2025-04-17 01:59:30.296523 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] ***
2025-04-17 01:59:30.296536 | orchestrator | Thursday 17 April 2025 01:59:23 +0000 (0:00:00.584) 0:00:21.756 ********
2025-04-17 01:59:30.296548 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring)
2025-04-17 01:59:30.296560 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring)
2025-04-17 01:59:30.296572 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring)
2025-04-17 01:59:30.296585 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring)
2025-04-17 01:59:30.296597 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring)
2025-04-17 01:59:30.296609 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring)
2025-04-17 01:59:30.296620 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring)
2025-04-17 01:59:30.296630 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring)
2025-04-17 01:59:30.296640 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring)
2025-04-17 01:59:30.296650 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring)
2025-04-17 01:59:30.296660 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring)
2025-04-17 01:59:30.296670 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring)
2025-04-17 01:59:30.296680 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring)
2025-04-17 01:59:30.296690 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring)
2025-04-17 01:59:30.296700 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring)
2025-04-17 01:59:30.296710 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring)
2025-04-17 01:59:30.296720 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring)
2025-04-17 01:59:30.296730 | orchestrator |
2025-04-17 01:59:30.296740 | orchestrator | PLAY RECAP *********************************************************************
2025-04-17 01:59:30.296750 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-04-17 01:59:30.296768 | orchestrator |
2025-04-17 01:59:30.296778 | orchestrator |
2025-04-17 01:59:30.296788 | orchestrator |
2025-04-17 01:59:30.296803 | orchestrator | TASKS RECAP ********************************************************************
2025-04-17 01:59:30.296813 | orchestrator | Thursday 17 April 2025 01:59:29 +0000 (0:00:06.073) 0:00:27.829 ********
2025-04-17 01:59:30.296823 | orchestrator | ===============================================================================
2025-04-17 01:59:30.296833 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 6.07s
2025-04-17 01:59:30.296844 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.86s
2025-04-17 01:59:30.296854 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.51s
2025-04-17 01:59:30.296869 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 1.47s
2025-04-17 01:59:33.346178 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.37s
2025-04-17 01:59:33.346310 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.09s
2025-04-17 01:59:33.346330 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.95s
2025-04-17 01:59:33.346345 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.78s
2025-04-17 01:59:33.346359 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.74s
2025-04-17 01:59:33.346373 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.73s
2025-04-17 01:59:33.346446 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.62s
2025-04-17 01:59:33.346462 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.58s
2025-04-17 01:59:33.346476 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.54s
2025-04-17 01:59:33.346490 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.50s
2025-04-17 01:59:33.346504 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.46s
2025-04-17 01:59:33.346518 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.45s
2025-04-17 01:59:33.346532 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.41s
2025-04-17 01:59:33.346546 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.40s
2025-04-17 01:59:33.346559 | orchestrator | ceph-facts : set osd_pool_default_crush_rule fact ----------------------- 0.36s
2025-04-17 01:59:33.346573 | orchestrator | ceph-facts : set_fact build dedicated_devices from resolved symlinks ---- 0.33s
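The 6.07s leader in the recap above is the ceph-fetch-keys copy task, which pulls every keyring found in /etc/ceph (plus the bootstrap keyrings under /var/lib/ceph) onto the Ansible server below /share/11111111-1111-1111-1111-111111111111/. Reduced to its core, the lookup-then-copy pattern looks like the following local-copy sketch; the real role fetches over the Ansible connection rather than copying on one filesystem.

import pathlib
import shutil

# The share path comes from the task name above; everything else is a sketch.
share = pathlib.Path("/share/11111111-1111-1111-1111-111111111111")
share.mkdir(parents=True, exist_ok=True)

# "lookup keys in /etc/ceph" followed by the copy task, in miniature. The real
# role also fetches the bootstrap keyrings under /var/lib/ceph/bootstrap-*/.
for keyring in sorted(pathlib.Path("/etc/ceph").glob("ceph.*.keyring")):
    shutil.copy2(keyring, share / keyring.name)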
01:59:39.473363 | orchestrator | 2025-04-17 01:59:39 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:59:39.473928 | orchestrator | 2025-04-17 01:59:39 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED 2025-04-17 01:59:39.473963 | orchestrator | 2025-04-17 01:59:39 | INFO  | Task 75ffeb31-d9ca-453c-9a25-0e256f92dcb5 is in state STARTED 2025-04-17 01:59:39.476891 | orchestrator | 2025-04-17 01:59:39 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED 2025-04-17 01:59:39.477774 | orchestrator | 2025-04-17 01:59:39 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED 2025-04-17 01:59:39.479800 | orchestrator | 2025-04-17 01:59:39 | INFO  | Task 1522883f-ac70-47e6-9b31-bc7e04adfdfb is in state STARTED 2025-04-17 01:59:42.541809 | orchestrator | 2025-04-17 01:59:39 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:59:42.541958 | orchestrator | 2025-04-17 01:59:42 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:59:42.543072 | orchestrator | 2025-04-17 01:59:42 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED 2025-04-17 01:59:42.544129 | orchestrator | 2025-04-17 01:59:42 | INFO  | Task 75ffeb31-d9ca-453c-9a25-0e256f92dcb5 is in state STARTED 2025-04-17 01:59:42.545633 | orchestrator | 2025-04-17 01:59:42 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED 2025-04-17 01:59:42.546977 | orchestrator | 2025-04-17 01:59:42 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED 2025-04-17 01:59:42.549014 | orchestrator | 2025-04-17 01:59:42 | INFO  | Task 1522883f-ac70-47e6-9b31-bc7e04adfdfb is in state STARTED 2025-04-17 01:59:45.595269 | orchestrator | 2025-04-17 01:59:42 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:59:45.595644 | orchestrator | 2025-04-17 01:59:45 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:59:45.598146 | orchestrator | 2025-04-17 01:59:45 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED 2025-04-17 01:59:45.598198 | orchestrator | 2025-04-17 01:59:45 | INFO  | Task 75ffeb31-d9ca-453c-9a25-0e256f92dcb5 is in state STARTED 2025-04-17 01:59:45.599530 | orchestrator | 2025-04-17 01:59:45 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED 2025-04-17 01:59:45.601620 | orchestrator | 2025-04-17 01:59:45 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED 2025-04-17 01:59:45.603054 | orchestrator | 2025-04-17 01:59:45 | INFO  | Task 1522883f-ac70-47e6-9b31-bc7e04adfdfb is in state STARTED 2025-04-17 01:59:48.654874 | orchestrator | 2025-04-17 01:59:45 | INFO  | Wait 1 second(s) until the next check 2025-04-17 01:59:48.657192 | orchestrator | 2025-04-17 01:59:48 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED 2025-04-17 01:59:48.658133 | orchestrator | 2025-04-17 01:59:48 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED 2025-04-17 01:59:48.659653 | orchestrator | 2025-04-17 01:59:48 | INFO  | Task 75ffeb31-d9ca-453c-9a25-0e256f92dcb5 is in state STARTED 2025-04-17 01:59:48.663635 | orchestrator | 2025-04-17 01:59:48 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED 2025-04-17 01:59:48.664446 | orchestrator | 2025-04-17 01:59:48 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED 2025-04-17 01:59:48.666471 | orchestrator | 2025-04-17 01:59:48 | INFO  | Task 
2025-04-17 01:59:51.715326 | orchestrator | 2025-04-17 01:59:48 | INFO  | Wait 1 second(s) until the next check
2025-04-17 01:59:51.715553 | orchestrator | 2025-04-17 01:59:51 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:59:51.717192 | orchestrator | 2025-04-17 01:59:51 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED
2025-04-17 01:59:51.719740 | orchestrator | 2025-04-17 01:59:51 | INFO  | Task 75ffeb31-d9ca-453c-9a25-0e256f92dcb5 is in state STARTED
2025-04-17 01:59:51.721549 | orchestrator | 2025-04-17 01:59:51 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED
2025-04-17 01:59:51.723465 | orchestrator | 2025-04-17 01:59:51 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED
2025-04-17 01:59:51.725332 | orchestrator | 2025-04-17 01:59:51 | INFO  | Task 1522883f-ac70-47e6-9b31-bc7e04adfdfb is in state STARTED
2025-04-17 01:59:51.725540 | orchestrator | 2025-04-17 01:59:51 | INFO  | Wait 1 second(s) until the next check
2025-04-17 01:59:54.774969 | orchestrator | 2025-04-17 01:59:54 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:59:54.775223 | orchestrator | 2025-04-17 01:59:54 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED
2025-04-17 01:59:54.775865 | orchestrator | 2025-04-17 01:59:54 | INFO  | Task 75ffeb31-d9ca-453c-9a25-0e256f92dcb5 is in state STARTED
2025-04-17 01:59:54.777828 | orchestrator | 2025-04-17 01:59:54 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED
2025-04-17 01:59:54.778814 | orchestrator | 2025-04-17 01:59:54 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED
2025-04-17 01:59:54.780472 | orchestrator | 2025-04-17 01:59:54 | INFO  | Task 1522883f-ac70-47e6-9b31-bc7e04adfdfb is in state STARTED
2025-04-17 01:59:57.828828 | orchestrator | 2025-04-17 01:59:54 | INFO  | Wait 1 second(s) until the next check
2025-04-17 01:59:57.828946 | orchestrator | 2025-04-17 01:59:57 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 01:59:57.829301 | orchestrator | 2025-04-17 01:59:57 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED
2025-04-17 01:59:57.829336 | orchestrator | 2025-04-17 01:59:57 | INFO  | Task 75ffeb31-d9ca-453c-9a25-0e256f92dcb5 is in state STARTED
2025-04-17 01:59:57.829833 | orchestrator | 2025-04-17 01:59:57 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED
2025-04-17 01:59:57.831645 | orchestrator |
2025-04-17 01:59:57.831687 | orchestrator |
2025-04-17 01:59:57.831711 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-04-17 01:59:57.831758 | orchestrator |
2025-04-17 01:59:57.831783 | orchestrator | TASK [Check ceph keys] *********************************************************
2025-04-17 01:59:57.831807 | orchestrator | Thursday 17 April 2025 01:58:53 +0000 (0:00:00.136) 0:00:00.136 ********
2025-04-17 01:59:57.831821 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-04-17 01:59:57.831835 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-04-17 01:59:57.831849 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-04-17 01:59:57.831862 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-04-17 01:59:57.831877 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-04-17 01:59:57.831890 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-04-17 01:59:57.831904 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-04-17 01:59:57.831925 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-04-17 01:59:57.831939 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-04-17 01:59:57.831953 | orchestrator |
2025-04-17 01:59:57.831966 | orchestrator | TASK [Set _fetch_ceph_keys fact] ***********************************************
2025-04-17 01:59:57.831980 | orchestrator | Thursday 17 April 2025 01:58:56 +0000 (0:00:02.864) 0:00:03.000 ********
2025-04-17 01:59:57.831994 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-04-17 01:59:57.832008 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-04-17 01:59:57.832021 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-04-17 01:59:57.832035 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-04-17 01:59:57.832048 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-04-17 01:59:57.832062 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-04-17 01:59:57.832076 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-04-17 01:59:57.832089 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-04-17 01:59:57.832103 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-04-17 01:59:57.832117 | orchestrator |
2025-04-17 01:59:57.832130 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] ***
2025-04-17 01:59:57.832144 | orchestrator | Thursday 17 April 2025 01:58:56 +0000 (0:00:00.154) 0:00:03.237 ********
2025-04-17 01:59:57.832158 | orchestrator | ok: [testbed-manager] => {
2025-04-17 01:59:57.832174 | orchestrator |     "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete."
2025-04-17 01:59:57.832189 | orchestrator | }
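The notice above explains why the next task stays silent: it starts a nested ansible-playbook run on testbed-manager, and only the final changed/failed status surfaces in this job log. A sketch of that pattern in Python, with a hypothetical playbook path and inventory (the real invocation inside OSISM is not visible here):

    import subprocess

    def fetch_ceph_keys(playbook="/opt/configuration/ansible/ceph-fetch-keys.yml",
                        inventory="/opt/configuration/inventory"):
        # Both paths are illustrative; the log only shows that a nested
        # playbook runs on the manager and takes about half a minute.
        result = subprocess.run(
            ["ansible-playbook", "-i", inventory, playbook],
            capture_output=True,  # nested output does not reach the job log
            text=True,
        )
        result.check_returncode()  # surface a failure as an exception
        return result.stdout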
2025-04-17 01:59:57.832204 | orchestrator |
2025-04-17 01:59:57.832218 | orchestrator | TASK [Fetch ceph keys from the first monitor node] *****************************
2025-04-17 01:59:57.832234 | orchestrator | Thursday 17 April 2025 01:58:56 +0000 (0:00:00.154) 0:00:03.392 ********
2025-04-17 01:59:57.832249 | orchestrator | changed: [testbed-manager]
2025-04-17 01:59:57.832264 | orchestrator |
2025-04-17 01:59:57.832279 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] ***********
2025-04-17 01:59:57.832295 | orchestrator | Thursday 17 April 2025 01:59:29 +0000 (0:00:33.407) 0:00:36.799 ********
2025-04-17 01:59:57.832311 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'})
2025-04-17 01:59:57.832326 | orchestrator |
2025-04-17 01:59:57.832341 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ********************
2025-04-17 01:59:57.832357 | orchestrator | Thursday 17 April 2025 01:59:30 +0000 (0:00:00.442) 0:00:37.242 ********
2025-04-17 01:59:57.832404 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'})
2025-04-17 01:59:57.832422 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'})
2025-04-17 01:59:57.832438 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'})
2025-04-17 01:59:57.832454 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'})
2025-04-17 01:59:57.832470 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'})
2025-04-17 01:59:57.832497 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'})
2025-04-17 02:00:00.861415 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'})
2025-04-17 02:00:00.861504 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'})
2025-04-17 02:00:00.861517 | orchestrator |
2025-04-17 02:00:00.861529 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] *******************
2025-04-17 02:00:00.861539 | orchestrator | Thursday 17 April 2025 01:59:33 +0000 (0:00:02.899) 0:00:40.141 ********
2025-04-17 02:00:00.861549 | orchestrator | skipping: [testbed-manager]
2025-04-17 02:00:00.861559 | orchestrator |
2025-04-17 02:00:00.861568 | orchestrator | PLAY RECAP *********************************************************************
2025-04-17 02:00:00.861578 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-04-17 02:00:00.861588 | orchestrator |
2025-04-17 02:00:00.861597 | orchestrator | Thursday 17 April 2025 01:59:33 +0000 (0:00:00.028) 0:00:40.170 ********
2025-04-17 02:00:00.861607 | orchestrator | ===============================================================================
2025-04-17 02:00:00.861692 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 33.41s
2025-04-17 02:00:00.861707 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.90s
2025-04-17 02:00:00.861717 | orchestrator | Check ceph keys --------------------------------------------------------- 2.86s
2025-04-17 02:00:00.861726 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.44s
2025-04-17 02:00:00.861736 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.24s
2025-04-17 02:00:00.861746 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.15s
2025-04-17 02:00:00.861768 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.03s
2025-04-17 02:00:00.861778 | orchestrator |
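The copy tasks above fan a handful of fetched keyrings out to fixed destinations in the configuration repository; note that ceph.client.cinder.keyring is deliberately copied three times (cinder-volume, cinder-backup, and nova overlays). The same mapping in plain Python, with the src/dest pairs taken from the log and only the fetch directory assumed:

    import shutil
    from pathlib import Path

    OVERLAYS = "/opt/configuration/environments/kolla/files/overlays"
    KEY_MAP = [
        ("ceph.client.cinder.keyring", f"{OVERLAYS}/cinder/cinder-volume/ceph.client.cinder.keyring"),
        ("ceph.client.cinder.keyring", f"{OVERLAYS}/cinder/cinder-backup/ceph.client.cinder.keyring"),
        ("ceph.client.cinder-backup.keyring", f"{OVERLAYS}/cinder/cinder-backup/ceph.client.cinder-backup.keyring"),
        ("ceph.client.cinder.keyring", f"{OVERLAYS}/nova/ceph.client.cinder.keyring"),
        ("ceph.client.nova.keyring", f"{OVERLAYS}/nova/ceph.client.nova.keyring"),
        ("ceph.client.glance.keyring", f"{OVERLAYS}/glance/ceph.client.glance.keyring"),
        ("ceph.client.gnocchi.keyring", f"{OVERLAYS}/gnocchi/ceph.client.gnocchi.keyring"),
        ("ceph.client.manila.keyring", f"{OVERLAYS}/manila/ceph.client.manila.keyring"),
    ]

    def copy_keyrings(fetch_dir):
        # fetch_dir is hypothetical: wherever the previous task left the keys.
        for src, dest in KEY_MAP:
            Path(dest).parent.mkdir(parents=True, exist_ok=True)
            shutil.copy(Path(fetch_dir) / src, dest)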
2025-04-17 02:00:00.861788 | orchestrator | 2025-04-17 01:59:57 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED
2025-04-17 02:00:00.861798 | orchestrator | 2025-04-17 01:59:57 | INFO  | Task 1522883f-ac70-47e6-9b31-bc7e04adfdfb is in state SUCCESS
2025-04-17 02:00:00.861807 | orchestrator | 2025-04-17 01:59:57 | INFO  | Wait 1 second(s) until the next check
2025-04-17 02:00:00.861828 | orchestrator | 2025-04-17 02:00:00 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 02:00:00.862189 | orchestrator | 2025-04-17 02:00:00 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED
2025-04-17 02:00:00.862211 | orchestrator | 2025-04-17 02:00:00 | INFO  | Task 7e444435-6cf0-4826-9288-1bc190736609 is in state STARTED
2025-04-17 02:00:00.862915 | orchestrator | 2025-04-17 02:00:00 | INFO  | Task 75ffeb31-d9ca-453c-9a25-0e256f92dcb5 is in state STARTED
2025-04-17 02:00:00.863617 | orchestrator | 2025-04-17 02:00:00 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED
2025-04-17 02:00:00.865326 | orchestrator | 2025-04-17 02:00:00 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED
2025-04-17 02:00:03.907612 | orchestrator | 2025-04-17 02:00:00 | INFO  | Wait 1 second(s) until the next check
2025-04-17 02:00:03.907741 | orchestrator | 2025-04-17 02:00:03 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 02:00:03.908052 | orchestrator | 2025-04-17 02:00:03 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED
2025-04-17 02:00:03.908086 | orchestrator | 2025-04-17 02:00:03 | INFO  | Task 7e444435-6cf0-4826-9288-1bc190736609 is in state STARTED
2025-04-17 02:00:03.908776 | orchestrator | 2025-04-17 02:00:03 | INFO  | Task 75ffeb31-d9ca-453c-9a25-0e256f92dcb5 is in state STARTED
2025-04-17 02:00:03.909216 | orchestrator | 2025-04-17 02:00:03 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED
2025-04-17 02:00:03.910575 | orchestrator | 2025-04-17 02:00:03 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED
2025-04-17 02:00:06.939671 | orchestrator | 2025-04-17 02:00:03 | INFO  | Wait 1 second(s) until the next check
2025-04-17 02:00:06.939786 | orchestrator | 2025-04-17 02:00:06 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 02:00:06.941398 | orchestrator | 2025-04-17 02:00:06 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED
2025-04-17 02:00:06.944652 | orchestrator | 2025-04-17 02:00:06 | INFO  | Task 7e444435-6cf0-4826-9288-1bc190736609 is in state STARTED
2025-04-17 02:00:06.946094 | orchestrator | 2025-04-17 02:00:06 | INFO  | Task 75ffeb31-d9ca-453c-9a25-0e256f92dcb5 is in state STARTED
2025-04-17 02:00:06.946125 | orchestrator | 2025-04-17 02:00:06 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED
2025-04-17 02:00:06.947476 | orchestrator | 2025-04-17 02:00:06 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED
2025-04-17 02:00:06.947690 | orchestrator | 2025-04-17 02:00:06 | INFO  | Wait 1 second(s) until the next check
2025-04-17 02:00:09.979802 | orchestrator | 2025-04-17 02:00:09 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 02:00:09.980166 | orchestrator | 2025-04-17 02:00:09 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED
2025-04-17 02:00:09.981704 | orchestrator | 2025-04-17 02:00:09 | INFO  | Task 7e444435-6cf0-4826-9288-1bc190736609 is in state STARTED
2025-04-17 02:00:09.987045 | orchestrator | 2025-04-17 02:00:09 | INFO  | Task 75ffeb31-d9ca-453c-9a25-0e256f92dcb5 is in state STARTED
2025-04-17 02:00:09.987762 | orchestrator | 2025-04-17 02:00:09 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED
2025-04-17 02:00:09.989528 | orchestrator | 2025-04-17 02:00:09 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED
2025-04-17 02:00:13.033169 | orchestrator | 2025-04-17 02:00:09 | INFO  | Wait 1 second(s) until the next check
2025-04-17 02:00:13.033296 | orchestrator | 2025-04-17 02:00:13 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 02:00:13.033806 | orchestrator | 2025-04-17 02:00:13 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED
2025-04-17 02:00:13.034760 | orchestrator | 2025-04-17 02:00:13 | INFO  | Task 7e444435-6cf0-4826-9288-1bc190736609 is in state STARTED
2025-04-17 02:00:13.036515 | orchestrator | 2025-04-17 02:00:13 | INFO  | Task 75ffeb31-d9ca-453c-9a25-0e256f92dcb5 is in state STARTED
2025-04-17 02:00:16.066292 | orchestrator | 2025-04-17 02:00:13 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED
2025-04-17 02:00:16.066448 | orchestrator | 2025-04-17 02:00:13 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED
2025-04-17 02:00:16.066470 | orchestrator | 2025-04-17 02:00:13 | INFO  | Wait 1 second(s) until the next check
2025-04-17 02:00:16.066500 | orchestrator | 2025-04-17 02:00:16 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 02:00:16.068812 | orchestrator | 2025-04-17 02:00:16 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED
2025-04-17 02:00:16.069159 | orchestrator | 2025-04-17 02:00:16 | INFO  | Task 7e444435-6cf0-4826-9288-1bc190736609 is in state STARTED
2025-04-17 02:00:16.069520 | orchestrator | 2025-04-17 02:00:16 | INFO  | Task 75ffeb31-d9ca-453c-9a25-0e256f92dcb5 is in state STARTED
2025-04-17 02:00:16.069991 | orchestrator | 2025-04-17 02:00:16 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED
2025-04-17 02:00:16.070481 | orchestrator | 2025-04-17 02:00:16 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED
2025-04-17 02:00:19.092709 | orchestrator | 2025-04-17 02:00:16 | INFO  | Wait 1 second(s) until the next check
2025-04-17 02:00:19.092833 | orchestrator | 2025-04-17 02:00:19 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 02:00:19.093411 | orchestrator | 2025-04-17 02:00:19 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED
2025-04-17 02:00:19.093450 | orchestrator | 2025-04-17 02:00:19 | INFO  | Task 7e444435-6cf0-4826-9288-1bc190736609 is in state STARTED
2025-04-17 02:00:19.093710 | orchestrator | 2025-04-17 02:00:19 | INFO  | Task 75ffeb31-d9ca-453c-9a25-0e256f92dcb5 is in state STARTED
2025-04-17 02:00:19.094218 | orchestrator | 2025-04-17 02:00:19 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED
2025-04-17 02:00:19.094747 | orchestrator | 2025-04-17 02:00:19 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED
2025-04-17 02:00:22.131571 | orchestrator | 2025-04-17 02:00:19 | INFO  | Wait 1 second(s) until the next check
2025-04-17 02:00:22.131719 | orchestrator | 2025-04-17 02:00:22 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 02:00:22.133922 | orchestrator | 2025-04-17 02:00:22 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED
2025-04-17 02:00:22.133970 | orchestrator | 2025-04-17 02:00:22 | INFO  | Task 7e444435-6cf0-4826-9288-1bc190736609 is in state STARTED
2025-04-17 02:00:22.135071 | orchestrator | 2025-04-17 02:00:22 | INFO  | Task 75ffeb31-d9ca-453c-9a25-0e256f92dcb5 is in state STARTED
2025-04-17 02:00:22.135115 | orchestrator | 2025-04-17 02:00:22 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED
2025-04-17 02:00:22.137107 | orchestrator | 2025-04-17 02:00:22 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED
2025-04-17 02:00:25.170881 | orchestrator | 2025-04-17 02:00:22 | INFO  | Wait 1 second(s) until the next check
2025-04-17 02:00:25.171202 | orchestrator | 2025-04-17 02:00:25 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 02:00:25.171947 | orchestrator | 2025-04-17 02:00:25 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED
2025-04-17 02:00:25.172014 | orchestrator | 2025-04-17 02:00:25 | INFO  | Task 7e444435-6cf0-4826-9288-1bc190736609 is in state STARTED
2025-04-17 02:00:25.172890 | orchestrator | 2025-04-17 02:00:25 | INFO  | Task 75ffeb31-d9ca-453c-9a25-0e256f92dcb5 is in state SUCCESS
2025-04-17 02:00:25.174652 | orchestrator | 2025-04-17 02:00:25 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED
2025-04-17 02:00:25.175380 | orchestrator | 2025-04-17 02:00:25 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED
2025-04-17 02:00:28.212044 | orchestrator | 2025-04-17 02:00:25 | INFO  | Wait 1 second(s) until the next check
2025-04-17 02:00:28.212164 | orchestrator | 2025-04-17 02:00:28 | INFO  | Task e0b8709f-1bcf-4f73-b727-9acc58049e77 is in state STARTED
2025-04-17 02:00:28.213309 | orchestrator | 2025-04-17 02:00:28 | INFO  | Task 9c8aeab8-cd25-436c-ae88-3bd522d5b460 is in state STARTED
2025-04-17 02:00:28.213380 | orchestrator | 2025-04-17 02:00:28 | INFO  | Task 7e444435-6cf0-4826-9288-1bc190736609 is in state STARTED
2025-04-17 02:00:28.215076 | orchestrator | 2025-04-17 02:00:28 | INFO  | Task 64ec39d2-0457-4b32-a4be-4c17224b35ac is in state STARTED
2025-04-17 02:00:28.215742 | orchestrator | 2025-04-17 02:00:28 | INFO  | Task 4df3343a-3480-4584-b457-c63f1f3e1ffd is in state STARTED
2025-04-17 02:00:28.216093 | orchestrator | 2025-04-17 02:00:28 | INFO  | Task 30afd575-0ac6-4769-b010-b713c07d424c is in state STARTED
2025-04-17 02:00:30.439202 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-04-17 02:00:30.447855 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-04-17 02:00:31.192984 |
2025-04-17 02:00:31.193161 | PLAY [Post output play]
2025-04-17 02:00:31.222932 |
2025-04-17 02:00:31.223087 | LOOP [stage-output : Register sources]
2025-04-17 02:00:31.313817 |
2025-04-17 02:00:31.314159 | TASK [stage-output : Check sudo]
2025-04-17 02:00:32.067768 | orchestrator | sudo: a password is required
2025-04-17 02:00:32.360615 | orchestrator | ok: Runtime: 0:00:00.017919
2025-04-17 02:00:32.377456 |
2025-04-17 02:00:32.377615 | LOOP [stage-output : Set source and destination for files and folders]
2025-04-17 02:00:32.417178 |
2025-04-17 02:00:32.417405 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-04-17 02:00:32.510944 | orchestrator | ok
2025-04-17 02:00:32.522303 |
2025-04-17 02:00:32.522438 | LOOP [stage-output : Ensure target folders exist]
2025-04-17 02:00:32.974769 | orchestrator | ok: "docs"
2025-04-17 02:00:32.975161 |
2025-04-17 02:00:33.206776 | orchestrator | ok: "artifacts"
2025-04-17 02:00:33.422024 | orchestrator | ok: "logs"
2025-04-17 02:00:33.446602 |
2025-04-17 02:00:33.446842 | LOOP [stage-output : Copy files and folders to staging folder]
2025-04-17 02:00:33.491872 |
2025-04-17 02:00:33.492127 | TASK [stage-output : Make all log files readable]
2025-04-17 02:00:33.801714 | orchestrator | ok
2025-04-17 02:00:33.812998 |
2025-04-17 02:00:33.813143 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-04-17 02:00:33.859837 | orchestrator | skipping: Conditional result was False
2025-04-17 02:00:33.875938 |
2025-04-17 02:00:33.876091 | TASK [stage-output : Discover log files for compression]
2025-04-17 02:00:33.901626 | orchestrator | skipping: Conditional result was False
2025-04-17 02:00:33.920593 |
2025-04-17 02:00:33.920811 | LOOP [stage-output : Archive everything from logs]
2025-04-17 02:00:34.038843 |
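The deploy playbook hit the job's time limit (RESULT_TIMED_OUT), so Zuul moves straight into the post-run phase and stages whatever output exists. One of the stage-output steps above, "Make all log files readable", can be approximated in a few lines; the exact mode bits the role applies are not visible in this log, so adding group/world read (plus execute on directories, so they can be traversed) is an assumption:

    import os
    import stat

    def make_logs_readable(logs_root):
        # Add read permission for group/other on every staged file, and
        # traverse permission on every directory (assumed behavior).
        for dirpath, dirnames, filenames in os.walk(logs_root):
            for name in dirnames:
                path = os.path.join(dirpath, name)
                os.chmod(path, os.stat(path).st_mode
                         | stat.S_IRGRP | stat.S_IROTH
                         | stat.S_IXGRP | stat.S_IXOTH)
            for name in filenames:
                path = os.path.join(dirpath, name)
                os.chmod(path, os.stat(path).st_mode
                         | stat.S_IRGRP | stat.S_IROTH)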
2025-04-17 02:00:34.039080 | PLAY [Post cleanup play]
2025-04-17 02:00:34.081962 |
2025-04-17 02:00:34.082142 | TASK [Set cloud fact (Zuul deployment)]
2025-04-17 02:00:34.151606 | orchestrator | ok
2025-04-17 02:00:34.164091 |
2025-04-17 02:00:34.164233 | TASK [Set cloud fact (local deployment)]
2025-04-17 02:00:34.199446 | orchestrator | skipping: Conditional result was False
2025-04-17 02:00:34.213623 |
2025-04-17 02:00:34.213782 | TASK [Clean the cloud environment]
2025-04-17 02:00:34.846778 | orchestrator | 2025-04-17 02:00:34 - clean up servers
2025-04-17 02:00:35.782000 | orchestrator | 2025-04-17 02:00:35 - testbed-manager
2025-04-17 02:00:35.889931 | orchestrator | 2025-04-17 02:00:35 - testbed-node-4
2025-04-17 02:00:35.996428 | orchestrator | 2025-04-17 02:00:35 - testbed-node-5
2025-04-17 02:00:36.085715 | orchestrator | 2025-04-17 02:00:36 - testbed-node-2
2025-04-17 02:00:36.197426 | orchestrator | 2025-04-17 02:00:36 - testbed-node-1
2025-04-17 02:00:36.297519 | orchestrator | 2025-04-17 02:00:36 - testbed-node-0
2025-04-17 02:00:36.411096 | orchestrator | 2025-04-17 02:00:36 - testbed-node-3
2025-04-17 02:00:36.528366 | orchestrator | 2025-04-17 02:00:36 - clean up keypairs
2025-04-17 02:00:36.546447 | orchestrator | 2025-04-17 02:00:36 - testbed
2025-04-17 02:00:36.574802 | orchestrator | 2025-04-17 02:00:36 - wait for servers to be gone
2025-04-17 02:00:43.776671 | orchestrator | 2025-04-17 02:00:43 - clean up ports
2025-04-17 02:00:43.990485 | orchestrator | 2025-04-17 02:00:43 - 16ac224b-16d5-4e5e-8697-b25e28f8dd8d
2025-04-17 02:00:44.173121 | orchestrator | 2025-04-17 02:00:44 - 2211eaf3-57ce-43a6-9ee3-7f3964e36264
2025-04-17 02:00:44.400193 | orchestrator | 2025-04-17 02:00:44 - 35ec5557-34ff-4794-92ee-3ff62fcaed3b
2025-04-17 02:00:44.657281 | orchestrator | 2025-04-17 02:00:44 - 58982994-dd60-4369-88da-6f5764f54406
2025-04-17 02:00:45.001576 | orchestrator | 2025-04-17 02:00:45 - aca7850d-0fe3-4981-87de-312b710b5df6
2025-04-17 02:00:45.202398 | orchestrator | 2025-04-17 02:00:45 - ee9fb66b-aa86-4a92-bd68-645dbc833233
2025-04-17 02:00:45.407767 | orchestrator | 2025-04-17 02:00:45 - f7e45ef1-3323-4bfd-aa05-46c6aac5db31
2025-04-17 02:00:45.651886 | orchestrator | 2025-04-17 02:00:45 - clean up volumes
2025-04-17 02:00:45.806848 | orchestrator | 2025-04-17 02:00:45 - testbed-volume-5-node-base
2025-04-17 02:00:45.847946 | orchestrator | 2025-04-17 02:00:45 - testbed-volume-4-node-base
2025-04-17 02:00:45.889989 | orchestrator | 2025-04-17 02:00:45 - testbed-volume-2-node-base
2025-04-17 02:00:45.934650 | orchestrator | 2025-04-17 02:00:45 - testbed-volume-manager-base
2025-04-17 02:00:45.981616 | orchestrator | 2025-04-17 02:00:45 - testbed-volume-0-node-base
2025-04-17 02:00:46.021873 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-1-node-base
2025-04-17 02:00:46.068575 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-4-node-4
2025-04-17 02:00:46.110968 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-10-node-4
2025-04-17 02:00:46.150761 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-3-node-base
2025-04-17 02:00:46.203393 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-16-node-4
2025-04-17 02:00:46.246960 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-9-node-3
2025-04-17 02:00:46.286834 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-7-node-1
2025-04-17 02:00:46.331926 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-5-node-5
2025-04-17 02:00:46.379388 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-11-node-5
2025-04-17 02:00:46.421744 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-6-node-0
2025-04-17 02:00:46.465892 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-14-node-2
2025-04-17 02:00:46.516935 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-15-node-3
2025-04-17 02:00:46.559773 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-13-node-1
2025-04-17 02:00:46.602546 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-17-node-5
2025-04-17 02:00:46.645415 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-0-node-0
2025-04-17 02:00:46.687977 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-12-node-0
2025-04-17 02:00:46.735432 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-2-node-2
2025-04-17 02:00:46.783101 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-1-node-1
2025-04-17 02:00:46.828020 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-8-node-2
2025-04-17 02:00:46.881800 | orchestrator | 2025-04-17 02:00:46 - testbed-volume-3-node-3
2025-04-17 02:00:46.932499 | orchestrator | 2025-04-17 02:00:46 - disconnect routers
2025-04-17 02:00:47.039171 | orchestrator | 2025-04-17 02:00:47 - testbed
2025-04-17 02:00:47.995813 | orchestrator | 2025-04-17 02:00:47 - clean up subnets
2025-04-17 02:00:48.036896 | orchestrator | 2025-04-17 02:00:48 - subnet-testbed-management
2025-04-17 02:00:48.179978 | orchestrator | 2025-04-17 02:00:48 - clean up networks
2025-04-17 02:00:48.404492 | orchestrator | 2025-04-17 02:00:48 - net-testbed-management
2025-04-17 02:00:48.679995 | orchestrator | 2025-04-17 02:00:48 - clean up security groups
2025-04-17 02:00:48.715971 | orchestrator | 2025-04-17 02:00:48 - testbed-management
2025-04-17 02:00:48.822760 | orchestrator | 2025-04-17 02:00:48 - testbed-node
2025-04-17 02:00:48.924777 | orchestrator | 2025-04-17 02:00:48 - clean up floating ips
2025-04-17 02:00:48.951832 | orchestrator | 2025-04-17 02:00:48 - 81.163.193.47
2025-04-17 02:00:49.391488 | orchestrator | 2025-04-17 02:00:49 - clean up routers
2025-04-17 02:00:49.488627 | orchestrator | 2025-04-17 02:00:49 - testbed
2025-04-17 02:00:50.279649 | orchestrator | changed
2025-04-17 02:00:50.317879 |
2025-04-17 02:00:50.317996 | PLAY RECAP
2025-04-17 02:00:50.318050 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-04-17 02:00:50.318074 |
2025-04-17 02:00:50.433494 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
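The cleanup task above walks the testbed resources in dependency order: servers and keypairs first, then (after waiting for the servers to disappear) ports, volumes, router interfaces, subnets, networks, security groups, floating IPs, and finally the router itself. A condensed openstacksdk sketch of the same order; the resource names match the log, but error handling and the wait loop are omitted, and the name-prefix filtering is an assumption:

    import openstack

    def clean_testbed(cloud="testbed"):
        conn = openstack.connect(cloud=cloud)
        for server in conn.compute.servers():            # clean up servers
            if server.name.startswith("testbed"):
                conn.compute.delete_server(server)
        conn.compute.delete_keypair("testbed")           # clean up keypairs
        # ... wait for servers to be gone ...
        for port in conn.network.ports():                # clean up ports
            conn.network.delete_port(port)
        for volume in conn.block_storage.volumes():      # clean up volumes
            if volume.name.startswith("testbed-volume"):
                conn.block_storage.delete_volume(volume)
        router = conn.network.find_router("testbed")
        subnet = conn.network.find_subnet("subnet-testbed-management")
        if router and subnet:                            # disconnect routers
            conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
        if subnet:                                       # clean up subnets
            conn.network.delete_subnet(subnet)
        network = conn.network.find_network("net-testbed-management")
        if network:                                      # clean up networks
            conn.network.delete_network(network)
        for name in ("testbed-management", "testbed-node"):  # security groups
            group = conn.network.find_security_group(name)
            if group:
                conn.network.delete_security_group(group)
        for ip in conn.network.ips():                    # clean up floating ips
            conn.network.delete_ip(ip)
        if router:                                       # clean up routers
            conn.network.delete_router(router)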
2025-04-17 02:00:50.441396 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-04-17 02:00:51.145049 |
2025-04-17 02:00:51.145213 | PLAY [Base post-fetch]
2025-04-17 02:00:51.175216 |
2025-04-17 02:00:51.175368 | TASK [fetch-output : Set log path for multiple nodes]
2025-04-17 02:00:51.252313 | orchestrator | skipping: Conditional result was False
2025-04-17 02:00:51.268875 |
2025-04-17 02:00:51.269077 | TASK [fetch-output : Set log path for single node]
2025-04-17 02:00:51.331193 | orchestrator | ok
2025-04-17 02:00:51.341223 |
2025-04-17 02:00:51.341352 | LOOP [fetch-output : Ensure local output dirs]
2025-04-17 02:00:51.829179 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/4c4b8d7ec84f46ec8e44de13e39c3e5a/work/logs"
2025-04-17 02:00:52.106770 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/4c4b8d7ec84f46ec8e44de13e39c3e5a/work/artifacts"
2025-04-17 02:00:52.389301 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/4c4b8d7ec84f46ec8e44de13e39c3e5a/work/docs"
2025-04-17 02:00:52.408732 |
2025-04-17 02:00:52.408883 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-04-17 02:00:53.231482 | orchestrator | changed: .d..t...... ./
2025-04-17 02:00:53.231893 | orchestrator | changed: All items complete
2025-04-17 02:00:53.231956 |
2025-04-17 02:00:53.835833 | orchestrator | changed: .d..t...... ./
2025-04-17 02:00:54.452536 | orchestrator | changed: .d..t...... ./
2025-04-17 02:00:54.487959 |
2025-04-17 02:00:54.488113 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-04-17 02:00:54.533726 | orchestrator | skipping: Conditional result was False
2025-04-17 02:00:54.541173 | orchestrator | skipping: Conditional result was False
2025-04-17 02:00:54.592933 |
2025-04-17 02:00:54.593062 | PLAY RECAP
2025-04-17 02:00:54.593145 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-04-17 02:00:54.593189 |
2025-04-17 02:00:54.717190 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-04-17 02:00:54.725260 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-04-17 02:00:55.442742 |
2025-04-17 02:00:55.442911 | PLAY [Base post]
2025-04-17 02:00:55.473275 |
2025-04-17 02:00:55.473431 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-04-17 02:00:56.401877 | orchestrator | changed
2025-04-17 02:00:56.440140 |
2025-04-17 02:00:56.440269 | PLAY RECAP
2025-04-17 02:00:56.440339 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-04-17 02:00:56.440437 |
2025-04-17 02:00:56.552301 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-04-17 02:00:56.560805 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-04-17 02:00:57.348184 |
2025-04-17 02:00:57.348361 | PLAY [Base post-logs]
2025-04-17 02:00:57.365103 |
2025-04-17 02:00:57.365251 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-04-17 02:00:57.824449 | localhost | changed
2025-04-17 02:00:57.828158 |
2025-04-17 02:00:57.828289 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-04-17 02:00:57.862440 | localhost | ok
2025-04-17 02:00:57.877504 |
2025-04-17 02:00:57.877794 | TASK [Set zuul-log-path fact]
2025-04-17 02:00:57.900022 | localhost | ok
2025-04-17 02:00:57.913845 |
2025-04-17 02:00:57.913965 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-04-17 02:00:57.945077 | localhost | ok
2025-04-17 02:00:57.954530 |
2025-04-17 02:00:57.954702 | TASK [upload-logs : Create log directories]
2025-04-17 02:00:58.456430 | localhost | changed
2025-04-17 02:00:58.461171 |
2025-04-17 02:00:58.461287 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-04-17 02:00:59.039792 | localhost -> localhost | ok: Runtime: 0:00:00.007642
2025-04-17 02:00:59.051969 |
2025-04-17 02:00:59.052173 | TASK [upload-logs : Upload logs to log server]
2025-04-17 02:00:59.638542 | localhost | Output suppressed because no_log was given
2025-04-17 02:00:59.642559 |
2025-04-17 02:00:59.642754 | LOOP [upload-logs : Compress console log and json output]
2025-04-17 02:00:59.714857 | localhost | skipping: Conditional result was False
2025-04-17 02:00:59.731868 | localhost | skipping: Conditional result was False
2025-04-17 02:00:59.749573 |
2025-04-17 02:00:59.749872 | LOOP [upload-logs : Upload compressed console log and json output]
2025-04-17 02:00:59.842333 | localhost | skipping: Conditional result was False
2025-04-17 02:00:59.843204 |
2025-04-17 02:00:59.854946 | localhost | skipping: Conditional result was False
2025-04-17 02:00:59.866839 |
2025-04-17 02:00:59.867037 | LOOP [upload-logs : Upload console log and json output]
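The log ends here, in the middle of the final upload loops: the manifest has been generated, the logs have been uploaded (output suppressed by no_log), and the remaining loop items are still being evaluated when the stream cuts off. For illustration, a minimal stand-in for what a manifest generator of this kind produces; the real generate-zuul-manifest role emits a richer structure than this sketch:

    import json
    import os

    def build_manifest(log_root, output="zuul-manifest.json"):
        # Record relative path and size for every staged file: the bare
        # minimum a log-browsing UI needs to render a file index.
        entries = []
        for dirpath, _dirnames, filenames in os.walk(log_root):
            for name in sorted(filenames):
                path = os.path.join(dirpath, name)
                entries.append({
                    "name": os.path.relpath(path, log_root),
                    "size": os.path.getsize(path),
                })
        with open(os.path.join(log_root, output), "w") as fh:
            json.dump({"files": entries}, fh, indent=2)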